Published: 10 April 2020
General Strategy Outline
The following is the general outline of how the workflow will look:
(1) Create an AMI with NodeJS and the CodeDeploy agent installed
(2) Create an appspec.yml file (and related script files) for CodeDeploy to use during deployment
(3) Create an Auto Scaling Group (ASG) from a Launch Configuration, with an Application Load Balancer (ALB) in front
(4) Push a new code revision to Bitbucket or GitHub, which will be picked up by CircleCI
(5) Once all tests pass in CircleCI, zip the code and send it to S3
(6) CircleCI then tells AWS CodeDeploy to start a new deployment using the code bundle in S3
(7) AWS CodeDeploy pulls the code down from S3, puts it on newly created EC2 instances, and redirects all traffic to them. Later, it terminates all instances running the old copy of the code
Creating a NodeJS Server on Single EC2 Instance
Let's assume we have a NodeJS server which listens on port 4000 and sends the following response back whenever a GET request is made to its home route:
{ "version": "1.0.0" }
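If you already have this server running locally (say via node index.js), a quick way to confirm the behaviour before touching AWS is:
# assumes the server is running locally on port 4000
curl -s http://localhost:4000/
# => { "version": "1.0.0" }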
We now need to create an EC2 Linux instance on AWS where we can set up this server manually for the first time. This can be done by connecting to the EC2 instance via SSH (make sure the EC2 security groups allow this) and installing NodeJS and Git on the server:
# for nvm installation
curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.34.0/install.sh | bash
# for nvm activation
. ~/.nvm/nvm.sh
# for nodeJS installation
nvm install node
# check node is installed
node -v
# for GIT installation
sudo yum install git
#OR
# sudo su
# yum install git
Using Git, we can now clone the repository for our NodeJS server onto the EC2 instance, run npm install to install any dependencies, and then run our start script to start the server.
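As a rough sketch (the repository URL is a placeholder, and npm start assumes a "start" script in package.json, e.g. "node index.js"):
# clone the repository and start the server manually
git clone https://github.com/your-user/my-app.git /home/ec2-user/my-app
cd /home/ec2-user/my-app
npm install
npm start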
Installing the CodeDeploy Agent
Installing the CodeDeploy agent is needed so CodeDeploy can pull our code from S3 onto our EC2 instances. This still requires giving AWS CodeDeploy an IAM role so it can access S3; we can use the default AWSCodeDeployRole for this purpose.
We also need another IAM role for our EC2 instances, because ultimately the CodeDeploy agent runs inside EC2. So we must allow the EC2 instances access to the S3 folder where our code and subsequent revisions will be stored.
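For example, assuming the instance role is named something like ec2-codedeploy-role (a placeholder), attaching the AWS managed read-only S3 policy is enough for the agent to fetch revisions; a sketch via the CLI:
# attach S3 read access to the (hypothetical) EC2 instance role
aws iam attach-role-policy \
--role-name ec2-codedeploy-role \
--policy-arn arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess
# you can also scope this down to just your deployment bucket with a custom policy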
Following are the steps to install the CodeDeploy agent on EC2:
sudo yum update
sudo yum install ruby
sudo yum install wget
cd /home/ec2-user # if you aren't in that directory already
wget https://bucket-name.s3.region-identifier.amazonaws.com/latest/install
chmod +x ./install
sudo ./install auto
# check agent is installed properly
sudo service codedeploy-agent status
# output can be - The AWS CodeDeploy agent is running
where,
bucket-name --> the name of the Amazon S3 bucket that contains the CodeDeploy Resource Kit files for your region (for the Sydney region: aws-codedeploy-ap-southeast-2)
region-identifier --> the identifier for your region (for Sydney: ap-southeast-2)
So the URL for the Sydney region becomes https://aws-codedeploy-ap-southeast-2.s3.ap-southeast-2.amazonaws.com/latest/install
See the CodeDeploy Resource Kit reference linked below for the full list of bucket names and region identifiers for all regions.
At this point, our server has everything it needs, so let's take an image snapshot and create an Amazon Machine Image (AMI) out of it for future use. We can then shut down this single EC2 instance.
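This can be done from the console (Actions --> Image --> Create Image) or, as a sketch, via the CLI (the instance ID and image name here are placeholders):
aws ec2 create-image \
--instance-id i-0123456789abcdef0 \
--name "nodejs-codedeploy-base" \
--no-reboot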
Creating Auto Scaling Group (ASG)
To create an ASG, we first need to create a Launch Configuration (LC), in which we specify what kind of instance (i.e. which AMI) we want to launch. We will use our custom AMI from the previous step for this and will also set up some other related settings like Target Groups.
NOTE: Make sure to attach the IAM role you created for your EC2 instances (to access S3) during the Launch Configuration step. If you forget, you can also add it later, but you will have to go through each server one at a time and attach the role manually.
Target Groups contain properties about the servers they are applied to. For example, we can say all servers running inside TargetGroupXYZ will be listening on port 4000, and for their health check settings we want to send 5 consecutive requests, spaced at 10s intervals, all of which need to succeed. Target groups are generally created ahead of time and then used in LCs.
NOTE: Make sure your health check is configured to ping a path your server actually serves. The health check will ping GET / by default, so make sure your server either supports this path or use a different path.
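As a sketch, a target group matching the settings described above could be created ahead of time like this (the name, VPC ID, and threshold values are placeholder assumptions):
aws elbv2 create-target-group \
--name nodejs-tg \
--protocol HTTP \
--port 4000 \
--vpc-id vpc-0123456789abcdef0 \
--health-check-path / \
--health-check-interval-seconds 10 \
--healthy-threshold-count 5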
Using the launch configuration and after specifying our desired scaling policies, we can now launch our ASG.
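For reference, a minimal CLI sketch of both steps (all names, the AMI ID, the instance profile, subnets, and sizes are placeholder assumptions):
aws autoscaling create-launch-configuration \
--launch-configuration-name nodejs-lc \
--image-id ami-0123456789abcdef0 \
--instance-type t2.micro \
--iam-instance-profile ec2-codedeploy-role
aws autoscaling create-auto-scaling-group \
--auto-scaling-group-name nodejs-asg \
--launch-configuration-name nodejs-lc \
--min-size 2 --max-size 4 --desired-capacity 2 \
--target-group-arns <target-group-arn> \
--vpc-zone-identifier "subnet-aaaa,subnet-bbbb"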
Launching Application Load balancer (ALB)
We need to set up a primary listener (usually port 80 is fine) that the ALB will receive traffic on. We should specify all Availability Zones (AZs) for our load balancer so it remains highly available.
We then set up the target group that this ALB will forward traffic to. Every server in a target group responds to traffic on the same designated port (in our case, 4000).
After this, we attach the same target group to our ASG, so every server in the ASG is now part of the target group and, by extension, receives traffic from our ALB.
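A rough CLI equivalent of this wiring (the ARNs, subnets, and security group are placeholders):
aws elbv2 create-load-balancer \
--name nodejs-alb \
--subnets subnet-aaaa subnet-bbbb \
--security-groups sg-0123456789abcdef0
aws elbv2 create-listener \
--load-balancer-arn <alb-arn> \
--protocol HTTP --port 80 \
--default-actions Type=forward,TargetGroupArn=<target-group-arn>
aws autoscaling attach-load-balancer-target-groups \
--auto-scaling-group-name nodejs-asg \
--target-group-arns <target-group-arn>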
Creating AppSpec.yml for Codedeploy
CodeDeploy needs an appspec.yml file as part of our code so it can follow its instructions and start the server with the new code automatically. The file lives at the root of our project.
The appspec file contains sections like the following:
version: 0.0
os: linux
files:
  - source: /
    destination: "/home/ec2-user/my-app"
hooks:
  ApplicationStop:
    - location: "scripts/stop_server.sh"
      timeout: 60
      runas: ec2-user
  AfterInstall:
    - location: "scripts/install_dependencies.sh"
      timeout: 300
      runas: ec2-user
  ApplicationStart:
    - location: "scripts/start_server.sh"
      timeout: 60
      runas: ec2-user
Each hook also gives us the option to specify a script file we want to execute. For this project, let's say we have put all our scripts in a scripts folder.
# stop_server.sh
# source is used to help us run node and npm commands without assuming root level permission
source /home/ec2-user/.bash_profile
# This stops any running pm2 servers
pm2 stop nishant-server
NOTE: The ApplicationStop hook doesn't run the VERY first time we deploy code to an EC2 instance via CodeDeploy. So we won't get errors like pm2: command not found, since we only install pm2 in later hooks.
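That said, on later deployments pm2 stop will still error out if the named process doesn't exist (for example, if it crashed earlier). One defensive option, if you'd rather not fail the whole deployment on this, is to swallow that error:
# tolerate a missing pm2 process instead of failing the ApplicationStop hook
pm2 stop nishant-server || true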
# install_dependencies.sh
source /home/ec2-user/.bash_profile
cd /home/ec2-user/my-app/
npm install pm2@latest -g
npm install
# start_server.sh
source /home/ec2-user/.bash_profile
cd /home/ec2-user/my-app/
pm2 start index.js --name nishant-server
NOTE: we are using pm2 because we need the NodeJS server to run in the background; otherwise, whichever script we designate to run it will shut down after the script timeout is over, taking our server down with it.
NOTE: Make sure NOT to add a shebang at the top of the script files, as it will create errors in the execution context inside CodeDeploy and the deployment will fail.
Setting up Codedeploy
Inside CodeDeploy, make sure to create an application and a deployment group with the desired configuration. We will be creating a deployment within the deployment group in the next step.
Remember to pick the blue/green deployment type, since it first spins up all the new instances before terminating any of the instances running the old code. It can also create a copy of the Auto Scaling Group inside which it launches all the new servers (this ensures we don't pollute our existing ASG in case the deployment fails).
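As a sketch, an equivalent deployment group via the CLI might look like this (the service role ARN is a placeholder; the names match the CircleCI config below, and the blue/green options mirror the console settings just described):
aws deploy create-deployment-group \
--application-name nishant-asg-cicd \
--deployment-group-name asg-deployment-group \
--service-role-arn <codedeploy-service-role-arn> \
--auto-scaling-groups nodejs-asg \
--deployment-style deploymentType=BLUE_GREEN,deploymentOption=WITH_TRAFFIC_CONTROL \
--load-balancer-info "targetGroupInfoList=[{name=nodejs-tg}]" \
--blue-green-deployment-configuration "terminateBlueInstancesOnDeploymentSuccess={action=TERMINATE,terminationWaitTimeInMinutes=5},deploymentReadyOption={actionOnTimeout=CONTINUE_DEPLOYMENT},greenFleetProvisioningOption={action=COPY_AUTO_SCALING_GROUP}"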
CircleCI Magic
# circleci config.yml
version: 2.1
orbs:
  aws-cli: circleci/aws-cli@1.0.0
jobs:
  nodejs_api:
    executor: aws-cli/default
    steps:
      - checkout
      # run tests here if needed or create a new job to precede this job
      - aws-cli/install
      - run: |
          aws deploy push \
            --application-name nishant-asg-cicd \
            --description "from circleCI to S3 with love" \
            --ignore-hidden-files \
            --s3-location s3://node-cicd-asg-s3/NodeJS-Server.zip \
            --source ./
      - run: |
          aws deploy create-deployment \
            --application-name nishant-asg-cicd \
            --deployment-config-name CodeDeployDefault.AllAtOnce \
            --deployment-group-name asg-deployment-group \
            --description "first time from circleCI" \
            --file-exists-behavior OVERWRITE \
            --s3-location bucket=node-cicd-asg-s3,bundleType=zip,key=NodeJS-Server.zip \
            --auto-rollback-configuration enabled=true,events="DEPLOYMENT_FAILURE"
workflows:
  asg_ec2_workflow:
    jobs:
      - nodejs_api:
          context: AWS
          # can listen to different branches here by using filters
Okay, a lot is going on here, so let's break it down.
We are using the aws-cli orb since it makes installing the CLI easy. We could instead use machine as the executor, which comes with a pre-installed copy of the CLI, and skip the orb altogether.
NOTE: When using this orb, we must also set the job's executor to aws-cli/default instead of machine or docker.
NOTE: Make sure to provide the context under the job inside workflows; this will pick up the context environment variables you have specified for AWS_SECRET_ACCESS_KEY, AWS_ACCESS_KEY_ID and AWS_DEFAULT_REGION. They will show up masked inside the CircleCI terminal for added security.
aws-cli/install installs the CLI.
run: | with the pipe at the end allows us to write a multi-line command. The rest of the AWS commands come directly from the AWS docs.
The first run command takes the current directory, zips it up, and sends it to the S3 bucket we nominate. Remember to add the --ignore-hidden-files flag, which prevents the .git folder, .DS_Store, and other hidden files or folders from being included in the bundle.
The second run command pushes a deployment through CodeDeploy to our ASG.
--deployment-config-name comes directly from CodeDeploy; potential values include CodeDeployDefault.AllAtOnce, CodeDeployDefault.OneAtATime, etc.
--deployment-group-name comes from setting up CodeDeploy in the previous step.
--s3-location consists of a few things: bucket is the name of the bucket to target, bundleType is zip in our case, and key is the name of our code bundle.
--auto-rollback-configuration ensures that if the deployment fails, the last known good version of the code is deployed to the new instances.
Handling Different Branches
We will often need to start our server differently for staging and production. However, the appspec.yml file will run the same start_server.sh file in both cases.
To handle this, we create two separate deployment groups, one for production and one for staging.
We can then use the environment variables built into CodeDeploy (reference link below for the full list) to check the deployment group name and decide which command to run.
# start_server.sh
source /home/ec2-user/.bash_profile
cd /home/ec2-user/my-app/
# assuming the name of our staging deployment group is --> staging-group
if [ "${DEPLOYMENT_GROUP_NAME}" = "staging-group" ]
then
  NODE_ENV=staging pm2 start index.js --name nishant-server
else
  NODE_ENV=prod pm2 start index.js --name nishant-server
fi
# notice the use of NODE_ENV to pass node-related environment variables at run time with pm2
Potential Errors
(1) The overall deployment failed because too many individual instances failed deployment, too few healthy instances are available for deployment, or some instances in your deployment group are experiencing problems
This means something has gone wrong with your roles and permissions or with the CodeDeploy agent, or you simply need a bigger instance to run your scripts. You will also find that all hooks inside CodeDeploy are skipped due to this error.
(2) npm command not found
This means you don't have permission to run npm; make sure you are using the source command to load your bash_profile, or run as the root user.
(3) Deployment stuck on AllowTraffic event
Check whether your health checks are pinging a valid endpoint in your application and whether your load balancer is in a healthy state.
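A quick way to verify from the instance itself (assuming the health check path is / on port 4000):
# run on the instance; a 200 response here means the target group check should pass
curl -i http://localhost:4000/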
(4) Extra EC2 instances after a blue/green deployment fails (which keep coming back even if you terminate them)
If you use blue/green deployments in CodeDeploy together with the Copy ASG option and the deployment fails, you will see a copy of your ASG holding the same number of instances as your actual ASG. This is because CodeDeploy creates a copy of your ASG to begin the deployment but doesn't terminate that copy if the deployment fails.
This can be fixed either by manually deleting the extra ASG or by running a Lambda that listens to an SNS topic: every time a deployment fails, we write a notification to SNS, which triggers the Lambda, which cleans up the extra ASG.
Deployment Fails --> SNS --> Lambda --> Terminates ASG
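The SNS side of this can be wired up with a CodeDeploy trigger on the deployment group; a sketch (the topic ARN is a placeholder):
aws deploy update-deployment-group \
--application-name nishant-asg-cicd \
--current-deployment-group-name asg-deployment-group \
--trigger-configurations "triggerName=deployment-failed,triggerTargetArn=<sns-topic-arn>,triggerEvents=DeploymentFailure"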
// Sample Lambda Function
const AWS = require("aws-sdk");
const util = require("util");
const autoscaling = new AWS.AutoScaling({ apiVersion: "2011-01-01" });
// need to convert this to a promise since callbacks don't work for this
autoscaling.deleteAutoScalingGroup = util.promisify(autoscaling.deleteAutoScalingGroup.bind(autoscaling));
exports.handler = async (event) => {
  console.log("EVENT event.Records[0].Sns --->", event.Records[0].Sns);
  const sns_msg = event.Records[0].Sns.Message;
  if (!sns_msg) return;
  const parsed_sns_msg = JSON.parse(sns_msg);
  if (parsed_sns_msg.status !== "FAILED") return;
  const { deploymentGroupName, deploymentId } = parsed_sns_msg;
  const params = {
    // CodeDeploy names the copied ASG CodeDeploy_<deploymentGroupName>_<deploymentId>
    AutoScalingGroupName: `CodeDeploy_${deploymentGroupName}_${deploymentId}`,
    ForceDelete: true, // needed to delete the instances along with the ASG
  };
  try {
    await autoscaling.deleteAutoScalingGroup(params);
    console.log("ASG DELETED SUCCESSFULLY");
  } catch (err) {
    console.log("ERROR in deleting ASG -->", err);
  }
};
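Note that the Lambda's execution role needs the autoscaling:DeleteAutoScalingGroup permission, and the function must be subscribed to the SNS topic. A sketch of that subscription (the function name and ARNs are placeholders):
# allow SNS to invoke the function, then subscribe it to the topic
aws lambda add-permission \
--function-name asg-cleanup \
--statement-id sns-invoke \
--action lambda:InvokeFunction \
--principal sns.amazonaws.com \
--source-arn <sns-topic-arn>
aws sns subscribe \
--topic-arn <sns-topic-arn> \
--protocol lambda \
--notification-endpoint <lambda-function-arn>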
AppSpec permissions section reference
CodeDeploy Resource Kit reference
Setting up NodeJS on a single EC2 instance
Target Groups register/de-register with Elastic Load Balancer
Rollbacks during CodeDeploy deployments