Continuous Deployment with AWS CodePipeline and Chef Zero

In this article I will show how you can use AWS CodePipeline and Chef Zero to implement a blue-green continuous deployment model that automatically releases changes to your EC2-hosted web application.

AWS CodePipeline is a fully managed continuous delivery service that helps you automate your release pipelines for fast and reliable application and infrastructure updates. CodePipeline automates the build, test, and deploy phases of your release process every time there is a code change, based on the release model you define.

Chef Zero

We will use Chef to define the state of each EC2 instance that hosts our web application. I have created a very simple cookbook that installs Apache Tomcat and delivers some custom files to be served by the web application.
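The cookbook itself is not reproduced here, but a recipe that does this might look roughly like the following sketch. The package name, service name, and webapps path are assumptions and will vary by platform:

```ruby
# Illustrative sketch only: install Tomcat, keep it running,
# and deliver a custom index.jsp from the cookbook's files/ directory.
package 'tomcat'

service 'tomcat' do
  action [:enable, :start]
end

cookbook_file '/var/lib/tomcat/webapps/ROOT/index.jsp' do
  source 'index.jsp'
  mode '0644'
  notifies :restart, 'service[tomcat]'
end
```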

AWS CodeBuild

If you are familiar with Chef but not with AWS CodePipeline you might be curious about the buildspec.yml file in the cookbook repository.

This file is used in the build phase of our pipeline to instruct AWS CodeBuild to install Chef Workstation and package our cookbooks using Berkshelf.
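A buildspec for this kind of job could be shaped along these lines; the exact install and packaging commands are assumptions for illustration, not the contents of the actual file:

```yaml
version: 0.2

phases:
  install:
    commands:
      # Install Chef Workstation, which bundles Berkshelf
      - curl -fsSL https://omnitruck.chef.io/install.sh | bash -s -- -P chef-workstation
  build:
    commands:
      # Resolve cookbook dependencies and package everything for deployment
      - berks package cookbooks.tar.gz

artifacts:
  files:
    - cookbooks.tar.gz
    - appspec.yml
    - scripts/**/*
```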

AWS CodeDeploy

Another file of note is the appspec.yml file. This is used in the deploy phase of the pipeline to instruct AWS CodeDeploy how to install or update our web application code.

In this example the appspec.yml file instructs CodeDeploy to execute the scripts under scripts/, and it is in these scripts that we run Chef Zero.
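For EC2 deployments an appspec driving Chef Zero could look something like the sketch below; the hook script name is an assumption, and the script itself would typically invoke chef-client in local (Chef Zero) mode, e.g. `chef-client -z`:

```yaml
version: 0.0
os: linux
files:
  - source: /
    destination: /opt/deploy
hooks:
  AfterInstall:
    - location: scripts/run-chef.sh   # assumed name; runs "chef-client -z ..."
      timeout: 300
      runas: root
```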

Terraform Code

In order to create a pipeline in AWS CodePipeline we first need to create some prerequisite AWS resources: an AWS CodeCommit repository to store our application code, as well as the AWS CodeBuild and AWS CodeDeploy resources mentioned earlier. Additionally, we need a running web application to actually deploy our code to. I have created some Terraform code that will provision these resources for us.

$ mkdir ~/codepipeline-demo
$ cd ~/codepipeline-demo
$ git clone terraform
$ cd terraform
$ vi terraform.tfvars
$ terraform init
Terraform has been successfully initialized!

$ terraform apply
data.template_file.user-data: Refreshing state...

Plan: 28 to add, 0 to change, 0 to destroy.

Do you want to perform these actions?
  Terraform will perform the actions described above.
  Only 'yes' will be accepted to approve.

  Enter a value: yes

aws_security_group.asg: Creating...
aws_iam_role.codebuild: Creating...
Apply complete! Resources: 28 added, 0 changed, 0 destroyed.

AWS CodeCommit

An additional prerequisite step is to publish the cookbook code to the CodeCommit repository that was created by Terraform. First, ensure your working environment is configured to work with CodeCommit by referring to this document.
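For HTTPS access with IAM credentials, that setup amounts to pointing Git at the AWS CLI's CodeCommit credential helper; the resulting ~/.gitconfig entries look like this:

```ini
[credential]
    helper = !aws codecommit credential-helper $@
    UseHttpPath = true
```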

Now you can clone my cookbook code from GitLab.

$ cd ~/codepipeline-demo
$ git clone colmmg-chef

Next, get the HTTPS clone URL of your CodeCommit repository from the AWS Console.

You can now push my cookbook code to your repository by running the following.

$ cd ~/codepipeline-demo
$ git clone <<YOUR_HTTPS_CLONE_URL>> chef
$ cd chef
$ cp -r ../colmmg-chef/* .
$ git add --all
$ git commit -m "Initial commit"
$ git push origin master

AWS CodePipeline Setup

We are now ready to create our pipeline! Open the CodePipeline service in the AWS Console and click to create a new pipeline.

In the first step, select the “codepipeline-chefzero-webapp” IAM role and “codepipeline-artifacts” S3 bucket.

In the source step, select AWS CodeCommit.

Choose AWS CodeBuild for the build stage.

Lastly, select AWS CodeDeploy for the deploy stage.

Review your choices in the next page and then create the pipeline. Your pipeline will automatically start.

AWS CodePipeline

Each stage of the pipeline will be executed, starting with the retrieval of the cookbook code from the CodeCommit repository. The next stage is the build stage, where the cookbook code is packaged. From CodePipeline you can click into each stage to view more details; doing this for the build stage shows the build logs.

Perhaps the most interesting stage is the deploy stage. When it starts you can click into it to see a view like this.

From this page we can see exactly what is happening with our deployment including which instances traffic is being directed to.

The page will update as the deployment progresses.

When the deployment is complete you can test the web application by retrieving the DNS record of the application load balancer that was created by Terraform.
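One way to retrieve that DNS name is via a Terraform output, if the Terraform code defines one (the output name below is an assumption):

```shell
$ cd ~/codepipeline-demo/terraform
$ terraform output alb_dns_name   # assumes such an output is defined
$ curl http://<ALB_DNS_NAME>/
```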


Continuous Deployment

Now let’s give our pipeline a proper test! Suppose the product team have asked that the background image of the web application be updated and deployed. With one commit we can fulfill this request.

$ cd ~/codepipeline-demo/chef
$ sed -i "s,div id=\"blue\",div id=\"green\",g" files/default/index.jsp
$ git commit -a -m "Updating background image"
$ git push origin master

That’s it! AWS CodePipeline will now take over and automatically deploy this change. If you make requests to the web application during the deployment you will see the change being rolled out, with both the old and new images being returned until, finally, only the new image is displayed.


To remove the resources that were created in this demo, first manually delete any Auto Scaling groups that CodeDeploy provisioned during the blue-green deployments. They will have names starting with “CodeDeploy_codepipeline-chefzero-webapp”.
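If you prefer the CLI, something along these lines can find and delete them; note that --force-delete also terminates the instances in the group:

```shell
$ aws autoscaling describe-auto-scaling-groups \
    --query "AutoScalingGroups[?starts_with(AutoScalingGroupName, 'CodeDeploy_codepipeline-chefzero-webapp')].AutoScalingGroupName" \
    --output text
$ aws autoscaling delete-auto-scaling-group \
    --auto-scaling-group-name <ASG_NAME> --force-delete
```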

Next you can destroy the Terraform infrastructure.

$ cd ~/codepipeline-demo/terraform
$ terraform destroy

Now you can manually delete the pipeline from the CodePipeline service.

As part of the creation of the pipeline, a CloudWatch Events rule was created. This will have been deleted along with the pipeline, but the associated IAM role and policy will not have been removed. The role will have a name like “cwe-role-eu-west-1-codepipeline-chefzero-webapp”. You should remove this role and its attached policy.


This is a very simple example showing some of the capabilities of CodePipeline. Together with CodeBuild and CodeDeploy, it offers many other features, including:

  • using S3 as the source instead of CodeCommit;
  • deploying to a percentage of instances at a time instead of all at once;
  • deploying to Amazon Elastic Container Service (ECS) and AWS Lambda;
  • adding a manual approval stage to the pipeline so a human must sign off before a deployment can occur;
  • integrating with Jenkins.

I hope this article and the associated code helps you get started with these very powerful services!
