Automated UI Testing With Amazon CloudWatch Synthetics


In a previous article I investigated the use of Amazon SageMaker to perform automated UI testing for a web application. The intent was to produce an automated test suite that could detect obvious visual errors. In this article I will demonstrate a more robust technique using Amazon CloudWatch Synthetics, specifically its "visual monitoring" feature.

CloudWatch Synthetics now supports visual monitoring, allowing you to catch visual defects in your web application's end-user experience, including defects that would be difficult to detect with a scripted assertion.
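To build some intuition for what "visual variance" means before we configure it later in the canary wizard, here is a toy sketch of the idea: compare two screenshots position by position and fail the run if the percentage of differing pixels exceeds a threshold. This is only a simplified model; CloudWatch Synthetics performs the real comparison for you.

```python
def visual_variance(baseline, candidate):
    """Percent of positions that differ between two equal-length
    sequences of pixel values. A toy model of the screenshot
    comparison that CloudWatch Synthetics performs for you."""
    if len(baseline) != len(candidate):
        raise ValueError("images must be the same size")
    changed = sum(1 for b, c in zip(baseline, candidate) if b != c)
    return 100.0 * changed / len(baseline)

baseline = [0] * 20
candidate = [1] * 2 + [0] * 18  # two of twenty pixels changed
print(visual_variance(baseline, candidate))  # 10.0, which would pass a 15% threshold
```

With a threshold of 15%, this pair of "images" passes at 10% variance; a third changed pixel would push it to 15% and fail the run.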

Infrastructure

Let’s first deploy the infrastructure required for the demonstration. We can provision the required resources using Terraform. The Terraform code I have created can be used as follows:

git clone
cd cloudwatch-synthetics-demo/
terraform init
terraform apply

Be sure to make a note of the alb_dns_name output — we will need this later.

Web Application

With our infrastructure provisioned we now need to deploy our sample web application. The code I have authored represents a mock search engine application. We can deploy it with:

git clone codecommit::us-east-1://cloudwatch-synthetics-demo codecommit
git clone
cp -r cloudwatch-synthetics-demo/* codecommit/
cd codecommit/
git add .
git commit -m "v1"
git push origin master

Initial Deployment

From the AWS Management Console you can follow the deployment of our mock web application in the AWS CodePipeline and AWS CodeDeploy services.

CodeDeploy Start
CodeDeploy Complete

We can now view the web application by opening the alb_dns_name output from the terraform step.

Mock Search Engine Web Application

Creating a Canary

You can use Amazon CloudWatch Synthetics to create canaries: configurable scripts that run on a schedule to monitor your endpoints and APIs. Canaries offer programmatic access to a headless Google Chrome browser via Puppeteer (Node.js) or Selenium WebDriver (Python).
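For a sense of what a canary script looks like, here is a minimal sketch for the syn-python-selenium runtime. The hostname is a placeholder for your own alb_dns_name output, the aws_synthetics module is only available inside the canary runtime, and the visual comparison itself is managed from the console baseline rather than in the script.

```python
def endpoint_url(host: str, port: int = 8080) -> str:
    """Build the URL the canary visits (port 8080 is our test listener)."""
    return f"http://{host}:{port}/"

def handler(event, context):
    # aws_synthetics is provided by the syn-python-selenium runtime,
    # so it is imported here rather than at module level.
    from aws_synthetics.selenium import synthetics_webdriver as syn_webdriver

    browser = syn_webdriver.Chrome()
    # Placeholder host: substitute the alb_dns_name output from the terraform step.
    browser.get(endpoint_url("my-alb-dns-name.us-east-1.elb.amazonaws.com"))
    browser.implicitly_wait(5)
    # This screenshot is what visual monitoring compares against the baseline.
    syn_webdriver.take_screenshot("loaded", "home")
    return "Succeeded"
```

In practice the "Visual monitoring" blueprint used below generates an equivalent script for you, so you rarely need to write this by hand.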

From the CloudWatch Synthetics service in the AWS console, we will create a canary to monitor our web application.

Create the canary using the “Visual monitoring” blueprint.

Create Canary Wizard

Provide myapp as the name of the canary, enter the alb_dns_name output from the terraform step as the application endpoint with port 8080 and select “15%” for the visual variance threshold.

Create Canary Endpoint

Let the wizard create a new S3 bucket to store the canary results and select the cloudwatch-synthetics-demo-canary IAM role.

Canary Data Storage

Deploy the canary into the VPC created by terraform — within the two private subnets and select the cloudwatch-synthetics-demo-canary security group.

Canary VPC

After clicking “Create”, allow some time for the canary resources to be created. You should then see your canary starting and completing.

Canary First Run
Canary First Pass

Excluded Areas

In our mock web application, the first search result contains an image with some associated text. Suppose that in a real-world scenario this image changes from search to search. Let's demonstrate this by changing the image in our mock application. Using the ECS Exec feature, we can remote into our Fargate task to manually edit the HTML file. Retrieve the task ID from the ECS service in the AWS console, then remote in:

aws ecs execute-command \
    --region us-east-1 \
    --cluster cloudwatch-synthetics-demo \
    --task <task-id-retrieved-from-aws-console> \
    --container main \
    --interactive \
    --command "/bin/bash"

Once connected, switch the image with:

sed -i "s/canary.jpg/canary2.jpg/g" /var/www/html/index.html

If we now run our canary we see an error.

Failed Canary

We do not want our UI testing to alert us when this part of the webpage varies, because we know it always will. We can configure our canary to ignore changes to this zone. If you edit your canary from the AWS console and scroll to the "Visual Monitoring" section you will see an "Edit Baseline" button.

Canary Edit – Visual Monitoring

Clicking on this button, we can draw areas in our baseline image that should be excluded from our testing. Let’s do this for the image in our first search result.

Baseline Image Edit

Running the canary again we now get a pass.

Canary Pass With Excluded Areas

Deployment Pipeline

We are now ready to test this canary in our deployment pipeline. Our ideal deployment pipeline will behave as follows:

  1. Code changes to our web application are deployed to an out-of-service (staging) area.
  2. Canary tests are performed against this staging area.
  3. Any failures in the canary tests will result in the cancellation of the deployment.
  4. When no test failures occur, our staging area should be promoted to be in-service and serving our end users.

To meet these requirements we can combine the AfterAllowTestTraffic CodeDeploy hook with our visual monitoring canary. The Lambda function deployed as part of our Terraform code triggers the start of our canary during the AfterAllowTestTraffic phase of our CodeDeploy deployment. If you remember from earlier, we set our application endpoint for the canary to use port 8080. This is the test listener port, which can only be accessed internally and not by our end users; it is also the listener where CodeDeploy routes new versions of our application during a deployment.
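As a minimal sketch of what such a hook function might look like (the canary name myapp comes from earlier in this article; the actual code in the repository may differ), the Lambda starts the canary, waits for the run to finish, and reports the result back to CodeDeploy:

```python
import os

def hook_status(canary_state: str) -> str:
    """Map a canary run state (PASSED/FAILED/RUNNING) to the
    CodeDeploy lifecycle hook status."""
    return "Succeeded" if canary_state == "PASSED" else "Failed"

def handler(event, context):
    # boto3 ships with the Lambda runtime; imported lazily here so the
    # helper above stays testable without AWS credentials.
    import time
    import boto3

    status = "Succeeded"
    if os.environ.get("RUN_CANARY", "true") == "true":
        synthetics = boto3.client("synthetics")
        synthetics.start_canary(Name="myapp")
        # start_canary returns no execution ID, so crudely wait and then
        # inspect the most recent run (a limitation discussed below).
        time.sleep(120)
        runs = synthetics.get_canary_runs(Name="myapp", MaxResults=1)
        status = hook_status(runs["CanaryRuns"][0]["Status"]["State"])

    boto3.client("codedeploy").put_lifecycle_event_hook_execution_status(
        deploymentId=event["DeploymentId"],
        lifecycleEventHookExecutionId=event["LifecycleEventHookExecutionId"],
        status=status,
    )
```

Reporting "Failed" for the hook is what causes CodeDeploy to cancel the deployment before the staging task set is promoted.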

Let’s test our solution by introducing an obvious error in our web application’s UI. From our CodeCommit checkout, run the following commands to change the font-size of our search result’s title text from 18px to 36px.

sed -i "s/18px/36px/g" index.html
git commit -a -m "v2"
git push origin master

Like our previous deployment, you can follow along in the AWS console in the CodePipeline and CodeDeploy services. The CodeDeploy service will eventually result in the following:

CodeDeploy Fail

As expected, the AfterAllowTestTraffic phase has thrown an error. We can get further details by checking our canary.

Canary Font Failure

The pipeline has done exactly what we expected. UI errors have been detected and the deployment has been cancelled.

Intentional UI Changes

Should you wish to release intentional UI changes which will create significant variance from your baseline you can temporarily disable the UI testing canary for that release. To do this, simply update the RUN_CANARY environment variable for the deploy hook lambda function before releasing.
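If you prefer to script that toggle rather than use the console, a sketch using boto3 might look like the following. The function name is a placeholder for your deploy-hook Lambda, and note that update_function_configuration replaces the function's entire environment, so include any other variables your function needs.

```python
def run_canary_config(function_name: str, enabled: bool) -> dict:
    """Arguments for lambda.update_function_configuration that set the
    RUN_CANARY flag. Caution: this payload replaces ALL environment
    variables on the function, not just RUN_CANARY."""
    return {
        "FunctionName": function_name,
        "Environment": {
            "Variables": {"RUN_CANARY": "true" if enabled else "false"}
        },
    }

# Applying it requires AWS credentials, e.g.:
#   import boto3
#   boto3.client("lambda").update_function_configuration(
#       **run_canary_config("cloudwatch-synthetics-demo-deploy-hook", False))
```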

Lambda Environment Variables

After the release you can then edit the canary and set the next run as the new baseline.

Set New Baseline

Final Thoughts

It is important to note that the Synthetics API does not return an execution ID when a canary run is started, so it is not possible to know with certainty whether the run we triggered passed or failed. In my Python code I added some sub-optimal sleeps to work around this, which you may or may not be able to rely on in a production setting. Hopefully Amazon will address this in time.
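One way to make that wait slightly less fragile than a fixed sleep is to record the existing run IDs before triggering the canary, then poll until a run that was not in that set finishes. A sketch, written against injected callables so it is not tied to any particular client:

```python
import time

def wait_for_new_run(get_runs, start_canary, timeout=300, interval=10):
    """Poll for a canary run that appears after we trigger one.

    get_runs() must return runs newest-first, each shaped like
    {"Id": str, "Status": {"State": "RUNNING" | "PASSED" | "FAILED"}}
    (the shape returned by synthetics.get_canary_runs).
    start_canary() triggers the run, e.g. synthetics.start_canary.
    Returns the finished run, or raises TimeoutError.
    """
    before = {run["Id"] for run in get_runs()}
    start_canary()
    deadline = time.time() + timeout
    while time.time() < deadline:
        for run in get_runs():
            if run["Id"] not in before and run["Status"]["State"] != "RUNNING":
                return run
        time.sleep(interval)
    raise TimeoutError("canary run did not finish in time")
```

This still cannot prove the new run was the one we triggered (two triggers racing would be indistinguishable), but it avoids guessing a single sleep duration.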

Canaries are not limited to visual monitoring. It is worth exploring some of the other features which you could incorporate into a deployment pipeline or even a health check for your production endpoints.

To clean up the resources created in this article, manually delete the Lambda layers and functions created by the canary, empty and delete the cw-syn-* S3 bucket, and then execute terraform destroy from your checkout of the Terraform code. Be aware that this will take some time to complete, so please be patient.

I hope you have found this article useful.
