The benefits of Docker containers are well understood; however, the challenge of managing the host operating system remains. AWS Fargate solves this problem: it removes the need to provision and manage servers, making it easy for you to focus on building your applications.
By outsourcing management of the host OS to AWS you do lose some control: you can’t log in to the EC2 instance and run `docker exec`! In a well-tuned application this should not be an issue; your logs and metrics will be pushed to Amazon CloudWatch or another service, and there should be no need to log in to a container. But for the initial period of adjusting to this new model, or for the times when you can debug faster with container access, you may wish to have a way to log in to your Fargate containers. In this article I will show how you can set up such a mechanism.
To follow this guide you will need Terraform v0.12.23. Later versions may work, but the guide was tested with 0.12.23.
You will also need extensive permissions in your AWS account, as we will be creating resources across many services, including IAM.
To demonstrate access to Fargate containers we will run a Docker container in Amazon Elastic Container Service. We will build the image for this container from its source in AWS CodeCommit via AWS CodeBuild and store the image in Amazon Elastic Container Registry. The image will be Amazon Linux 2 based with an Apache server and SSH access.
SSH ingress to the Fargate containers will be via an EC2 instance designated as our ECS jump host.
Let’s first set up the infrastructure. We begin by setting some environment variables:
$ export AWS_ACCOUNT_ID=`aws sts get-caller-identity | jq .Account | xargs`
$ export AWS_REGION=us-east-1
Next, create an S3 bucket to store the Terraform remote state:

$ aws s3 mb s3://tf-state-$AWS_REGION-$AWS_ACCOUNT_ID --region $AWS_REGION
We now deploy some foundation resources that we will need to use AWS CodePipeline. CodePipeline will glue our CodeCommit repository and CodeBuild project together to form an end-to-end pipeline.
$ git clone https://gitlab.com/colmmg/terraform/fargate-ssh/infra/codepipeline.git
$ cd codepipeline
$ sed -i "s,123456789012,$AWS_ACCOUNT_ID,g" backend.tf
$ terraform init
$ terraform apply
Now we deploy the baseline resources required for our use of ECS. This stack will also generate the EC2 jump host that we will use later. A security group rule will be created to allow SSH ingress to the jump host from your public IP.
$ git clone https://gitlab.com/colmmg/terraform/fargate-ssh/infra/ecs.git
$ cd ecs
$ sed -i "s,123456789012,$AWS_ACCOUNT_ID,g" backend.tf
$ echo "my-ip = \"`curl ifconfig.me`\"" >> terraform.tfvars
$ ssh-keygen -t rsa -N '' -C "ECS Key" -f ~/.ssh/ecs
$ echo "ecs-public-key = \"`cat ~/.ssh/ecs.pub`\"" >> terraform.tfvars
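For reference, the SSH ingress rule this stack creates looks roughly like the following (a sketch only; the resource and security group names here are illustrative, not necessarily those used in the repo):

```hcl
# Illustrative only: allow SSH to the jump host from your public IP.
resource "aws_security_group_rule" "jumpbox_ssh" {
  type              = "ingress"
  from_port         = 22
  to_port           = 22
  protocol          = "tcp"
  cidr_blocks       = ["${var.my-ip}/32"]
  security_group_id = aws_security_group.jumpbox.id
}
```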
Before continuing you should copy the contents of the private key (~/.ssh/ecs) into the user_data.tpl file at the line “INSERT PRIVATE KEY HERE”.
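If you prefer not to paste the key by hand, a substitution along these lines will do it. This is a sketch, and it assumes the placeholder text appears exactly once, on its own line, in user_data.tpl:

```shell
# Replace the "INSERT PRIVATE KEY HERE" placeholder line in a template
# file with the contents of a key file (sketch; assumes the placeholder
# appears exactly once, on its own line).
insert_key() {
  tpl="$1"
  key="$2"
  awk -v keyfile="$key" '
    /INSERT PRIVATE KEY HERE/ {
      while ((getline line < keyfile) > 0) print line
      next
    }
    { print }
  ' "$tpl" > "$tpl.tmp" && mv "$tpl.tmp" "$tpl"
}

# Usage (run from the cloned ecs directory):
#   insert_key user_data.tpl ~/.ssh/ecs
```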
Now, you can deploy this Terraform stack:
$ terraform init
$ terraform apply
We now need to deploy a CodePipeline project to build our Docker image.
$ git clone https://gitlab.com/colmmg/terraform/fargate-ssh/ecr-apache.git
$ cd ecr-apache
$ sed -i "s,123456789012,$AWS_ACCOUNT_ID,g" backend.tf
$ terraform init
$ terraform apply
When complete, a build will be attempted, but it will fail because our CodeCommit repo is empty.
The code that you need to push is at colmmg/docker/fargate-ssh, but before we push it to CodeCommit let me explain the configuration. The Dockerfile is straightforward: we install httpd and openssh and expose ports 80 and 22. The ENTRYPOINT is modified to run the entrypoint.sh script, which starts SSH in the background, writes the value of the `SSH_PUBLIC_KEY` environment variable to `/root/.ssh/authorized_keys`, and finally starts httpd in the foreground. The `SSH_PUBLIC_KEY` environment variable is set in the ECS task definition, which we will set up later.
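The logic of that script can be sketched as follows (a sketch only: the `install_ssh_key` helper is my own naming and layout, not necessarily the repo’s exact code):

```shell
#!/bin/sh
# Sketch of the container entrypoint (helper name is illustrative).

# Write the public key passed via the SSH_PUBLIC_KEY environment
# variable into authorized_keys, so the key pair created earlier
# grants access to the container.
install_ssh_key() {
  key_dir="${1:-/root/.ssh}"
  mkdir -p "$key_dir" && chmod 700 "$key_dir"
  printf '%s\n' "$SSH_PUBLIC_KEY" > "$key_dir/authorized_keys"
  chmod 600 "$key_dir/authorized_keys"
}

# Inside the container the script would then run:
#   install_ssh_key
#   /usr/sbin/sshd                     # SSH daemon in the background
#   exec /usr/sbin/httpd -DFOREGROUND  # Apache in the foreground
```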
Let’s clone our CodeCommit repo. You may obtain the clone details from the CodeCommit console; note that the `codecommit::` URL style used below requires the git-remote-codecommit helper.
$ git clone codecommit::us-east-1://apache
Grab the code from colmmg/docker/fargate-ssh and copy it into your CodeCommit repo:
$ git clone https://gitlab.com/colmmg/docker/fargate-ssh.git /tmp/docker-fargate-ssh
$ cd apache
$ cp /tmp/docker-fargate-ssh/* .
$ git add .
$ git commit -m "Initial commit"
$ git push origin master
If you return to the CodePipeline console you will see that our push has triggered a build.
Click on the build details to follow along as your image is built by CodeBuild.
We can now set up the ECS cluster and service that will run our Docker container.
$ git clone https://gitlab.com/colmmg/terraform/fargate-ssh/ecs-apache.git
$ cd ecs-apache
$ sed -i "s,123456789012,$AWS_ACCOUNT_ID,g" backend.tf
$ terraform init
$ terraform apply
When the apply completes you can navigate to the ECS console; after a short period a task will be launched, which is your running container:
EC2 Jump Host
With everything in place we can now test the solution. From the EC2 console, find the instance launched earlier, named `ecs-jumpbox`, and SSH into it:
$ ssh -i ~/.ssh/ecs ec2-user@184.108.40.206
Now try to SSH into your Docker container:
$ ssh root@apache.local
Last login: Sun Apr 5 16:16:38 2020 from ip-173-51-19-127.ec2.internal
-bash-4.2#
Success! If you are wondering how `apache.local` resolves to the container’s IP, it is because we configured ECS Service Discovery as part of our deployments.
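For the curious, the Service Discovery wiring in Terraform looks roughly like this (a sketch only; the resource names and namespace are illustrative, chosen to match the `apache.local` hostname, and may differ from the repo):

```hcl
# Illustrative sketch: a private DNS namespace "local" with an "apache"
# service record, registered by the ECS service at task launch.
resource "aws_service_discovery_private_dns_namespace" "local" {
  name = "local"
  vpc  = aws_vpc.main.id
}

resource "aws_service_discovery_service" "apache" {
  name = "apache"

  dns_config {
    namespace_id = aws_service_discovery_private_dns_namespace.local.id
    dns_records {
      ttl  = 10
      type = "A"
    }
  }
}

resource "aws_ecs_service" "apache" {
  # ... cluster, task definition, network configuration ...
  service_registries {
    registry_arn = aws_service_discovery_service.apache.arn
  }
}
```

Each Fargate task's private IP is registered as an A record, so `apache.local` resolves from within the VPC.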
That’s it! I hope you find this guide useful!