Serverless Web Applications In AWS

In this article I will demonstrate how to start developing serverless web applications in Amazon Web Services (AWS). A serverless architecture allows developers to focus on their code — the complexity of building and maintaining the infrastructure necessary to run the code is removed from their view.


Starfleet has asked you to create a web application that will allow fleet Captains to view their log entries. There are no servers in the future so you will have to do it with serverless!

We can achieve this with the following design:

Our front-end code will be stored in S3, served via CloudFront. A Route 53 record will be created for our domain. The “back-end” code will be delivered via API Gateway with data storage in Secrets Manager and DynamoDB.


Before we can deploy our design we need to set up a Route 53 zone and a wildcard AWS Certificate Manager (ACM) certificate. Alternatively, you can reuse existing resources. Ensure your certificate is a wildcard (i.e. it covers *.yourdomain).

Serverless Framework

We will use the Serverless Framework CLI to build and deploy our API code. We can install it via:

$ npm install -g serverless

Next, clone the code I have developed at and deploy it with serverless, specifying your AWS Account ID as an argument:

$ mkdir serverless
$ cd serverless
$ git clone .
$ sls deploy --accountid 123456789012


The remainder of our code will be in Terraform. You can download the binary for Terraform from and unpack it anywhere on your PATH.

Now you can deploy the Terraform code I have developed at with:

$ mkdir terraform
$ cd terraform
$ git clone .
$ terraform init
$ terraform apply
  The domain of your website.

  Enter a value:
Apply complete!


We are now ready to try our web application. Let’s see what happens when we request

Success! But… how did that work? First, we requested “/” and yet we ended up at “/login/”. And if we examine the S3 bucket that stores our front-end code, we only see:

If our request was for “/login/” how did the “login.html” document get returned? This is where the magic of Lambda@Edge comes in. Have a look at the code at and specifically this block:

def handler(event, context):
  # Lambda@Edge origin-request handler: rewrite "pretty" URIs to S3 keys
  request = event['Records'][0]['cf']['request']
  requestedUri = request['uri']
  if requestedUri == "/":
    request['uri'] = "/home.html"
  elif requestedUri == "/login/":
    request['uri'] = "/login.html"
  # The (possibly rewritten) request is returned for CloudFront to forward to S3
  return request

In the above code we rewrite requests for “/login/” to “/login.html”. So CloudFront (which is where these Lambda@Edge functions run) requests “login.html” from S3 and returns that to the client. If we examine our CloudWatch logs for this Lambda we can see this happening:

But… if you remember, our original request was for “/”, not “/login/”, so how did our request become “/login/”? Again, it's the Lambda@Edge function at work:

  if requestedUri == "/":
    parsedCookies = parseCookies(headers)
    if parsedCookies and 'session-id' in parsedCookies:
      sessionid = parsedCookies['session-id']
      if validSessionId(sessionid):
        # Valid session: let the request through to S3
        return request
    # No valid session: redirect the client to the login page
    redirectUrl = "https://" + headers['host'][0]['value'] + "/login/"
    response = {
      'status': '302',
      'statusDescription': 'Found',
      'headers': {
        'location': [{
          'key': 'Location',
          'value': redirectUrl
        }]
      }
    }
    return response

In this code block, the Lambda is checking the request for the presence of a session id and if none is found it responds with a redirect to the “/login/” page.
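The `parseCookies` helper used above is not shown in the article. A minimal sketch of what it might look like (the real implementation may differ) follows — it walks CloudFront's header format, where each header is a list of `{'key', 'value'}` dicts:

```python
def parseCookies(headers):
    """Parse the CloudFront 'cookie' header into a dict of name -> value."""
    parsedCookies = {}
    for header in headers.get('cookie', []):
        # A single Cookie header can carry several cookies separated by ';'
        for cookie in header['value'].split(';'):
            if '=' in cookie:
                name, value = cookie.strip().split('=', 1)
                parsedCookies[name] = value
    return parsedCookies
```

With a request carrying `Cookie: session-id=abc123; theme=dark`, this returns `{'session-id': 'abc123', 'theme': 'dark'}`.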

Session Management

Let’s log in to the application. In we have deployed two passwords to Secrets Manager. Try logging in with either:

Username = picard
Password = makeitso

Username = kirk
Password = KHAAAN!!!

I chose Picard 🙂 and the web application has successfully returned my logs. Now let’s dig into what happened here. First, if we look at the serverless framework code at we can see that after the password is checked against what is stored in Secrets Manager, an item is put into the “my-serverless-website-sessions” DynamoDB table:

def setSessionId():
  global SESSION_ID
  SESSION_ID = secrets.token_urlsafe()
  try:
    dynamodb = boto3.client('dynamodb', AWS_REGION)
    # Compute the current time as whole seconds since the Unix epoch
    epoch = datetime.datetime.utcfromtimestamp(0)
    now = datetime.datetime.utcnow()
    diff = (now - epoch).total_seconds()
    now_seconds = int(diff)
    # Sessions expire 60 seconds after creation
    ttl = now_seconds + 60
    dynamodb.put_item(TableName='my-serverless-website-sessions', Item={'userName':{'S':USERNAME},'session-id':{'S':SESSION_ID},'creationTime':{'N':str(now_seconds)},'ttl':{'N':str(ttl)}})
  except ClientError as err:
    return None
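The password comparison itself is not shown above. A hedged sketch of how it could be done: fetch the stored password from Secrets Manager (the secret name here is hypothetical) and compare it in constant time, which avoids leaking information through timing differences:

```python
import hmac

def passwordsMatch(supplied, stored):
    """Constant-time comparison of the supplied and stored passwords."""
    return hmac.compare_digest(supplied.encode(), stored.encode())

# Fetching the stored value might look like this (secret name is an assumption):
# secret = boto3.client('secretsmanager').get_secret_value(
#     SecretId='my-serverless-website-passwords')
# stored = json.loads(secret['SecretString'])[username]
```

So `passwordsMatch('makeitso', 'makeitso')` is true, while `passwordsMatch('makeitso', 'KHAAAN!!!')` is false.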

A neat feature of DynamoDB is Time to Live which allows you to set when an item in your table expires. DynamoDB will automatically remove the item after this time (though not immediately). You can view the table in the DynamoDB console to see existing sessions.

The login API returns the session id to the client which is then saved to a cookie and the home (“/”) page is requested. This is when the home API is invoked. In this API, we get the userName from the session id by looking up the “my-serverless-website-sessions” DynamoDB table. If an item exists we then query the “my-serverless-website-logs” DynamoDB table for this user and return the results to the client for display.
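The logs lookup described above could be expressed as a small helper that builds the DynamoDB query parameters — a sketch, assuming `userName` is the partition key of the logs table:

```python
def logsQuery(userName):
    """Build DynamoDB query parameters for one user's log entries."""
    return {
        'TableName': 'my-serverless-website-logs',
        'KeyConditionExpression': 'userName = :u',
        'ExpressionAttributeValues': {':u': {'S': userName}},
    }

# The home API would then run something like:
# results = boto3.client('dynamodb').query(**logsQuery(userName))
```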

As mentioned, the TTL feature of DynamoDB does not instantly remove items after they expire so our code should really verify that a session id is still valid. This logic is not present in the sample code.
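One way to add that check is to compare the item's `ttl` attribute (epoch seconds, as written by `setSessionId` above) against the current time — a minimal sketch:

```python
import time

def sessionStillValid(item, now=None):
    """Return True if a session item has not passed its TTL.

    DynamoDB's TTL sweep can lag expiry, so expired items may still be
    returned by a lookup; verify the attribute ourselves.
    """
    if now is None:
        now = int(time.time())
    return int(item['ttl']['N']) > now
```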

Handling Errors

By default, CloudFront will look up whatever the user requests in S3. So if a user requests “/foo”, CloudFront will try to find a key named “foo”; if it is not found, a rather ugly error is displayed to the user. To avoid this, we can enable a CloudFront feature that responds with a custom path whenever certain error codes would be returned. This setting is enabled in the Terraform code:

resource "aws_cloudfront_distribution" "my-serverless-website" {
  custom_error_response {
    error_code         = "403"
    response_code      = "403"
    response_page_path = "/error.html"
  }
}

So now when errors occur we get a much prettier output:


If you have deployed my Terraform and serverless code to your environment, you can clean it up with the following steps:

$ cd terraform
$ terraform destroy
$ cd ../serverless
$ sls remove --accountid 123456789012

Note: you might get an error when destroying the Terraform stack. Lambda@Edge resources can take some time (as much as 24 hours) before they are fully removed, so you may need to re-run terraform destroy a day or two later.


That’s it! I hope this very simple example has demonstrated the capabilities of a serverless architecture.
