In this post you can see what my current pipeline for an AWS Serverless Application looks like.

Pipeline

Pieces of the Pipeline

Below you can find AWS services that I’m using:

  • CodeCommit - for storing the code,
  • CodeBuild - for building the artifacts and (!) deploying the application,
  • CodeDeploy - used for linear / canary releases of lambdas,
  • CodePipeline - to wrap all the above pieces together,
  • CloudWatch - for holding and viewing the logs of each of the above stages, as well as of the lambdas,
  • AWS Lambda - which represents the actual pieces of business logic,
  • API Gateway - for invoking the AWS Lambdas over HTTP,
  • S3 - stores per-lambda source code,
  • Route53 - holds the configuration of subdomains,
  • ACM - for SSL certificates for the domains used in Route53,
  • DynamoDB - as the data storage for the application,
  • CloudFormation - responsible for the configuration of the deployment at every stage (local, dev, prod),
  • IAM - holds roles for the pipeline, like the one for CodeBuild (including privileges to access S3, invoke API Gateway, access DynamoDB, etc.)

Some of the above services are auto-configured by the Serverless Framework and I don’t really play with them too much.

During day-to-day development I use Lambda, API Gateway and DynamoDB. I like it as it lets me focus on business logic and entrypoints rather than the infrastructure.
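To give an idea of what such a piece of business logic looks like, here is a minimal sketch of a handler backed by DynamoDB. The get_cars_handler entrypoint and the TABLE_FUEL environment variable come from the serverless.yml shown later in this post; the handler body itself is a simplified assumption, not the actual code:

import json
import os

import boto3

# The table name is injected per stage via the TABLE_FUEL environment variable (see serverless.yml).
table = boto3.resource("dynamodb").Table(os.environ["TABLE_FUEL"])


def get_cars_handler(event, context):
    # Simplified example: return everything from the table.
    items = table.scan().get("Items", [])
    return {
        "statusCode": 200,
        "body": json.dumps(items, default=str),
    }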

Stages

I use 3 stages in my project:

  • local - it’s like a “per-developer” AWS environment, mainly used for single-change tests,
  • dev - it’s a test environment where all changes will be deployed and, after approval, promoted to production,
  • prod - no need to explain this one, huh? ;-)

Each of the stages is assigned a separate domain in API Gateway (done using the serverless-domain-manager plugin):

  • local.yourapp.com,
  • dev.yourapp.com,
  • prod.yourapp.com.

Each of the stages also has its own dedicated DynamoDB tables (described in the previous post):

  • hq-serverless-tableName-local,
  • hq-serverless-tableName-dev,
  • hq-serverless-tableName-prod.

Changes go first to the local stage (sls deploy --stage local), which uses real AWS resources but can be used for your daily tests.

I use this environment only for manual deployments – it effectively represents the “localhost” environment of my daily work.

Build Process

Whenever I push my changes to the remote branch (CodeCommit), the whole pipeline (CodePipeline) is triggered. CodeBuild takes a look at buildspec.yml and uses it to build the project. In my case it looks like this:

version: 0.2

phases:
  install:
    runtime-versions:
      nodejs: 10
      python: 3.7
    commands:
      - npm install -g [email protected]
      - npm install serverless-domain-manager --save-dev
      - npm install serverless-python-requirements
      - npm install serverless-plugin-canary-deployments --save-dev
  pre_build:
    commands:
      - pip3 install --upgrade virtualenv pytest awscli boto3 botocore moto
      - python3 -m pytest
  build:
    commands:
      - mkdir -p target/dev target/prod
      - sls package --package target/dev --stage dev
      - sls package --package target/prod --stage prod
artifacts:
  files:
    - target/**/*
    - serverless.yml  # this is just to overcome the Serverless requirement for deployment from a package - it's not really used
    - deploy.sh

As you can see, I’m using the nodejs and python runtimes. The install phase installs the Serverless Framework with all the required plugins, while the pre_build phase installs the required Python libraries and runs the tests.
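The tests run in the pre_build phase are plain pytest tests, and the ones touching DynamoDB can run against moto instead of a real table. Below is a minimal sketch of such a test, assuming a handler like the one sketched earlier and moto’s mock_dynamodb2 decorator (newer moto versions expose a different API, e.g. mock_aws):

import json
import os

import boto3
from moto import mock_dynamodb2


@mock_dynamodb2
def test_get_cars_returns_seeded_items():
    # Fake credentials/region so boto3 is happy inside the mocked environment.
    os.environ.setdefault("AWS_ACCESS_KEY_ID", "testing")
    os.environ.setdefault("AWS_SECRET_ACCESS_KEY", "testing")
    os.environ.setdefault("AWS_DEFAULT_REGION", "eu-central-1")
    os.environ["TABLE_FUEL"] = "hq-serverless-fuel-local"

    # Create the mocked table with the same key schema as in serverless.yml and seed one item.
    dynamodb = boto3.resource("dynamodb", region_name="eu-central-1")
    dynamodb.create_table(
        TableName=os.environ["TABLE_FUEL"],
        AttributeDefinitions=[{"AttributeName": "ID", "AttributeType": "S"}],
        KeySchema=[{"AttributeName": "ID", "KeyType": "HASH"}],
        BillingMode="PAY_PER_REQUEST",
    ).put_item(Item={"ID": "car-1", "model": "Mustang"})

    # Import inside the test so the handler creates its DynamoDB resource against the mock.
    from src.boundary.handler import get_cars_handler

    response = get_cars_handler({}, None)

    assert response["statusCode"] == 200
    assert json.loads(response["body"]) == [{"ID": "car-1", "model": "Mustang"}]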

When the tests pass, CodeBuild moves on to the actual “build”, which is the package phase of the Serverless Framework. It generates all the required CloudFormation templates, config files, packaged Lambdas, etc. and puts them in the target/[dev|prod] directory.

Deployment Process

After the build process is done we can go to “deploy”, which in this case is done by CodeBuild as well. It invokes a custom Bash script that does the actual deployment. I couldn’t get CodeDeploy to work nicely with my setup, so I reverted to CodeBuild as it was just working fine. I was heavily influenced by this great article from 1Strategy.

The deployment phase requires only the nodejs runtime (for the Serverless Framework) and invokes the following script:

#!/bin/bash

npm install -g [email protected]
npm install serverless-domain-manager --save-dev
npm install serverless-python-requirements
npm install serverless-plugin-canary-deployments --save-dev
npm install serverless-offline --save

artifacts_location="$CODEBUILD_SRC_DIR/target/$stage"

echo "Starting deploy for stage: $stage with files taken from $artifacts_location"

sls deploy --stage $stage --package $artifacts_location

The $stage variable represents the stage of the deployment (dev or prod).
We need to install all the required Serverless Framework libs and can then continue with sls deploy, passing --package $artifacts_location so that it picks up the results of the previous CodeBuild phase instead of packaging everything again.

For the dev stage there is no need for any canary / linear releases - I just deploy everything at once to get it there as soon as possible.

After I’m happy with the dev stage (clicking through dev.yourapp.com, running some tests, etc.), the pipeline waits for manual approval. After the approval, it automatically starts deploying the application to prod.yourapp.com.
You can consider removing the manual-approval step if you’d like to have continuous deployment to prod. In my case, however, I wanted to have this intermediate step before hitting prod.
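The approval itself can be given in the CodePipeline console, but it can also be scripted. Here is a sketch using boto3 - the pipeline, stage and action names below are placeholders, not the real ones from my setup:

import boto3

codepipeline = boto3.client("codepipeline")

# Placeholder names - use the ones from your own pipeline definition.
PIPELINE, STAGE, ACTION = "hq-serverless-pipeline", "ApproveProd", "ManualApproval"

# Find the token of the currently pending approval action.
state = codepipeline.get_pipeline_state(name=PIPELINE)
stage_state = next(s for s in state["stageStates"] if s["stageName"] == STAGE)
action_state = next(a for a in stage_state["actionStates"] if a["actionName"] == ACTION)
token = action_state["latestExecution"]["token"]

# Approve it, which lets the pipeline continue with the prod deployment.
codepipeline.put_approval_result(
    pipelineName=PIPELINE,
    stageName=STAGE,
    actionName=ACTION,
    result={"summary": "dev stage verified manually", "status": "Approved"},
    token=token,
)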

Production

Deployment to production is done with the same deploy.sh as shown previously, but this time $stage is set to prod.

What is done specifically for this stage is a linear deployment: “10% every 1 minute”. It means that the traffic is shifted to the new version of the lambdas in 10% batches every minute, so after 10 minutes 100% of the traffic is using the latest deployed version.
This is configured with serverless-plugin-canary-deployments.

You can assign multiple lambdas to be deployed using the same strategy by assigning them the same deploymentSettings.alias.

My sample config looks like this:

service: hq-serverless

provider:
  name: aws
  runtime: python3.7
  region: eu-central-1
  memorySize: 128
  stage: ${opt:stage, 'local'}
  environment:
    TABLE_FUEL: "${self:custom.tableFuel}"

functions:
  hello:
    handler: src.boundary.handler.hello
    events:
      - http: get /
    deploymentSettings:
      alias: Live
  get-cars:
    handler: src.boundary.handler.get_cars_handler
    events:
      - http: get /cars
    deploymentSettings:
      alias: Live

resources:
  Resources:
    HQFuel:
      Type: AWS::DynamoDB::Table
      Properties:
        TableName: "${self:custom.tableFuel}"
        AttributeDefinitions:
          - AttributeName: ID
            AttributeType: S
        KeySchema:
          - AttributeName: ID
            KeyType: HASH
        BillingMode: PAY_PER_REQUEST

custom:
  tableFuel: "${self:service}-fuel-${self:provider.stage}"
  domain:
    dev: dev.yourapp.com
    prod: prod.yourapp.com
    local: local.yourapp.com
  customDomain:
    domainName:  ${self:custom.domain.${self:provider.stage}}
    certificateName: '*.yourapp.com'
    stage: ${self:provider.stage}
    createRoute53Record: true
  deploymentSettings:
    stages:
      - prod
    type: Linear10PercentEvery1Minute
    alias: Live

plugins:
"serverless-domain-manager"
"serverless-plugin-canary-deployments"
"serverless-offline"

You need to consider that a deployment with a “safer” approach like “Linear 10% Every 10 Minutes” means that all the traffic will only be shifted after 100 minutes.
If you’re just starting out with your project this might be overkill. All possible deployment approaches are described here.

Also remember that such a long deployment will result in AWS charging you for the whole deployment time.

I must say that I’ve introduced this “Linear 10% every 1 minute” out of curiosity, so I don’t have any alerts configured right now.
To do it properly you’d need to configure them, so either:

  • implement code that is invoked as a hook and verifies whether the deployment is correct or not (as sketched below), or
  • define which exception/error is a signal that the deployment should be rolled back and the old versions of the lambdas reverted.
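serverless-plugin-canary-deployments supports pre/post-traffic hooks and CloudWatch alarms under deploymentSettings (the preTrafficHook, postTrafficHook and alarms keys). A minimal sketch of what such a pre-traffic hook Lambda could look like - the function name and the smoke test are hypothetical, only the CodeDeploy call reporting the hook result is the standard mechanism:

import boto3

codedeploy = boto3.client("codedeploy")


def pre_traffic_hook(event, context):
    # CodeDeploy passes these IDs so the hook can report its verdict back.
    deployment_id = event["DeploymentId"]
    hook_execution_id = event["LifecycleEventHookExecutionId"]

    # Hypothetical smoke test - e.g. invoke the new lambda version or hit a health-check endpoint.
    status = "Succeeded" if smoke_test_passed() else "Failed"

    codedeploy.put_lifecycle_event_hook_execution_status(
        deploymentId=deployment_id,
        lifecycleEventHookExecutionId=hook_execution_id,
        status=status,
    )


def smoke_test_passed():
    # Placeholder for the actual verification logic.
    return True

Registering such a hook function and a list of CloudWatch alarms in deploymentSettings would let CodeDeploy stop and roll back the traffic shift when the hook fails or an alarm fires.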

Summary

The above description is the result of reading multiple articles, watching videos and testing how it all works. This is what I’m using right now and will be trying out in the longer run.

Hope this will help someone. In case you’d like to know more details, feel free to drop me a line - I definitely don’t know all the possible configurations, but we can always learn some stuff together!