Deploying CV Partner

The CV Partner application is made up of many parts that are changing all of the time. Dozens of times a day, the tech team will change how some part of the application works, without any of our users noticing. How do we do it?

“When you do things right, people won’t be sure you’ve done anything at all.” —Futurama

Amazon Web Services

At the heart of our technical infrastructure is AWS, or Amazon Web Services. If you’re already familiar with AWS feel free to skip to the next section.

AWS is a massive suite of tools businesses and individuals can use to run things on the web. Need a server running 24/7 in professionally managed datacenters? EC2 gives you just that. Need a database to store user data? RDS, DynamoDB and DocumentDB have your back. You only pay for what you use, the pricing is transparent, and setting up each of these things takes less time than reading this article.

CV Partner runs entirely on AWS. But as we grow, and the product grows, we use more and more AWS services, and they become harder to manage. Making sure we have a handle on our infrastructure, and can change it with confidence, is important.

Managing Infrastructure with CloudFormation

This section is going to be a whirlwind introduction to what’s possible with CloudFormation. If you already understand CloudFormation, aren’t that interested, or are satisfied with the description “it lets you create infrastructure in AWS from configuration files,” feel free to skip ahead to “How we use CloudFormation in practice.”

CloudFormation allows you to describe your infrastructure in configuration files called “templates.” Uploading a template to the CloudFormation dashboard turns it into a CloudFormation “stack”: all of the things described in the template, actually running for real in AWS.

An example might help make this clearer.


AWSTemplateFormatVersion: "2010-09-09"
Resources:
  MyDB:
    Type: 'AWS::RDS::DBInstance'
    Properties:
      DBName: MyDatabase
      AllocatedStorage: '5'
      DBInstanceClass: db.t2.small
      Engine: MySQL
      MasterUsername: myName
      MasterUserPassword: myPassword
Outputs:
  JDBCConnectionString:
    Description: JDBC connection string for the database
    Export:
      Name: JDBCConnectionString
    Value: !Sub
      - "jdbc:mysql://${addr}:${port}/MyDatabase"
      - addr: !GetAtt MyDB.Endpoint.Address
        port: !GetAtt MyDB.Endpoint.Port


This sets up a MySQL server with 5 gigabytes of storage on a small instance, and it “outputs” a JDBC connection string that we can then import into another CloudFormation stack. These exported outputs are how related bits of infrastructure tie together.

I save this template to a file, go to the CloudFormation dashboard, and create a new stack. 6 minutes and 32 seconds later, I have a functioning MySQL database waiting for me.
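
If you’d rather script this than click around the dashboard, the AWS CLI can create the same stack; the file and stack names here are just illustrative:

aws cloudformation create-stack \
  --stack-name my-database \
  --template-body file://database.yaml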


And we can see the JDBC connection string in the outputs.
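
The same outputs can also be read back with the CLI, again assuming the illustrative stack name above:

aws cloudformation describe-stacks \
  --stack-name my-database \
  --query "Stacks[0].Outputs"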


Now we can fire up an EC2 instance (which is just a server running in an AWS datacenter) that can read from that database.


AWSTemplateFormatVersion: "2010-09-09"  
Resources:  
  WebInstance:
    Type: AWS::EC2::Instance
    Properties:
      InstanceType: t2.small
      ImageId: ami-0713f98de93617bb4
      UserData:
        Fn::Base64: !Sub
          - | 
             #!/bin/bash
             export DATABASE_URL=${DATABASE_URL}
             echo $DATABASE_URL
          - DATABASE_URL: !ImportValue JDBCConnectionString

This gives us a small server running an “AMI” called ami-0713f98de93617bb4. On startup it runs a script that sets the DATABASE_URL environment variable to the JDBC connection string of the database we just brought up, and prints it out. In reality, this script is where we would start our web server or similar.


Cool, right? If we wanted to increase the capacity of our web server, we could change the InstanceType to, say, t2.medium, which gives us more CPU and RAM. CloudFormation handles applying the change for us; when a change requires a replacement server, it brings up the new one, runs our script on it, and cleans up the old one.

Running your own code with CloudFormation

I glossed over an important detail in the previous section: what was that weird AMI ID?

That specific ID is one provided by Amazon for their “Amazon Linux 2” base image. This is a Linux installation that is kept up to date with security patches, has a bunch of sensible defaults, and has some nice AWS integrations built into it. But it has no CV Partner code on it, which is a problem for us: it means we can’t run our application on it.

To get CV Partner code into an AMI, we need to build a custom AMI ourselves, upload it to AWS, and somehow get that ID into our CloudFormation templates. Unfortunately, the tools CloudFormation offers us to help with this are limited. It’s possible to “parameterise” bits of your CloudFormation template, for example:


AWSTemplateFormatVersion: "2010-09-09"
Parameters:
  AMI:
    Type: AWS::EC2::Image::Id
 
Resources:  
  WebInstance:
    Type: AWS::EC2::Instance
    Properties:
      InstanceType: t2.nano
      ImageId: !Ref AMI
      UserData:
        Fn::Base64: !Sub
          - | 
             #!/bin/bash
             export DATABASE_URL=${DATABASE_URL}
             echo $DATABASE_URL
          - DATABASE_URL: !ImportValue JDBCConnectionString

Now the AMI is a parameter to the template, and when we upload it to CloudFormation we are presented with a screen that asks us to supply this parameter.


But manually doing this every time we make a change is cumbersome and error-prone. AMI IDs aren’t exactly easy to read, and copy-pasting the wrong value could be disastrous.
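
Scripting the parameter with the CLI removes the copy-paste step, but you still have to get the right AMI ID from somewhere (the ID below is made up):

aws cloudformation deploy \
  --stack-name web \
  --template-file web.yaml \
  --parameter-overrides AMI=ami-0123456789abcdef0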

How we use CloudFormation in practice

We, like many other companies, have wrapped our use of CloudFormation in our own custom-built tool we call “the deployer.” When we set out, we had the following goals:

  • Deploying should be automatic, but manually triggered, ideally a single click or command.
  • Rolling back should be just as easy as deploying.
  • Deployments should not cause any user-visible interruptions.
  • Deployment configuration should mostly be a one-time, set-and-forget kind of thing.

Building

Before we can deploy something, we first need to build it. We use Jenkins for this, and for building AMIs we use Packer. A build is triggered every time a branch is pushed to GitHub, where we keep all of our code.

The output of our build process, as far as the deployer is concerned, is two files: a CloudFormation template, and a file with the parameters to feed into that template. The parameters are in a simple JSON format.

One of the steps in the build process will build the AMI with Packer and then put the resulting AMI ID into the parameters file. These files are then pushed to an S3 bucket that the deployer has access to.
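
As a sketch, that parameters file might look something like the standard CloudFormation parameter-file format below; the exact shape of ours isn’t important, and the AMI ID here is made up:

[
  { "ParameterKey": "AMI", "ParameterValue": "ami-0123456789abcdef0" }
]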

Deploying

Armed with the CloudFormation template and its parameters, the deployer has all of the information it needs to either create or update a CloudFormation stack. Using the AWS API, it issues the necessary commands to deploy our software.
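
The deployer itself is custom code, but the heart of what it does maps onto two CloudFormation API calls: CreateStack for a stack that doesn’t exist yet, and UpdateStack for one that does. Sketched as the equivalent CLI commands, with illustrative bucket and stack names:

# First deployment of a stack:
aws cloudformation create-stack \
  --stack-name my-service \
  --template-url https://my-bucket.s3.amazonaws.com/template.yaml \
  --parameters file://parameters.json

# Every deployment after that:
aws cloudformation update-stack \
  --stack-name my-service \
  --template-url https://my-bucket.s3.amazonaws.com/template.yaml \
  --parameters file://parameters.json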

How does this actually look, though?



This is the interface for the deployer. It has a lot of parts to it.

  • In the top left you can see it’s part of a wider “admin tool,” more on that in a future post.
  • We have a big red “pause” button for situations where we want to pause all deployments while we investigate something.
  • On the left is a list of projects. All of these will have their own build process and CloudFormation stacks that can be deployed.
  • In the top right is a “Jenkins Builds” panel, which shows in-progress builds. Handy if you’re waiting for one to finish and don’t want to open up another tab.
  • Near the bottom you can see a “GBL-DEPLOYER” card. This represents a CloudFormation stack (yes, the deployer does indeed deploy itself).

To do a deployment, you would select the “deploy config” using the selection box currently displaying deployer-master-158. All builds are named with their project, branch, and build ID. When you’ve selected the build you want to deploy, all you need to do then is hit the big blue “deploy” button.

An in-progress deployment keeps you up to date on what’s happening, and can be cancelled if necessary.


Environments

You may have noticed the tabs underneath the deploy config: “DEV,” “STG,” “PRD,” and “GBL.” These are our “environments,” and each one refers to a different AWS account we own, keeping data separate. When we make a change, we deploy it progressively through our environments to make sure it works before it reaches production (PRD). This gives us confidence that our changes aren’t going to break functionality for our users.

Rollbacks

Because each build outputs all of its infrastructure and parameters, rolling back is just a case of deploying a build that’s older than the current one. The deployer doesn’t care about forward or backward; it just replaces one set of infrastructure with another in the most interruption-free way possible.

While it works for us, this approach does have some gotchas.

  • If your old CloudFormation templates depend on imported values from other stacks that no longer exist, the rollback will fail and the stack will go back to whatever state it was in prior to attempting the rollback.
  • If you roll back far enough it could be that other parts of your system were depending on new features that are now gone. You have to rely on monitoring in other parts of your system to pick up when this happens.

Conclusion

We use a combination of Jenkins, Packer, CloudFormation and some home-grown tools to make dozens of changes a day in a way that’s safe and doesn’t interrupt our users.

It wasn’t mentioned in the post, but the deployer isn’t limited to building AMIs and deploying to EC2 instances. We also make heavy use of Lambdas where possible, as well as having a few CloudFront distributions kicking around. When your unit of deployment is a CloudFormation template, you can deploy almost anything in AWS.
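
For example, a minimal Lambda function is just another resource in a template. The sketch below isn’t our actual configuration, but it shows the idea:

AWSTemplateFormatVersion: "2010-09-09"
Resources:
  FunctionRole:
    Type: AWS::IAM::Role
    Properties:
      AssumeRolePolicyDocument:
        Version: "2012-10-17"
        Statement:
          - Effect: Allow
            Principal:
              Service: lambda.amazonaws.com
            Action: sts:AssumeRole
      ManagedPolicyArns:
        - arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole
  HelloFunction:
    Type: AWS::Lambda::Function
    Properties:
      Runtime: python3.12
      Handler: index.handler
      Role: !GetAtt FunctionRole.Arn
      Code:
        ZipFile: |
          def handler(event, context):
              return "Hello from CloudFormation"

Deploying it is the same create-or-update dance as every other stack.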

Learn more by contacting CV Partner