Deploying CV Partner
The CV Partner application is made up of many parts that are changing all the time. Dozens of times a day, the tech team will change how part of the application works without any of our users noticing. How do we do it?
“When you do things right, people won’t be sure you’ve done anything at all.” —Futurama
At the heart of our technical infrastructure is Amazon Web Services (AWS). If you’re already familiar with AWS, feel free to skip to the next section.
AWS is a massive suite of tools businesses and individuals can use to run things on the web. Need a server running 24/7 in professionally managed datacenters? EC2 gives you just that. Need a database to store user data? RDS, DynamoDB and DocumentDB have your back. You only pay for what you use, the pricing is transparent, and setting up each of these things takes less time than reading this article.
CV Partner runs entirely on AWS. But as we and the product grow, we use more and more AWS products, and our infrastructure becomes harder to manage. Making sure we have a handle on it, and can change it with confidence, is important.
This section is going to be a whirlwind introduction to what’s possible with CloudFormation. If you already understand CloudFormation, aren’t that interested, or are satisfied with the description “it lets you create infrastructure in AWS from configuration files,” feel free to skip ahead to "How we use CloudFormation in practice".
CloudFormation allows you to describe your infrastructure in configuration files called “templates.” These templates can be turned into a CloudFormation “stack,” which is all of the things in the template but actually running for real in AWS, by uploading them to the CloudFormation dashboard.
An example might help make this clearer.
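The original template isn’t reproduced here, but a minimal sketch of a template along these lines might look as follows (the resource name, `admin` username, password parameter, and `mydb` database name are illustrative assumptions):

```yaml
AWSTemplateFormatVersion: "2010-09-09"
Description: A small MySQL database

Parameters:
  MasterPassword:
    Type: String
    NoEcho: true

Resources:
  Database:
    Type: AWS::RDS::DBInstance
    Properties:
      Engine: mysql
      DBInstanceClass: db.t2.small
      AllocatedStorage: "5"
      MasterUsername: admin
      MasterUserPassword: !Ref MasterPassword

Outputs:
  JDBCConnectionString:
    Description: JDBC connection string for the database
    Value: !Join
      - ""
      - - "jdbc:mysql://"
        - !GetAtt Database.Endpoint.Address
        - ":"
        - !GetAtt Database.Endpoint.Port
        - "/mydb"
```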
This sets up a MySQL server with 5 gigabytes of storage space on a small server, and it “outputs” a JDBC connection string that we could then pass in to another CloudFormation stack. These outputs are how related bits of infrastructure tie together.
I save this template to a file, go to the CloudFormation dashboard, and create a new stack. 6 minutes and 32 seconds later, I have a functioning MySQL database waiting for me.
And we can see the JDBC connection string in the outputs.
Now we can fire up an EC2 instance, which is just a server running in an AWS datacenter, that can read from that database.
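A sketch of such a template might look like this (the `DATABASE_URL` variable name and the `JDBCConnectionString` parameter name are assumptions for illustration):

```yaml
AWSTemplateFormatVersion: "2010-09-09"
Description: A small web server

Parameters:
  JDBCConnectionString:
    Type: String

Resources:
  WebServer:
    Type: AWS::EC2::Instance
    Properties:
      ImageId: ami-0713f98de93617bb4
      InstanceType: t2.small
      UserData:
        Fn::Base64: !Sub |
          #!/bin/bash
          export DATABASE_URL="${JDBCConnectionString}"
          echo "$DATABASE_URL"
```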
This gives us a small server running an “AMI” called ami-0713f98de93617bb4, and on startup it runs a script that exports an environment variable containing the JDBC connection string of the database we just brought up, then prints that string out. In reality we would put the code for starting our web server in this script.
Cool, right? If we wanted to increase the capacity of our web server, we could change the InstanceType to be, say, t2.medium, which gives us more CPUs and more RAM. CloudFormation handles bringing up the new server, running our script on it, and cleaning up the old one.
I glossed over an important detail in the previous section: what was that weird AMI ID?
That specific ID is one provided by Amazon for their “Amazon Linux 2” base image. This is a Linux installation that is kept up to date with security patches, has a bunch of sensible defaults, and has some nice AWS integrations built in to it. But it has no CV Partner code on it, and that is a problem for us, because it means we can’t run our application code with it.
To get CV Partner code into an AMI, we need to build a custom AMI ourselves, upload it to AWS, and somehow get that ID into our CloudFormation templates. Unfortunately, the tools CloudFormation offers us to help with this are limited. It’s possible to “parameterise” bits of your CloudFormation template, for example:
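A parameterised template might look something like this (the `AmiId` parameter name is an assumption):

```yaml
Parameters:
  AmiId:
    Type: AWS::EC2::Image::Id
    Description: The AMI to boot our instance from

Resources:
  WebServer:
    Type: AWS::EC2::Instance
    Properties:
      ImageId: !Ref AmiId
      InstanceType: t2.small
```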
Now the AMI is a parameter to the template, and when we upload it to CloudFormation we are presented with a screen that asks us to supply this parameter.
But manually doing this every time we make a change is cumbersome and error prone. AMI IDs aren’t exactly easy to read, and copy-pasting the wrong value could be disastrous.
We, like many other companies, have wrapped our use of CloudFormation in our own custom built tool we call “the deployer.” When we set out we had the following goals:
Before we can deploy something, we first need to build it. We use Jenkins for this, and for building AMIs we use Packer. Builds are triggered every time a branch on GitHub, where we keep all of our code, is pushed.
The output of our build process, as far as the deployer is concerned, is two files: a CloudFormation template, and a file with the parameters to feed into that template. The parameters are in a simple JSON format.
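The exact shape of the parameters file isn’t shown here, but a “simple JSON format” could plausibly be a flat object mapping parameter names to values, along these lines (keys and values are illustrative):

```json
{
  "AmiId": "ami-0713f98de93617bb4",
  "InstanceType": "t2.small"
}
```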
One of the steps in the build process will build the AMI with Packer and then put the resulting AMI ID into the parameters file. These files are then pushed to an S3 bucket that the deployer has access to.
Armed with the CloudFormation template and its parameters, the deployer has all of the information it needs to either create or update a CloudFormation stack. Using the AWS API, it issues the necessary commands to deploy our software.
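The deployer’s internals aren’t shown in the post, but its core loop can be sketched in a few lines of Python. This is a hypothetical reconstruction, assuming a flat JSON parameters file and a boto3-style CloudFormation client passed in from outside: convert the parameters into the shape the CloudFormation API expects, then create the stack if it doesn’t exist yet, otherwise update it.

```python
def to_cfn_parameters(params: dict) -> list:
    """Convert a flat {"Key": "Value"} parameters file into the
    list-of-dicts shape the CloudFormation API expects."""
    return [
        {"ParameterKey": key, "ParameterValue": value}
        for key, value in params.items()
    ]


def deploy(cfn_client, stack_name: str, template_body: str, params: dict):
    """Create the stack if it doesn't exist yet, otherwise update it in place.
    `cfn_client` is assumed to be a boto3 CloudFormation client."""
    kwargs = {
        "StackName": stack_name,
        "TemplateBody": template_body,
        "Parameters": to_cfn_parameters(params),
    }
    existing = cfn_client.describe_stacks()["Stacks"]
    if any(stack["StackName"] == stack_name for stack in existing):
        cfn_client.update_stack(**kwargs)
    else:
        cfn_client.create_stack(**kwargs)
```

Injecting the client rather than constructing it inside `deploy` keeps the create-or-update decision easy to exercise without touching AWS.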
How does this actually look, though?
This is the interface for the deployer. It has a lot of parts to it.
To do a deployment, you select the “deploy config” using the selection box currently displaying deployer-master-158. All builds are named with their project, branch, and build ID. When you’ve selected the build you want to deploy, all you need to do is hit the big blue “deploy” button.
You may have noticed the tabs underneath the deploy config. “DEV”, “STG”, “PRD”, “GBL.” These are our “environments” and each one refers to a different AWS account we own to keep data separate. You can read more about that here. When we make a change, we deploy it progressively through our environments to make sure it works before deploying it into production (PRD). This helps us have confidence that our changes aren’t going to break functionality for our users.
Because each build outputs all of its infrastructure and parameters, rolling back is just a case of deploying a build that’s older than the current one. The deployer doesn’t care about forward or backward, it just replaces one set of infrastructure with another in the most interrupt-free way possible.
While it works for us, this approach does have some gotchas.
We use a combination of Jenkins, Packer, CloudFormation and some home-grown tools to make dozens of changes a day in a way that’s safe and doesn’t interrupt our users.
It wasn’t mentioned in the post, but the deployer isn’t limited to building AMIs and deploying to EC2 instances. We also make heavy use of Lambdas where possible, as well as having a few CloudFront distributions kicking around. When your unit of deployment is a CloudFormation template, you can deploy almost anything in AWS.