Pipelines are cool, right?...

Delivery pipelines: when you think "pipeline" you usually think "I put something in one end and get something out of the other". True, but it's what happens whilst in the "pipe" that's the exciting bit. You can add as many or as few gateways/checkpoints/hurdles as you like — whatever you feel is necessary to ensure the quality and dependability of your software. Maybe you have multiple pipelines, or just one. There isn't a one-size-fits-all pattern, which is great. The more innovative ideas the better.

In this blog I will cover how we created a pipeline for 'Continuous Delivery', and the various steps, technologies and methodologies involved.

Some of the bits covered are Chef (linting, testing), the SPK (AMI creation), AWS CloudFormation and Selenium testing. I've added links so that you can read about and digest those technologies in your own time. This blog will not cover what they do; the assumption is that you have a basic understanding of each of them.

So where does it all start? Typically there has been some form of change to our Chef code: some bug fix or new feature that needs to be tested out in the wilds of the 'net. Our build agent checks out the code and runs some checks before moving on to the next stage. These are the steps taken as part of our "pre-AMI build" checks:

  1. Foodcritic: This tool runs 'linting' checks on our Chef cookbooks and reports back any anomalies. Very useful for maintaining high and consistent standards across all of our cookbooks and repos.
  2. RuboCop: A static Ruby code analyser. Chef is based on Ruby, so checking for Ruby quality is equally important. Our custom modules/libraries, which are 100% Ruby, also need to be checked.
  3. Unit Tests: We use RSpec to run a bunch of unit tests against the recipes we have written. Some people feel this step is overkill. Not me; the more checks the better. What I like about unit-testing Chef recipes is the contractual obligation between the test and the recipe. The test expects the recipe to contain certain steps and will fail if those steps are removed. The downside is that if new steps are added to the recipe and not covered by the unit tests, the "contract" becomes weaker. But that's where good old code reviews come into play. Don't be afraid to push back and ask for tests! A coverage report is also generated as part of this step and published as an HTML artefact.
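The "contract" idea in step 3 can be sketched in plain Ruby. This is a toy model, not real ChefSpec — the recipe is just a list of resource names (all invented) — but it shows how removing a tested step breaks the contract while an untested new step slips through:

```ruby
# Toy model of the test/recipe contract. In a real cookbook this would be
# a ChefSpec spec converging the recipe; resource names here are invented.
recipe = [
  'package[nginx]',
  'template[/etc/nginx/nginx.conf]',
  'service[nginx]'
]

# The unit test "expects the recipe to contain certain steps and will
# fail if those steps are removed":
expected = ['package[nginx]', 'service[nginx]']
missing  = expected - recipe
raise "Contract broken, missing: #{missing.join(', ')}" unless missing.empty?
puts 'contract holds'

# The weakness described above: a new, untested step passes silently,
# so the contract says nothing about it.
recipe << 'cookbook_file[/etc/motd]'
raise 'Contract broken' unless (expected - recipe).empty? # still passes
puts 'new step is not covered by the contract'
```

Deleting `'service[nginx]'` from the recipe makes the first check raise, which is exactly the failure you want a pipeline to surface.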

Great! The checks passed! So now we can move on to building our AMI. For this we use an in-house tool called the SPK (link above). We have pre-defined templates that dictate the Chef properties, attributes and Packer configuration needed to build our image. Once the AMI has been created, we take a snapshot and return the AMI ID to our build agent.

This is where it gets interesting....

Within our repo we also have the CloudFormation template, which contains (amongst other things) a mapping of the AMIs available in specific regions. Right now, once the AMI ID has been returned from AWS, we scrape the SPK log files, grab the ID, parse the template and update the AMI ID. This triggers a commit on that repo that another build job is watching and waiting for. It will only fire IF a change has been made to the template file, not the Chef code. This part could probably be replaced with a Lambda that returns the latest AMI ID.
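The scrape-and-update step might look something like the sketch below. The SPK log line, the region, and the template's `Mappings` layout are all assumptions for illustration, not the team's actual formats:

```ruby
require 'json'

# Stand-in for an SPK/Packer log file (format assumed for illustration).
log = <<~LOG
  ==> amazon-ebs: AMIs were created:
  us-east-1: ami-0abc1234def567890
LOG

# Grab the first AMI ID in the log (AMI IDs are 'ami-' plus 8 or 17 hex chars).
ami_id = log[/ami-[0-9a-f]{8,17}/]

# Stand-in for the CloudFormation template's region-to-AMI mapping.
template = JSON.parse(<<~TMPL)
  {
    "Mappings": {
      "RegionMap": {
        "us-east-1": { "AMI": "ami-00000000" }
      }
    }
  }
TMPL

# Update the mapping and write the template back; committing this change
# is what wakes up the watching build job.
template['Mappings']['RegionMap']['us-east-1']['AMI'] = ami_id
File.write('template.json', JSON.pretty_generate(template))
puts "template updated to #{ami_id}"
```

A Lambda-based replacement would sidestep the log scraping entirely by querying EC2 for the newest AMI, but the parse-and-rewrite shape stays the same.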

The next build steps actually build the CloudFormation stack on AWS. This is triggered using the AWS command line by passing in the template and any parameters needed. One of the outputs of the template is the publicly accessible URL of the ELB in that stack. Once this is available and all the relevant services have started, we use that URL to run a selection of Selenium tests. If the tests pass, we use the AWS command line again to delete the stack. Once this has returned the expected status, the pipeline is complete.
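Pulling the ELB URL out of the stack outputs is a small JSON-parsing job. In the real pipeline the JSON would come from `aws cloudformation describe-stacks`; here it is a hand-written stand-in, and the stack name and output key are assumptions:

```ruby
require 'json'

# Hand-written stand-in for `aws cloudformation describe-stacks` output
# (stack name and output key are illustrative, not the team's real ones).
describe_stacks = JSON.parse(<<~OUT)
  {
    "Stacks": [
      {
        "StackName": "app-stack",
        "StackStatus": "CREATE_COMPLETE",
        "Outputs": [
          {
            "OutputKey": "ElbUrl",
            "OutputValue": "http://app-123456.eu-west-1.elb.amazonaws.com"
          }
        ]
      }
    ]
  }
OUT

stack = describe_stacks['Stacks'].first
raise "stack not ready: #{stack['StackStatus']}" unless stack['StackStatus'] == 'CREATE_COMPLETE'

# Find the ELB URL among the stack outputs; this is what the Selenium
# suite is pointed at before the stack is deleted again.
elb_url = stack['Outputs']
               .find { |o| o['OutputKey'] == 'ElbUrl' }
               .fetch('OutputValue')
puts elb_url
```

Polling until `StackStatus` reaches `CREATE_COMPLETE` (or a failure state) is the part the build agent loops on before handing the URL to the tests.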

There are lots more opportunities for improvement here; once the stack is up and running, we could run more thorough tests such as load testing, stress testing and security testing (the list is endless). We also have a whole bunch of integration tests written using the ServerSpec DSL, which we have configured to output their results as JUnit-formatted XML. Perfect for pipelines that can read these file types, and perfect for reporting build-over-time statistics. These need to be fed into the pipeline.
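What a pipeline does with those JUnit files is essentially read the testsuite counters. A minimal sketch using Ruby's stdlib XML parser — the XML here is a hand-written example, not real ServerSpec output:

```ruby
require 'rexml/document'

# Minimal hand-written JUnit-style result file (real ServerSpec output,
# via a JUnit formatter, has the same testsuite/testcase shape).
xml = <<~XML
  <testsuite name="serverspec" tests="3" failures="1" time="0.42">
    <testcase classname="Port 80" name="should be listening"/>
    <testcase classname="Service nginx" name="should be running"/>
    <testcase classname="File /etc/motd" name="should exist">
      <failure message="expected file to exist"/>
    </testcase>
  </testsuite>
XML

# Read the suite-level counters -- the numbers a pipeline trends over time.
suite    = REXML::Document.new(xml).root
tests    = suite.attributes['tests'].to_i
failures = suite.attributes['failures'].to_i
puts "#{tests - failures}/#{tests} integration tests passed"
```

Because the format is a de facto standard, most CI servers will chart these counts per build with no extra work.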

This pipeline currently gives us the confidence that:

  • The Chef code is sound and well tested.
  • The CloudFormation template is valid and repeatably deliverable.
  • We are able to log in to our delivered app and run some automated tests.
  • We can delete the stack rapidly to reduce costs.

Thanks for reading! Any comments and feedback are always welcome!