Git Driven Release Management

Like many engineering teams, here at Traackr we like to streamline our workflow to spend as much time as possible building cool features. We also like to release early and often. And our product team likes being able to answer pesky questions like "has feature X been deployed?" or "there was a regression with feature Y, what release was that included in?". At times, these requirements can be a give and take. Using an issue tracker helps answer questions about when things were released, but navigating an application like Jira is a context switch. Plus, it's easy to click the wrong buttons. Releasing often means these context switches and mistakes happen frequently. Can we do better?

Some of our teams release more often than others.

We think so, which led us to look at what we like about our tooling. One great thing about using Jira and Bitbucket is the way they seamlessly link commits to issues when you include a ticket ID in your commit message. Because of this, we've standardized on a "<Ticket ID> <Message>" commit message format. We enforce this standard using a commit hook. This got us thinking... If we can link Jira tickets to commit messages, can we also map our git tags to Jira fix versions? It turns out the answer is yes!
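For illustration, a commit-msg hook for that kind of check can be as small as the sketch below (this is not our actual hook, and the ticket pattern is just an example):

#!/bin/sh
# .git/hooks/commit-msg -- reject commit messages that don't start with a ticket ID.
# Git passes the path of the file holding the proposed commit message as $1.
if ! grep -qE '^[A-Z]+-[0-9]+[ :-]' "$1"; then
  echo "Commit message must start with a Jira ticket ID, e.g. \"ABC-123 <Message>\"" >&2
  exit 1
fi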

Out of the box integration is nice, but sometimes you need to do it yourself.

The prototype

There were two main problems to solve. How do you identify a range of commits associated with a release? How do you interact with Jira to tag the issues with a new fix version? Enter release_fix_versioner.py.

At Traackr, we use a git-flow tagging and branching strategy; this is very convenient because for every release we already have a tag to reference. By providing our script with a new git tag, we can grab all of the commits between this tag and the previous one, which are exactly the commits that make up the release. Given a startTag and an endTag, git has a handy command, "git log --format=%s startTag...endTag", to print the commit messages in an easy-to-parse format. From there, it's a simple matter of parsing the commits to come up with a unique set of issue IDs. To support a variety of commit formats across different teams in our organization, the parsing is done with a regular expression that uses named groups to identify the key and message: "(?P<key>[\w]*-[\d]*)[ :-](?P<value>.*)".
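As a rough sketch, the commit-parsing side boils down to a few lines of Python (the function and tag names here are made up for the example; the real script is linked at the end of the post):

import re
import subprocess

# Named groups pull the Jira issue key and the message out of each commit subject.
COMMIT_RE = re.compile(r"(?P<key>[\w]*-[\d]*)[ :-](?P<value>.*)")

def issue_keys_for_release(start_tag, end_tag):
    """Return the unique set of Jira issue keys referenced between two release tags."""
    # One commit subject per line, e.g. "ABC-123 Fix the widget"
    subjects = subprocess.check_output(
        ["git", "log", "--format=%s", "{0}...{1}".format(start_tag, end_tag)],
        universal_newlines=True,
    ).splitlines()
    keys = set()
    for subject in subjects:
        match = COMMIT_RE.match(subject)
        if match:
            keys.add(match.group("key"))
    return keys

# For example, issue_keys_for_release("1.4.0", "1.5.0") might return {"ABC-123", "ABC-456"}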

Interacting with Jira turns out to be simple as well; the interface is extensive and does anything you could ask for. To start with, we decided to add some validation to each ticket prior to dealing with tags. We check things like making sure the work is finished by verifying the state equals "Done". This method is really all there is to it:
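A minimal sketch of that kind of check, assuming the python jira client (the real script may talk to Jira's API differently):

from jira import JIRA  # pip install jira

# jira_client = JIRA(server="https://<your-jira-host>", basic_auth=("<user>", "<api-token>"))

def is_releasable(jira_client, issue_key):
    """Return True if the ticket is finished and safe to tag with a fix version."""
    issue = jira_client.issue(issue_key)
    status = issue.fields.status.name
    if status != "Done":
        print("{0} is not Done yet (current status: {1})".format(issue_key, status))
        return False
    return True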

After creating two more similar methods to create a new fix version and add that fix version to each ticket, we've got ourselves a prototype:
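Those two steps might look roughly like the following sketch (same assumed client as above; the helper name is made up):

def tag_release(jira_client, project_key, version_name, issue_keys):
    """Create the release's fix version and add it to every ticket in the release."""
    jira_client.create_version(name=version_name, project=project_key)
    for key in issue_keys:
        issue = jira_client.issue(key)
        # Keep any fix versions already on the ticket and append the new one.
        fix_versions = [{"name": v.name} for v in issue.fields.fixVersions]
        fix_versions.append({"name": version_name})
        issue.update(fields={"fixVersions": fix_versions})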

Next Steps

Now that we have this script, we can start using it to simplify manual issue management. If it works out, we'll include it as an automated step in our one-button deploy job, and we'll never have to worry about Jira being out of date again. This could even be used to generate release notes for our public-facing applications.

All the code for this hackweek project is available on GitHub. Give it a try and let us know how it goes!


Terraform. Useful tricks.

We started using Terraform a few months ago as a way to create consistency and repeatability in the way we manage our infrastructure. Terraform is pretty new and not completely mature. While we are still learning how best to use it, we have picked up a few useful tricks along the way. I was going to call these best practices, but I don't think we have been practitioners long enough to really know if these are even good practices :-) Hopefully by sharing some of our learnings here, people can pick up a few things or tell us what we are doing wrong.

Use of Make

While Terraform is a single binary and is as easy as terraform plan or terraform apply to run, you need a better strategy once your infrastructure grows beyond a few machines: you will probably want to break it down into smaller logical chunks. Also, Terraform is pretty finicky about which directory it is called from -- in part because of the way it loads files and in part because of where it looks for its state file. It quickly makes sense to use something to wrap the work in tasks. You could use ant, gradle or grunt, but those would add more dependencies to your project. So, back to basics with make. Makefiles for managing tasks (and dependencies between tasks) have been around for a long time, and make is available on pretty much any Unix-based platform (including macOS).
Depending on how you decide to organize your infrastructure, you can create tasks to manage its different parts as simply as the following (a minimal Makefile sketch appears after the list):

  • make prod or make qa
  • make app or make api

Break down our infrastructure by services and environments

While in theory it might be nice to think you can manage your entire infrastructure with a single setup and potentially a single command, practice proves otherwise. You will quickly want to logically separate your infrastructure setup. For one, it reduces the number of files you have to manage at once. It also makes life easier when something goes wrong: you don't want a single corrupted state file to prevent you from managing your entire infrastructure, or, even worse, a bad command taking down part or all of it.

So far, we have decided to break down our Terraform project by services (app, API, etc.) and within services by environment (production, staging, etc.). With each service being independent and often managed by different teams, this was an obvious choice. The breakdown by environment lets us test changes before we apply them to our production infrastructure.
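Concretely, that kind of breakdown gives you a directory tree along these lines (service and environment names here are purely illustrative):

app/
  production/
    main.tf
  staging/
    main.tf
api/
  production/
    main.tf
  staging/
    main.tf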

This approach does have some drawbacks. You will find yourself duplicating quite a bit of code. Modules are here to help, but while we use them in each setup, we haven't explored using global ones yet. Also, if you have pieces of your infrastructure shared across all your services, you won't always be able to programmatically reference them.

Save shared state in S3 using versioning

It is pretty well documented that as soon as you have more than zero people working with Terraform, you will want to centralize your state file. Since we are hosted on AWS, S3 was the obvious choice. With the use of make, you can make sure you always pull the latest state before doing anything:

.PHONY: setup plan apply

setup:
  @echo "Getting state file from S3"
  @terraform remote config -backend=s3 \
    -backend-config="bucket=<bucket-name>" \
    -backend-config="key=<s3-key>" \
    -backend-config="region=<aws-region>"

plan: setup
  @terraform plan
  
apply: setup
  @terraform apply

From time to time, it's possible you will corrupt your state file. And that's no bueno. So, we enabled S3 object versioning on all our Terraform state files. This way, if anything goes wrong, we can always go back to a known stable state.
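Enabling versioning is a one-time operation on the bucket, for example with the AWS CLI (the bucket name is a placeholder):

> aws s3api put-bucket-versioning \
    --bucket <bucket-name> \
    --versioning-configuration Status=Enabled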

Delete shared state between runs

This one is probably not a best practice per se. Because we use multiple AWS accounts (for PROD vs. QA), it's not uncommon for us to run Terraform against one AWS account and then against a different one across multiple targets. On a couple of occasions, this corrupted our state file. Now we delete our local state file around each terraform run: once at the very beginning, in case something was left over from a previous (failed) run, and once at the end when we are done. With the use of make, it's easy to pull the latest state file each time (see above).

.PHONY: setup plan apply

setup:
  @echo "Clean up local state"
  @rm -rf */**/.terraform
  # Other setup
  
plan: setup
  # Stuff to do
  @rm -rf */**/.terraform

apply: setup
  # Stuff to do
  @rm -rf */**/.terraform

Use ${args} to select target

The terraform command offers a few options. One that's particularly useful when doing development is -target=resource. It limits Terraform operations to that particular resource and its dependencies. When you manage a rather large infrastructure, this is useful during development to limit output to something that's easier to read and debug. We integrate it into our Makefile with:

apply: setup
  @terraform apply ${args}

This allows us to call make with:

> make api args="-target=api_loadbalancer"

 

Know more tricks?

As the saying goes, that's it for now folks! Do you know of any other useful Terraform tricks? Drop us a note.

 

If you like this article or this blog, don't forget to like and share it, and follow us at http://devs.traackr.com or https://medium.com/traackr-devs


Raft Visualization

The Why and How of Ansible and Docker

Setting a performance budget - TimKadlec.com

Bup

High Scalability - How HipChat Stores and Indexes Billions of Messages Using ElasticSearch and Redis

11 Best Practices for Low Latency Systems

Migrating databases with zero downtime