Staatus: Exploring the Slack API

Back in April, Slack released a cool new feature that allows you to set a status message on your account. Some of the examples they provide are:

  • In a meeting
  • Commuting
  • Out sick
  • Vacationing
  • Working remotely

"This is awesome!," I thought. "Now I can do what did on AIM in 1997!"

Slack did sprinkle this status in various places in the UI, but it's still a pain to go searching for it, especially if you haven't chatted with a person in a while. Plus, there doesn't seem to be an easy way to see a compiled status for a group (e.g. @engineering-team).

Enter: Staatus

I built Staatus on a PHP framework called Silex. It's a microframework based on Symfony that provides the guts for building simple single-file apps. Here's an example:

require_once __DIR__.'/../vendor/autoload.php';

$app = new Silex\Application();

$app->get('/hello/{name}', function($name) use($app) {
    return 'Hello '.$app->escape($name);
});

$app->run();

Staatus allows you to see a compiled status of an individual, a particular team, or everyone in a channel. The code is over on our GitHub.

I have to hand it to the devs at Slack. The API was super-easy to work with, and I only needed a handful of its REST methods.
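For a flavor of those calls, here is a minimal Python sketch of the two that matter most for a status display: users.info, whose profile object carries the new status_text and status_emoji fields, and users.getPresence for online state. (The plugin itself is written in PHP; the token and user ID below are placeholders.)

import requests

SLACK_API = "https://slack.com/api"
TOKEN = "xoxp-..."     # placeholder OAuth token
USER_ID = "U024BE7LH"  # placeholder Slack user ID

# users.info: the user's profile carries the new status fields
info = requests.get(SLACK_API + "/users.info",
                    params={"token": TOKEN, "user": USER_ID}).json()
profile = info["user"]["profile"]
status = profile.get("status_emoji", "") + " " + profile.get("status_text", "")

# users.getPresence: tells us whether to show the online marker
presence = requests.get(SLACK_API + "/users.getPresence",
                        params={"token": TOKEN, "user": USER_ID}).json()
online = presence.get("presence") == "active"

print(status + (" (online)" if online else ""))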

With that, I was able to display not only status emojis and messages, but also a blue diamond when the person is online.

The Code

Coding this up was very straightforward. The only quirk I ran into is that the API calls can take too long, and Slack does not like that: if your plugin doesn't respond within a few seconds, the user gets an error message. My plugin is hosted on Heroku, so between the lag of Heroku waking up instances and the time it takes to run all the API calls (and there can be many, especially when Staatus is run against a team name), I needed to figure out a way to return a delayed response. Normally, in a single-threaded language like PHP, that's not easy. Most frameworks follow a very "serial" manner of generating a response:

  1. A request comes in (possibly with parameters).
  2. A response is generated.
  3. The response is returned and the request dies.

So if step 2 takes too long, how do you work around Slack's response time limit? The answer lies in Silex's $app->finish() method, which is documented as:

A finish application middleware allows you to execute tasks after the Response has been sent to the client (like sending emails or logging)

Here's how I used it:

$process = function(Request $request) use($app) {
    // first, validate token
    $token = $request->get('token');
    if (empty($token) || $token !== VERIFY_TOKEN) {
        $app->abort(403, "Invalid token.");
    }

    $response = [
        'response_type' => 'ephemeral',
        'text' => 'Gathering staatus...'
    ];

    // just return; the rest of the processing will happen in finish()
    return $app->json($response);
};

$app->get('/', $process); // for testing
$app->post('/', $process);

$app->finish(function (Request $request, Response $response) use($app) {
    ...
});

As you can see, my $process method returns immediately, and the rest of the processing happens in finish(). This allows me to take as long as I'd like (give or take) to send the real response back to Slack using a response URL they provide. (Technically, you can't take as long as you'd like to respond: Slack has a hard limit of "up to 5 times within 30 minutes.") The delayed response itself looks like this:

$response = [
    'response_type' => 'ephemeral',
    'text' => $text
];

// Client here is Guzzle's HTTP client (GuzzleHttp\Client)
$client2 = new Client(['base_uri' => $responseUrl]);
$response2 = $client2->post('', [
    'json' => $response,
    'verify' => false
]);

And that's all, folks. So, go ahead and download Staatus, deploy it on Heroku, and you'll be partying like it's 1997 all over again. BOOYAH!


Traackr :hearts: Instagram

As we continue to support the influencer marketing programs of our clients, we have found it imperative to dig as much as we can out of influencer content created on Instagram. In addition to consuming post captions and metrics for engagement reports, Traackr now gives users the ability to harvest the photo tags on influencer posts and make them searchable!!! :)

Ready? Check it out.
Let’s look at the Traackr profile of the most decorated Olympian in history:

Michael Phelps' Traackr influencer profile.

Now, let’s find a post that has photo tags on it. Oh look... here’s a good one (they seem to be having a lot of fun, eh?):

Michael Phelps' Instagram post viewed from his Traackr influencer profile.

Now let’s search for one of those tags. I like @casamigos too… you read my mind :wink: :wink: Voilà!

Searching photo tags through Traackr.

“But… I thought keyword queries only matched the actual content of the post. All I see is ‘Boys will be boys!!! #meldman’. How did you do it?”

Well… here’s an oversimplified explanation of how it works.

Making photo tags searchable at Traackr.

So, at step (1) we get the post from Instagram’s public API. At step (2), we store it in our search engine, and at step (3) we query for “@casamigos”. And yeah... you guessed it! The real magic happens at step (2).

When we’re indexing content, we append the photo tags to a secret field that keyword queries are executed against. But what we surface in the UI is a field that contains the original post “Boys will be boys!!! #meldman”.

Since you’re such a curious cat, here’s an excerpt of what the record looks like in our search engine:

{
  "_type": "post",
  "_source": {
    "postUrl": "http:\/\/instagram.com\/p\/BTIKhMygox-",
    "rootDomain": "instagram",
    "author": "m_phelps00",  
    "postContent": "Boys will be boys!!! #meldman @randegerber @makenagbc @jjd7007 @stuttsydlc @casamigos @romanstutts",
    "postDisplay": "Boys will be boys!!! #meldman"
  }
}

Hint: See the difference between postContent and postDisplay?
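To see the trick from the query side, here is a rough sketch assuming Elasticsearch (which the _type/_source shape above suggests) and the elasticsearch-py client; the index name is made up, but the field names mirror the excerpt:

from elasticsearch import Elasticsearch

es = Elasticsearch()  # placeholder connection; "posts" below is an assumed index name

results = es.search(index="posts", body={
    # match against the hidden field that includes the photo tags...
    "query": {"match": {"postContent": "@casamigos"}},
    # ...but only pull back the fields we actually show in the UI
    "_source": ["postUrl", "postDisplay"],
})

for hit in results["hits"]["hits"]:
    print(hit["_source"]["postDisplay"], hit["_source"]["postUrl"])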

So why do we do this?

We want to do more than just track Instagram content to compute the metrics that allow us to analyze an influencer’s reach and resonance. We believe that, given Instagram’s ever-growing reach and business impact, the more value we derive from its content, the more meaningful the insights we can provide to organizations that invest in the practice of influencer marketing.

As it pertains to this particular feature, our customers are able to get more insight into brand mentions by using our platform to match mentions in photo tags and get further analytics and reports on them :)


Git Driven Release Management

Like many engineering teams, here at Traackr we like to streamline our workflow to spend as much time as possible building cool features. We also like to release early and often. And our product team likes being able to answer pesky questions like "has feature X been deployed?" or "there was a regression with feature Y, what release was that included in?". At times, these requirements can be a give and take. Using an issue tracker helps answer questions about when things were released, but navigating an application like Jira is a context switch. Plus, it's easy to click the wrong buttons. Releasing often means these context switches and mistakes happen frequently. Can we do better?

Some of our teams release more often than others.

We think so, which led us to look at what we like about our tooling. One great thing about using Jira and Bitbucket is the way they seamlessly link commits to issues when you include a ticket ID in your commit message. Because of this, we've standardized on a "<Ticket ID> <Message>" commit message format, and we enforce the standard with a commit hook (sketched below). This got us thinking... If we can link Jira tickets to commit messages, can we also map our git tags to Jira fix versions? It turns out the answer is yes!
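For illustration, a commit-msg hook enforcing this format can be as small as the following Python sketch (illustrative, not our actual hook):

#!/usr/bin/env python
"""commit-msg hook: reject messages that don't start with a ticket ID."""
import re
import sys

# git passes the path to the commit message file as the first argument
with open(sys.argv[1]) as f:
    message = f.read()

if not re.match(r"[A-Z]+-\d+[ :-]", message):
    sys.stderr.write("Commit message must follow '<Ticket ID> <Message>'\n")
    sys.exit(1)  # a non-zero exit aborts the commit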

Out of the box integration is nice, but sometimes you need to do it yourself.

The prototype

There were two main problems to solve: how do you identify the range of commits associated with a release, and how do you interact with Jira to tag each issue with a new fix version? Enter release_fix_versioner.py.

At Traackr, we use a git-flow tagging and branching strategy; this is very convenient because every release already has a tag to reference. Given a new git tag, we can grab all commits between it and the previous tag, which together represent the new release. With a startTag and endTag, git has a handy command, "git log --format=%s startTag...endTag", to print the list of commit messages in an easy-to-parse format. From there, it's a simple matter of parsing the commits into a unique set of issue IDs. To support a variety of commit formats across different teams in our organization, the parsing is done using a regular expression with named groups to identify the key and message: "(?P<key>[\w]*-[\d]*)[ :-](?P<value>.*)".
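Put together, that part of the script looks roughly like the following condensed Python sketch (illustrative; the real release_fix_versioner.py lives in the repo):

import re
import subprocess

# the same named-group regex described above
COMMIT_RE = re.compile(r"(?P<key>[\w]*-[\d]*)[ :-](?P<value>.*)")

def issue_keys(start_tag, end_tag):
    """Collect the unique Jira issue IDs mentioned between two release tags."""
    log = subprocess.check_output(
        ["git", "log", "--format=%s", start_tag + "..." + end_tag],
        universal_newlines=True)
    keys = set()
    for subject in log.splitlines():
        match = COMMIT_RE.match(subject)
        if match:
            keys.add(match.group("key"))
    return keys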

Interacting with Jira turns out to be simple as well; the API is extensive and does just about anything you could ask for. To start, we decided to add some validation to each ticket prior to dealing with tags. We check things like making sure the work is finished by verifying that the status equals "Done". That check is really all there is to it.
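Sketched with the jira Python library (the server URL and credentials are placeholders), the check looks something like this:

from jira import JIRA

# placeholder server and credentials
jira = JIRA(server="https://example.atlassian.net",
            basic_auth=("release-bot", "api-token"))

def is_done(issue_key):
    """Only finished tickets should receive the new fix version."""
    issue = jira.issue(issue_key)
    return issue.fields.status.name == "Done"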

After creating two more similar methods, one to create a new fix version and one to add that fix version to each ticket, we've got ourselves a prototype.
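Those two methods, again sketched with the jira library (reusing the client from the previous sketch) rather than copied from the script, boil down to something like:

def create_fix_version(project_key, tag_name):
    """Create the Jira fix version that corresponds to the new git tag."""
    return jira.create_version(name=tag_name, project=project_key)

def add_fix_version(issue_key, tag_name):
    """Append the new fix version, keeping any the ticket already has."""
    issue = jira.issue(issue_key)
    versions = [{"name": v.name} for v in issue.fields.fixVersions]
    versions.append({"name": tag_name})
    issue.update(fields={"fixVersions": versions})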

Next Steps

Now that we have this script, we can start using it to simplify manual issue management. If it works out, we'll include it as an automated step in our one-button deploy job, and we'll never have to worry about Jira being out of date again. This could even be used to generate release notes for our public-facing applications.

All the code for this hackweek project is available on GitHub. Give it a try and let us know how it went!


Great Scott! HackWeek 5/1 Recap

Prologue

It’s 6am on a Thursday morning, and I’m realizing that the feature I built last night to demonstrate Vue.js’s two-way data binding has regressed, thanks to some last-minute code reorganization. I thought I had done a good thing, but I had ignored my instinct to quit while I was ahead, and did just “one last thing”.

So I put on a pot of coffee and spent the next few hours fixing what I had broken (so that my demo wouldn’t fall flat on its face in front of my esteemed colleagues). I also tried to figure out the best way to tell, in 360 seconds, the week-long and tearful story of carving out a path to refactoring an important yet three-year-old feature of our app, in a way that employs the best of Vue.js without throwing the jQuery baby out with the bathwater. Why? Because it’s our Hack Week Demo Day!

Hack Weeks @ Traackr

Once a quarter, we take a collective pause from roadmap-focused development and carve out an entire week for the engineering team to work on projects of their own. It started out as a monthly hack day, but we quickly realized that a single day was not enough for any of us to really flesh out, let alone implement, our ideas. Experimentation needs time and iteration if it is to materialize into something solid; we needed more than a day to experiment, fail, retry, and repeat! So we decided to hack less frequently but for a much longer period of time, and thus the Quarterly Hackathon was born.

The 5/1 Hackweek Recap

This particular Hack Week arrived shortly after we completed a project that the entire engineering team had been working on for the better part of six months: a set of tremendous improvements and new functionality added to the Analytics Suite of the Traackr App, altogether named Integrated Influencer Intelligence (or I3).

The temptation after such a sizable release is to use this unstructured time to make all those small improvements to what we just released, versus spending the time looking at new problems with fresh eyes. This time around, most of the team spent time on fairly new ideas, with some folks improving on existing ones. By the time the week ended, we had a whole collection of demos and proofs of concept ranging from DevOps dashboards to Slack integrations to security enhancements, just to name a few. But like most Hack Weeks at Traackr, it was a great time to shift gears, recharge, and inspire new ideas, not just for our team but for the company as a whole.

So, without further ado, let’s dig into some of the Traackr team’s Hack Week projects, in the gallery below:

Our 5/1 Hackweek Projects Gallery

The gallery below only gives high-level info on the projects. More in-depth articles about our Hack Week are still to come!


Terraform. Useful tricks.

We started using Terraform a few months ago as a way to create consistency and repeatability in the way we manage our infrastructure. Terraform is pretty new and not completely mature, and while we are still learning how best to use it, we have picked up a few useful tricks along the way. I was going to call these best practices, but I don't think we have been practitioners long enough to really know if these are even good practices :-) Hopefully by sharing some of our learnings here, people can pick up a few things or tell us what we are doing wrong.

Use of Make

While Terraform is a single binary and is as easy to run as terraform plan or terraform apply, you need a better strategy for anything bigger than a few machines. Once your infrastructure outgrows that, you will probably want to break it down into smaller logical chunks. Also, Terraform is pretty finicky about what directory it needs to be called from, in part because of the way it loads files and in part because of where it looks for its state file. It quickly makes sense to use something to wrap the work in tasks. You could use ant, gradle, or grunt, but these would add more dependencies to your project. So, back to basics with make. Makefiles have been used to manage tasks (and dependencies between tasks) for a long time, and make is available on pretty much any Unix-based platform (and that includes MacOS).
Depending on how you decide to organize your infrastructure, you can create tasks to manage its different parts as simply as:

  • make prod or make qa
  • make app or make api

Break down your infrastructure by services and environments

While in theory it might be nice to think you can manage your entire infrastructure with a single setup and potentially a single command, practice proves otherwise. You will quickly want to separate your infrastructure setup logically. For one, it reduces the number of files you have to manage. It also limits the damage if something goes bad: you don't want a single corruption of the state file to prevent you from managing your entire infrastructure, or even worse, a bad command taking down part or all of it.

So far, we have decided to break down our Terraform project by service (app, API, etc.) and, within each service, by environment (production, staging, etc.). With each service being independent and often managed by a different team, this was an obvious choice. The breakdown by environment lets us test changes before we apply them to our production infrastructure.

This approach does have some drawbacks. You will find yourself duplicating quite a bit of code. Modules are here to help, but while we use them in each setup, we haven't explored using global ones yet. Also, if you have pieces of your infrastructure shared across all your services, you won't always be able to programmatically reference them.

Save shared state in S3 using versioning

It is pretty well documented that as soon as you have more than zero people working with Terraform, you will want to centralize your state file. Since we are hosted on AWS, S3 was the obvious choice. With the use of make, you can ensure you always pull the latest state before doing anything:

.PHONY: setup plan apply

setup:
  @echo "Getting state file from S3"
  @terraform remote config -backend=s3 \
    -backend-config="bucket=<bucket-name>" \
    -backend-config="key=<s3-key)" \
    -backend-config="region=<aws-region>"

plan: setup
  @terraform plan
  
apply: setup
  @terraform apply

From time to time, it's possible you will corrupt your state file. And that's no bueno. So, we enabled S3 object versioning on the bucket that holds our Terraform state files. This way, if anything goes wrong, we can always go back to a known stable state.
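Enabling versioning is a one-time call per bucket. Here is a boto3 sketch (the bucket name is a placeholder; the AWS console or CLI works just as well):

import boto3

s3 = boto3.client("s3")
s3.put_bucket_versioning(
    Bucket="my-terraform-state",  # placeholder bucket name
    VersioningConfiguration={"Status": "Enabled"},
)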

Delete shared state between runs

This one is probably not a best practice per se. Because we use multiple AWS accounts (PROD vs. QA), it's not uncommon for us to run Terraform against one AWS account and then a different account across multiple targets. On a couple of occasions, this caused corruption of our state file. Now, we delete our local state file with each Terraform run: once at the very beginning, in case something was left over from a previous (failed) run, and once at the end when we are done. With the use of make, it's easy to pull the latest state file each time (see above).

.PHONY: setup plan apply

setup:
  @echo "Clean up local state"
  @rm -rf */**/.terraform
  # Other setup
  
plan: setup
  # Stuff to do
  @rm -rf */**/.terraform

apply: setup
  # Stuff to do
  @rm -rf */**/.terraform

Use ${args} to select target

The terraform command offers a few options. One that's particularly useful during development is -target=resource, which limits Terraform operations to that particular resource and its dependencies. When you manage a rather large infrastructure, this helps limit output to something that's easier to read and debug. We integrate it into our Makefile with:

apply: setup
  @terraform apply ${args}

This allows us to call make with:

> make api args="-target=api_loadbalancer"

 

Know more tricks?

As the saying goes, that's it for now folks! Do you know of any other useful Terraform tricks? Drop us a note.

 

If you like this article or this blog, don't forget to like and share it, and follow us at http://devs.traackr.com or https://medium.com/traackr-devs