Recently we launched a tool called Oskar to track happiness in our remote team. It has been a great experiment: not only a lot of fun for the team, but also an interesting challenge on the technical side of things.
In this post, we want to have a look behind the scenes of Oskar, show you the different technologies involved in creating it, and last but not least, explain why we chose each of them.
A minimum viable plan
As a first step we created a simple Mural.ly board in order to get a high-level overview of the project’s different parts. This allowed me to think about the following questions:
- Which components will the app consist of?
- How will they interact with each other?
- What’s the easiest and fastest way to set up a simple prototype?
I decided to build the backend with node.js and MongoDB, two technologies that allow for a very lean and speedy prototyping process without too much setup hassle. That’s especially true since the Slack client library is written in node.js as well.
For the frontend (a lightweight dashboard with a few graphs), I could then use the existing setup, and add express.js to the stack to easily handle routes and HTTP requests, passing it the relevant pieces of information from the MongoDB storage.
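To give a feel for what that involves: the dashboard route essentially reads status records from MongoDB and shapes them for the graphs. Here’s a minimal sketch of that shaping step in plain JavaScript (the app itself is written in CoffeeScript; all names here are hypothetical, and a plain array stands in for the MongoDB query result):

```javascript
// Hypothetical sketch: turn raw status records (as they might come back
// from a MongoDB query) into per-user averages for the dashboard graphs.
function averageStatusPerUser(records) {
  const totals = {};
  for (const { user, status } of records) {
    if (!totals[user]) totals[user] = { sum: 0, count: 0 };
    totals[user].sum += status;
    totals[user].count += 1;
  }
  const averages = {};
  for (const user of Object.keys(totals)) {
    averages[user] = totals[user].sum / totals[user].count;
  }
  return averages;
}

// In the real app, an express.js route would hand this result to the
// dashboard, e.g. res.json(averageStatusPerUser(records)).
```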
How I quickly became a fan of CoffeeScript
Being familiar with languages and frameworks is one thing. But using them in a real project (especially when you haven’t used them in almost a year) is a completely different story.
The learning curve of asynchronous unit testing
Since we’re planning to release Oskar publicly in the near future, I wanted to create a solid codebase with plenty of unit tests, to make sure the program behaves as expected and can be maintained and refactored without breaking.
For this purpose I set up lots of unit tests for each of the modules: the database abstraction; how Oskar interacts with Slack; and some helpers to verify user input and time conversion (since it may be used across very different time zones).
If you haven’t done any unit testing before, you’ll quickly run into a dilemma: should I test every single method of an object? Doing so means you often spend a lot of time writing tests, start to consider it a huge overhead, and end up giving up. After doing some research on Stack Overflow and Quora I came to the conclusion that it really depends on what you want to test. Ideally you take as a guideline the idea of testing public methods only, but allow for exceptions where needed. Common sense trumps the rule here.
Another task I struggled with at first was testing those asynchronous methods that return values or promises after a short delay.
So I started wrapping my head around sinon.js and began using spies and stubs where required, both of which are great ways of mocking existing code, wrapping existing functions and injecting them as dependencies into your module:
- I used spies when evaluating the results and parameters of a particular function
- And stubs where I had to mock its behaviour and create predictable return values instead.
This turned out to be a huge win, as for each module that I tested, I could eliminate the dependencies that it had on others via mocking, and write tests against one module at a time, in isolation.
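sinon.js offers far more than this, but the core idea behind both tools can be sketched by hand in a few lines (a simplified illustration of the concept, not sinon’s actual API):

```javascript
// Minimal hand-rolled versions of the two tools described above.
// A spy wraps a real function and records every call made to it;
// a stub replaces a function entirely with a predictable return value.
function spy(fn) {
  function wrapped(...args) {
    wrapped.calls.push(args);
    return fn(...args);
  }
  wrapped.calls = [];
  return wrapped;
}

function stub(returnValue) {
  return spy(() => returnValue);
}

// Injecting a stubbed "database" into the module under test removes
// its dependency on a real MongoDB instance, so the module can be
// tested in isolation.
const db = { getLatestStatus: stub(Promise.resolve(4)) };
db.getLatestStatus('anna').then((status) => {
  // status is always the predictable value 4
});
```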
Continuous deployment = Continuous enjoyment
In the spirit of clean and maintainable open source code I was also aiming to have a smooth continuous deployment and a solid build process in place before shipping anything to production.
For this purpose I set up a Travis CI account, connected our GitHub repository and configured it to deploy to Heroku if (and only if) all tests were passing.
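If you haven’t wired up Travis CI and Heroku before: the configuration lives in a `.travis.yml` file at the root of the repository. A minimal version might look like the following (the node version and the encrypted API key are placeholders; by default Travis only runs the deploy step when the build passes):

```yaml
language: node_js
node_js:
  - "0.10"
deploy:
  provider: heroku
  api_key:
    secure: <your encrypted Heroku API key>
  on:
    branch: master
```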
It turned out that running the app through Travis CI was a good idea, because it revealed a major shortcoming in the code: tests that passed in my local environment failed elsewhere, because the code relied on my machine’s local timezone.
As a consequence, Oskar didn’t properly recognise the day of the week whenever the offset from GMT was big enough, so team members would be messaged on a Saturday or Sunday even though Oskar wasn’t supposed to do that.
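The fix boils down to never trusting the machine’s local timezone, and instead computing the day of the week in an explicit target timezone. A sketch of that idea using JavaScript’s built-in Intl API (the function names are hypothetical; the real app may handle this differently):

```javascript
// Hypothetical helper: compute the weekday in an explicit timezone
// instead of relying on wherever the server happens to run.
function weekdayIn(date, timeZone) {
  return new Intl.DateTimeFormat('en-US', { timeZone, weekday: 'long' }).format(date);
}

// Oskar shouldn't ping anyone at the weekend -- but "weekend" depends
// on the user's timezone, so it has to be passed in explicitly.
function isWeekend(date, timeZone) {
  const day = weekdayIn(date, timeZone);
  return day === 'Saturday' || day === 'Sunday';
}

// Friday 23:00 GMT is already Saturday in New Zealand,
// while it's still Friday afternoon in Los Angeles.
const fridayLateGmt = new Date(Date.UTC(2015, 0, 9, 23));
isWeekend(fridayLateGmt, 'Pacific/Auckland');     // weekend there
isWeekend(fridayLateGmt, 'America/Los_Angeles');  // still a weekday
```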
Practice what we preach: Iteration
Despite the effort I put into the unit tests and continuous deployment, my goal was never to be hung up on deploying a totally bug-free version of Oskar. And some of our shipmates got seriously upset when it was pinging them at random moments of the day to play number games.
But since we’re seriously into the idea of shipping fast and often, deploying Oskar in his most minimally functioning form was not a bad idea. It allowed us to start collecting feedback at an early stage, where we could tweak and improve the app without too much hassle.
Based on feedback from my team members, I realised that Oskar’s interactions were an area we’d want to iterate on quite a lot. So I encapsulated everything he needed to say inside a function, so that modifying his language would be a breeze.
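As a rough sketch of that idea (all names here are hypothetical), keeping every phrase behind a single lookup means rewording Oskar never touches the bot logic:

```javascript
// Hypothetical sketch: every phrase Oskar uses lives in one place,
// so tweaking his language never touches the bot logic.
const messages = {
  greeting: (name) => `Hey ${name}, how are you feeling today?`,
  thanks: () => 'Thanks for sharing!'
};

function say(key, ...args) {
  return messages[key](...args);
}

// say('greeting', 'Anna') → "Hey Anna, how are you feeling today?"
```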
From idea to execution to open source
As we’re soon going to release Oskar to the public, here’s a short overview of what you’ll need in order to run it in your remote team:
- An active Slackbot + API token
- A Heroku account
- The MongoLab add-on for Heroku
- Optional: node/npm (if you want to run and test it locally)
- Optional: Travis CI
Over the next few weeks, I’ll be tightening up the codebase and documenting the key components so that he’s ready for a release. Just sign up for our newsletter below, or follow us on Twitter if you want to get notified once Oskar has been open sourced!
And of course, drop us a line if you’d like to talk about commissioning us to build your own Chatbot.