Get Inside Unbounce

Dependable Dependencies Versus the House of Cards

At Unbounce we love open source technologies. We build our awesome stuff on them and we contribute back as well. Knowing that a typical open source library itself depends on a bunch of other libraries, our applications end up being built on trees of dependencies. This could be a daunting thought: how can we build confidence that this doesn’t turn into a house of cards? I’m going to detail our approach in this short post.

Consider the image below: it’s the dependency tree of our new and upcoming page server application.

Page server dependencies

As you can see, there are quite a few direct and transitive dependencies involved in this product, and a single bad apple could spoil it all. We could be tempted to stay on versions of these dependencies that we know work great and never upgrade. But that would not be wise: we would miss out on potential improvements, as well as fixes for issues we may not have hit yet but that are lurking in the dark. In fact, for critical dependencies like Netty and Hazelcast, we proactively track the latest releases as much as possible.

Testing

Testing is the only pragmatic way to find out whether a dependency upgrade has broken anything. This sounds obvious; everyone tests, right? Sure, but we do it systematically and thoroughly after every change to any of our dependencies.

Unsurprisingly, we have a battery of unit tests that exercise the code base. On top of that, we have an extra layer of integration tests that poke at the application as a whole. This first barrage of tests is already enough to catch the low-hanging fruit among the regressions an upgrade could introduce. But it's not enough on its own.
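To give a feel for the shape of those integration tests, here's a minimal sketch in Python (our page server is a JVM application, so this stub is purely illustrative): it spins up a throwaway HTTP server and pokes it from the outside, the way an integration test hits the running application rather than calling into its internals.

```python
import threading
import urllib.request
from http.server import HTTPServer, BaseHTTPRequestHandler

# Hypothetical stand-in for the page server, just to show the shape of an
# integration test that talks to the application over HTTP.
class StubHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.end_headers()
        self.wfile.write(b"<html>ok</html>")

    def log_message(self, *args):
        pass  # keep test output quiet

# Port 0 lets the OS pick a free port; serve in a background thread.
server = HTTPServer(("127.0.0.1", 0), StubHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

with urllib.request.urlopen(f"http://127.0.0.1:{server.server_port}/") as resp:
    body = resp.read()
    assert resp.status == 200 and b"ok" in body

server.shutdown()
```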

We also systematically run a load test against a two-instance cluster of page servers. We've built the load test so that it exercises the main features of the page server instead of hitting just one URL. We randomize requests that produce variable outputs in order to verify that responses actually correlate with their requests. Indeed, a server failing catastrophically could well be serving responses for the wrong requests.
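One way to check that correlation — a sketch, not our actual JMeter setup — is to tag every request with a unique marker that the server is expected to echo back, then verify that each response carries the marker of the request that produced it:

```python
import random
import uuid

def build_request(paths):
    """Pick a random page and tag the request with a unique marker."""
    return {"path": random.choice(paths), "marker": uuid.uuid4().hex}

def fake_page_server(request):
    # Stand-in for the real server: renders a page embedding the marker.
    return f"<html><!-- {request['marker']} --> page {request['path']}</html>"

def response_matches(request, response):
    """Correlate response to request: its marker must appear in the body."""
    return request["marker"] in response

req = build_request(["/a", "/b", "/c"])
assert response_matches(req, fake_page_server(req))
```

If a misbehaving server swapped responses between two in-flight requests, the marker check would fail even though each response looks valid on its own.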

For us, it's essential to run this load test against a cluster: since we use a distributed cache, it lets us detect serialization issues that simply don't occur in non-clustered mode. In the load test, we randomize which server is hit for each round of requests so that we get a mix of local and distributed cache hits.
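Picking the target server per round can be as simple as this sketch (the hostnames are made up):

```python
import random

# Hypothetical cluster members; the real hostnames differ.
SERVERS = ["http://page-server-1:8080", "http://page-server-2:8080"]

def target_for_round(rng=random):
    """Pick one server for a whole round of requests.

    Because any given page may already be cached on either node, rounds
    end up mixing local cache hits with hits served through the
    distributed cache."""
    return rng.choice(SERVERS)

round_target = target_for_round()
assert round_target in SERVERS
```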

After the load test is done, we follow up with a round of memory profiling on each node to uncover any memory leak the dependency change could have introduced. It also tells us whether a library's memory usage has changed. Having this knowledge upfront is extremely valuable prior to a production release: picture the memory graphs of your servers suddenly climbing without a good reason for it…
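The idea behind that profiling step can be illustrated with Python's built-in tracemalloc (the real profiling is done against the JVM with a heap profiler, so this is only an analogy): take a heap snapshot before and after a simulated load run, then look at what grew.

```python
import tracemalloc

tracemalloc.start()
baseline = tracemalloc.take_snapshot()

# Simulate a leak: objects retained across "requests" instead of released.
leak = []
for _ in range(1000):
    leak.append("x" * 1024)

after = tracemalloc.take_snapshot()

# Differences are sorted with the biggest change first, so the top entry
# should point straight at the leaky allocation site.
top = after.compare_to(baseline, "lineno")[0]
assert top.size_diff > 0
```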

So far, this strategy has served us well: we've been able to catch issues, report them upstream, and hold off on upgrading until a release that fixed them came out. What's your strategy for upgrading dependencies? Any story you'd like to share with us?