Recently we were tasked with investigating the performance of the popular web framework hapi. Performance means a lot of different things, but we wanted to see if we could improve the requests per second hapi could serve in real-world use cases. Real-world means not just “juicing” the stats to improve various benchmark scores, but actually figuring out what kind of bottlenecks appear when using the framework for the kind of application you’d normally write and deploy for a wider audience.
If you do not know hapi, it is a popular web framework for Node.js that frequently pops up in “Best Frameworks” lists across the web. It is always a fun challenge for us to try to improve code used by thousands of users around the world, so we were excited about squeezing more performance out of it.
We do this kind of performance work quite a bit both in our personal open source modules and also for clients. The method usually goes something like this:
More often than not, it’s the same patterns and architectural choices that lower CPU-bound performance in modules, so once you’ve fixed a bunch of these in other modules, it becomes easier to identify them in subsequent projects as well.
When looking at performance in a space such as web frameworks, we are lucky to have a lot of similar libraries to compare against. This makes it easier for us to identify bottlenecks by comparing scores between modules. For example, if other web frameworks such as Fastify or Express serve a simple GET request faster than hapi, it is usually a strong signal that there is a bottleneck waiting to be fixed.
It is worth mentioning that tuning your Node.js code for performance can have pretty serious drawbacks for the readability and maintainability of your codebase. It’s often not worth doing except for “hot paths” in your code, i.e. code that runs *a lot* and represents a bottleneck. Even then, if you performance-tune and end up with a 1-5% improvement, it is usually better to keep the simpler implementation over a complex, slightly faster one. Tuning your code also puts a higher maintenance burden on you, as performance characteristics can change with V8 versions, I/O performance, etc.
With all of this in mind, let’s continue. For simple benchmarks we’re in luck, as plenty already exist for web frameworks in Node.js. These benchmarks are usually a bit synthetic, since they often test how fast each framework can return a simple static JSON object from a GET request. Obviously a web application does many more things than this, but it still gives us an idea of how fast the core of the framework is and lets us gauge the impact of our improvements.
The benchmark we used was the one maintained by the Fastify community. This suite is quite nice as it both provides a simple tool to compare popular frameworks against each other and includes bare “hello world” implementations for each individual framework. For hapi, it is available here. We can get a simple requests-per-second score out of this by spinning it up locally and using autocannon, an HTTP benchmarking tool.
The last command above spins up the benchmark hapi server and, once it is listening on a port, autocannon bombards it with HTTP requests against the path ‘/’ for 10 seconds. When it is done we get a score that looks similar to this:
We can use this score as a baseline to track our improvements.
As mentioned, we like to approach performance improvements as iteratively as possible. It’s easier to see the impact of changes if we test as often as possible; if we change too many things at once, it’s hard to know what worked vs what didn’t.
For a first-pass performance review of a codebase, we look for patterns that we know from experience cause problems and have easy fixes. Some examples of this are:
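To give one illustrative example of a pattern in this family (not taken from hapi’s code): using `delete` on objects in a hot path forces V8 to abandon the object’s hidden class, and the easy fix is to null the field instead:

```javascript
// Illustrative example (not from hapi): `delete` in a hot path de-optimises
// the object's hidden class in V8, while assigning null keeps the shape stable.
function scrubSlow (session) {
  delete session.token // object transitions to a slower representation
  return session
}

function scrubFast (session) {
  session.token = null // same shape, so V8's inline caches stay fast
  return session
}

const slow = scrubSlow({ token: 'abc', user: 'alice' })
const fast = scrubFast({ token: 'abc', user: 'alice' })
```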
This is a good time to remember the disclaimer above, as many of the patterns we look for change with V8 and Node.js versions. Always remember to test each improvement in a benchmark and ask yourself whether the optimisation is really needed.
We also look for overuse of promises as they have been a recurring theme for us in past reviews, but more on that later.
Applying these patterns to the hapi codebase we found some interesting things. Since we’re only interested in optimising hot paths, we focused our attention on the request-response lifecycle code. Here we found some easy wins that are probably surprising if you’re not deeply familiar with the Node.js core codebase.
On every request, hapi would parse the HTTP request path into a URL object using the new URL API in Node.js. Normally this shouldn’t be a problem, but given our background as Node.js collaborators, we knew that the URL object is currently backed by native code. This means an expensive JS-to-native roundtrip on every request. Since a lot of real-world use cases do not need to access all the subcomponents of the URL object, we added lazy parsing of the URL to avoid the roundtrip. Along the same lines, hapi also accessed the remote port and remote IP address of each HTTP connection to expose them to the user, using req.connection.remotePort and req.connection.remoteAddress. Both of these are actually getters that also result in a native roundtrip. Again, since a lot of use cases didn’t require these, we wrapped these properties in getters of our own so the cost is only paid on access.
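The lazy-parsing idea can be sketched like this (hypothetical names, not hapi’s actual implementation):

```javascript
// Sketch of lazy URL parsing: defer the expensive parse (and its native
// roundtrip) until a handler actually reads request.url, then cache it.
class Request {
  constructor (rawPath) {
    this.rawPath = rawPath
    this._url = null
  }

  get url () {
    if (this._url === null) {
      // Only pay for the URL object if someone asks for it.
      this._url = new URL(this.rawPath, 'http://localhost')
    }
    return this._url
  }
}
```

Requests whose handlers never touch `request.url` skip the parse entirely, which is exactly the common case in the hello-world benchmark.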
Using such methodless classes (plain data-holder classes) has the added benefit that the objects are easy to spot when you take a heap snapshot, as they are grouped by class name.
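A methodless class is just a data holder; its only jobs are to give every instance the same shape and a recognisable constructor name (the names below are illustrative):

```javascript
// A "methodless" class: no behaviour, just a stable shape. Instances show
// up grouped under "ConnectionInfo" in a heap snapshot instead of as
// anonymous Objects.
class ConnectionInfo {
  constructor (remoteAddress, remotePort) {
    this.remoteAddress = remoteAddress
    this.remotePort = remotePort
  }
}

const info = new ConnectionInfo('127.0.0.1', 8080)
```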
To test these assumptions we did another benchmark run using autocannon with the improvements enabled:
As we can see, the impact was quite significant with little complexity added to the codebase; all in all, a decent trade-off so far.
The second phase of our analysis was aimed at finding places in the source code where dynamic code was being used. By dynamic code we mean the pattern used by a lot of HTTP frameworks where plugins and the like extend the capabilities of the APIs provided.
A pattern often used with plugin APIs is adding a series of functions to an object in a constructor similar to this:
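A hypothetical sketch (names are illustrative), alongside the prototype-based alternative that makes the per-instance cost disappear:

```javascript
// The common pattern: each plugin's functions are copied onto every
// instance inside the constructor.
class PerInstanceServer {
  constructor (plugins) {
    for (const plugin of plugins) {
      Object.assign(this, plugin) // per-instance copies of every function
    }
  }
}

// Generating a class once per configuration instead moves the functions
// to a shared prototype, so constructing instances costs the same no
// matter how many plugins are registered.
function compileServerClass (plugins) {
  class CompiledServer {}
  for (const plugin of plugins) {
    Object.assign(CompiledServer.prototype, plugin)
  }
  return CompiledServer
}
```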
Personally I quite like this pattern, as it often leads to faster code whilst, in my opinion, not sacrificing readability.
For hapi, we suggested this pattern and implemented a prototype which landed in this commit. This patch made the cost of adding plugins basically zero instead of adding an overhead depending on how many plugins the user would use.
For some specific programs such as parsers, using code generation can speed up execution by many orders of magnitude, so this pattern is always a good one to keep in mind if it can be applied to your own codebase.
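A toy example of the technique (not from hapi): compiling a serialiser for a fixed set of keys with `new Function`, instead of iterating over the object on every call:

```javascript
// Toy code generation: build the serialiser source once, then let V8
// optimise the generated straight-line function. Keys are assumed to be
// plain identifiers from a trusted source.
function compileSerialiser (keys) {
  const parts = keys.map((k) => `'"${k}":' + JSON.stringify(obj.${k})`)
  const body = `return '{' + ${parts.join(" + ',' + ")} + '}'`
  return new Function('obj', body) // eslint-disable-line no-new-func
}

const serialise = compileSerialiser(['hello'])
```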
After applying the above-mentioned fixes we measured a significant performance boost. On our next iteration, however, we stumbled into an interesting problem that turned out to be a bit of a head-scratcher. The main simple benchmark server for hapi looks like this:
When we introduced a simple setTimeout into the above handler, we saw a significant performance degradation.
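The delayed handler presumably looked something like this (an illustrative sketch, not the benchmark’s exact code):

```javascript
// Same static response, but with an artificial 1ms asynchronous delay
// before returning it.
const delayedHandler = async () => {
  await new Promise((resolve) => setTimeout(resolve, 1))
  return { hello: 'world' }
}
```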
When running the above benchmark we’d expect the requests-per-second performance to be more or less unchanged, with perhaps a tiny increase in average request latency due to the 1ms delay on every request.
Instead, we saw a big performance drop of around 50%!
Something was off.
We did a lot of profiling to try to detect why this had such a big impact and, in the end, enabling the --trace-gc flag gave us a strong signal as to what could be wrong. The garbage collection pauses were much longer in the version with the 1ms delay compared to the unchanged one. Additionally, using the Bubbleprof tool from Clinic.js, we saw a lot of promises held in memory for each request.
To get a better understanding of what was happening we ran the benchmarks again with Node.js in debug mode and attached the dev tools. If you haven’t done this before, it’s actually quite easy and a super powerful way to debug your application. Just run node --inspect app.js and open Google Chrome’s dev tools. There you’ll see a Node.js icon, and clicking on it gives you a dev tools session for your Node.js program.
(Heap snapshot of the benchmark with no delay)
Since we were debugging memory usage and garbage collection, we decided to take a series of heap snapshots during the benchmark that we could then compare. It showed a very clear result: in the version with the 1ms delay, thousands of promises were clogging up memory compared to the non-delayed one.
With this in mind, we dug through the source code again and noticed that for each request, many async functions were awaited in the hot path. We know from experience that, although easy to use, async functions can create a significant amount of memory pressure, so when we see them in a hot path we consider it a strong signal.
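The effect is easy to see in miniature: calling an async function allocates at least one fresh promise per call, even when nothing in the body is actually asynchronous:

```javascript
// Nothing asynchronous happens here, yet every call still allocates a
// promise (plus the machinery for any `await` inside the body), all of
// which the garbage collector later has to clean up.
async function identity (value) {
  return value
}

const result = identity(42) // a brand new Promise on every invocation
```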
(Heap snapshot with 1ms response delay)
Notice the sheer number of promises now present in the profile.
Luckily for us, most of these functions in hapi were only async in rare cases. For example, hapi supports user-defined encoders in addition to JSON. To support the different use cases, these functions awaited internally, even though the vast majority of users only use synchronous encoding schemes.
By carefully crafting synchronous fast paths in the request lifecycle wherever the async behaviour was optional, as in the use case above, we removed the GC pressure and saw a significant performance increase in the delayed benchmark, making it comparable to the undelayed one.
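An illustrative sketch of the idea (not hapi’s actual code): only enter promise-land when the user-supplied function actually returns a promise:

```javascript
// Always-async version: promises are allocated on every request, even
// for the majority of users whose encoder is synchronous.
async function encodeAlways (encoder, payload) {
  return await encoder(payload)
}

// Fast-path version: detect a thenable and only then fall back to
// asynchronous handling; the common synchronous case allocates nothing.
function encodeFastPath (encoder, payload, done) {
  const result = encoder(payload)
  if (result && typeof result.then === 'function') {
    result.then(done) // rare async encoder
  } else {
    done(result) // common sync encoder: no promises involved
  }
}
```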
We can see the performance fixes reflected in the benchmarks as well. This autocannon profile is from before the async fixes:
All in all, we were able to squeeze out a 30% performance boost in hapi using relatively simple changes.
Try looking for the simple performance patterns first, and ask yourself if you really need to make your code faster; it almost always comes at a readability cost. Give Clinic.js a try, as it’s usually good at pointing out what to look for. For more info on how others are using Clinic.js to improve performance, check out our article Clinic.js rises to the top in diagnosing Node.js performance issues.