Electron is a native desktop framework that combines the Node.js runtime with the Chromium browser. This is a powerful combination of technologies that allows building cross-platform applications at a fraction of the cost of other methods.
Despite the many benefits Electron is lauded for, performance often isn't one of them. In this post, we'll dive deep into optimizations that can (and should) be made to achieve smooth rendering and a low resource footprint for Electron on all platforms.
We'll discuss how to optimize the performance of booting and rendering an Electron app, and look at various tools that help debug performance problems.
Before applications can respond to user input, they generally go through an unresponsive period where they are booting up. To provide a snappy user experience, this phase should be reduced to a bare minimum.
Some optimizations are already applied at compile time; for example, resolving `require()` calls ahead of time, so no `fs.readFileSync()` calls occur while booting. This means that startup performance during development may not always reflect performance in production.
Booting itself is defined by a few phases:
To achieve high performance while booting, the goal should be to reduce the amount of work that needs to be done. To quote Dominic Tarr: "Software performance is losing weight, not building muscles."
Probably the most efficient technique to improve boot performance is to wait for the 'DOMContentLoaded' event before executing any non-UI critical code. This event fires after the initial set of code is done executing, and before any timers start resolving.
`'DOMContentLoaded'` has one drawback: if a listener is attached after the event has already fired, it will never be called. To circumvent this, we recommend the document-ready package, which checks the page's ready state before attaching the listener and so avoids the problem altogether.
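The pattern document-ready implements can be sketched in a few lines. The `doc` parameter is our addition, there only so the helper can be exercised without a real DOM:

```javascript
// A minimal sketch of what document-ready does: run the callback right
// away if the page has already finished parsing, otherwise wait for the
// 'DOMContentLoaded' event.
function ready (callback, doc) {
  doc = doc || document
  if (doc.readyState !== 'loading') {
    callback() // the event already fired; run immediately
  } else {
    doc.addEventListener('DOMContentLoaded', callback)
  }
}
```

Any non-UI-critical initialization can then be moved inside a `ready()` callback, keeping the boot path lean.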
Building Electron applications often involves the same performance considerations as building websites. More specifically: what works well for websites often also works well for Electron. One optimization that is particularly interesting for improving boot times is prerendering.
After the application has finished booting, it is ready to respond to outside input. This phase is commonly referred to as the main loop. Optimizations during this phase are usually geared towards reducing resource usage and scheduling actions efficiently.
To achieve high runtime performance, it's common to compile browser applications and apply transforms using tools such as browserify. For example, when using template strings to create DOM nodes, you'll want to compile them to static `document.createElement()` calls instead of reparsing the HTML on every call. This is just one example; there are plenty of optimizations like it that can drastically improve front-end performance.
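To make that concrete, here is roughly what such a transform produces. The template-string line in the comment is a hypothetical view, and the tiny `document` stub is ours, only there so the sketch runs outside a browser:

```javascript
// A hypothetical view written with an HTML template string:
//
//   const el = html`<button class="primary">${label}</button>`
//
// and roughly what a compile step could turn it into: static DOM calls,
// so no HTML string is parsed at runtime. The `document` stub below is
// only there so this sketch runs outside a browser.
const document = globalThis.document || {
  createElement: function (tag) {
    return {
      tagName: tag.toUpperCase(),
      className: '',
      children: [],
      appendChild: function (child) { this.children.push(child) }
    }
  },
  createTextNode: function (text) { return { text: text } }
}

const label = 'Save'
const el = document.createElement('button')
el.className = 'primary'
el.appendChild(document.createTextNode(label))
```

The compiled form does strictly less work per call: no tokenizing, no parsing, just direct DOM construction.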
But because these compilation tools are usually built to make Node code work in the Browser, they don't necessarily work well for environments such as Electron that can run Node code directly. Particularly when dealing with Native Addons (e.g. C++) things can get hairy, and debugging them is never fun.
The solution to this problem is fairly straightforward. By separating Browser code from Node code, the browser code can be targeted directly. The most efficient way of doing this is to use `require()` for Browser code, and `window.require()` for Node code:
Note: before arriving at `window.require()` we experimented with various approaches. Among others, there were attempts to add C++ support to browserify, to create separate Node / Browser processes in Electron with shared memory, and variations on the two. In hindsight, it was fun to have tried these approaches, but `window.require()` is about as good as it gets.
The Browser's event loop is different from Node's event loop because it is primarily concerned with providing a smooth visual experience for humans. In practice, this means it must be able to render 60 frames per second, so every frame (or "tick" in Node-speak) has a budget of roughly 16 milliseconds.
Every frame in the browser is roughly resolved as follows:
Note: calls to `requestIdleCallback()` may resolve at a much slower rate (e.g. every 10 seconds) if the window is in the background.

Of the APIs we mentioned above, `window.requestIdleCallback()` is probably the most exciting. It allows prioritization of tasks within a single process! We can use it to deprioritize everything that isn't essential to rendering the UI, and to break up CPU-intensive tasks into chunks that resolve over multiple frames.
But like most things, it doesn't come without drawbacks. Because all callbacks are resolved during the idle period, each call must carefully check the time remaining when it starts. If there's not enough time remaining in the tick, the callback should be re-queued onto the next tick. Luckily the on-idle module takes care of all that for you:
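The underlying pattern can be sketched without assuming anything about on-idle's internals: drain as much work as the idle deadline allows, then re-queue the rest. The `scheduleIdle` parameter is our stand-in for `window.requestIdleCallback`, so the logic can run anywhere:

```javascript
// Drain `queue` during idle time: process items while the deadline still
// has time remaining, then re-queue the remainder onto the next idle
// period. `scheduleIdle` stands in for window.requestIdleCallback so
// this sketch can run outside a browser.
function drainQueue (queue, work, scheduleIdle) {
  scheduleIdle(function process (deadline) {
    while (queue.length > 0 && deadline.timeRemaining() > 0) {
      work(queue.shift())
    }
    if (queue.length > 0) {
      scheduleIdle(process) // out of budget: continue on the next tick
    }
  })
}
```

In a browser you would pass `window.requestIdleCallback` as `scheduleIdle`; the UI stays responsive because no single callback ever overruns its frame budget.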
So far we've talked about what you can do to improve the performance of applications. But upfront knowledge is only half the work: catching warnings, and acting on them, is equally important.

Note: at the time of writing, some of the features we'll be discussing here rely on the beta release of Electron 1.7, which uses Chrome 58. The Electron beta can be installed from npm as `electron@beta`.
Probably the easiest way of catching performance regressions is to set the log level to "verbose" through the new log level dropdown in the console. Once enabled, performance regressions will emit actionable warnings.
The final tool in the toolbox we'll be covering today is the DOM Performance API. This API exposes all sorts of information about the Browser's performance, but more importantly, it allows measuring the time elapsed between two points and displaying the result in the DevTools timeline.
Now before we continue, it's worth mentioning Node's new async_hooks API, available in its current form without flags since Node 8.2. Because we're discussing Electron, we get to choose which API to use for creating custom performance entries. Although Node's API is potent, the Browser's API is friendlier to use, and has the benefit of integrating directly with the DevTools.
To create a new Performance Measure on the Performance Timeline, `performance.measure(name, firstMark, secondMark)` should be called to measure the time spent between two calls to `performance.mark(name)`. Once a measure has been created, it can be seen on the Performance Timeline under the "User Timing" dropdown.
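For example, assuming a hypothetical `renderApp()` function as the expensive work we want to measure:

```javascript
// Hypothetical expensive call we want to measure; stubbed with busy work.
function renderApp () {
  let sum = 0
  for (let i = 0; i < 1e6; i++) sum += i
  return sum
}

performance.mark('render-start')
renderApp()
performance.mark('render-end')

// Creates a "render" entry, visible under the "User Timing" dropdown
// in the DevTools timeline.
performance.measure('render', 'render-start', 'render-end')

const measure = performance.getEntriesByName('render')[0]
console.log('render took', measure.duration, 'ms')
```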
A simpler way of creating these marks is by using the nanotiming module:
Sometimes it can be useful to act on PerformanceEntries. For example, you might want to send them back to a server for later inspection, or log them during debugging to catch performance problems early.
While there are several APIs that allow retrieving Performance Entries, the PerformanceObserver API is by far the most powerful. However, one of the downsides of using it is that it only starts emitting events after the observer has been created. To react to these events, we recommend using the on-performance module. Not only does it retrieve all Performance Entries once it's attached, it also clears them from the Browser's internal timing buffer so new events can keep flowing in without overflowing the buffer.
In this post, we've touched on the different aspects of Electron's performance, how to design code in such a way that it can be optimized, and discussed various APIs that can help with improving performance.
Let us know what you think in the comments below, or drop Nearform or Yosh a line on Twitter. Cheers!
Need Node experts for your next project? Contact us to see how we can help!