Concurrent Programming with Node.js Using Multithreading
Node.js is known to be a single-threaded runtime environment, meaning that a program’s code is executed line after line and there can’t be two lines of a program running at the same time.
This may seem like a disadvantage when compared to multithreaded runtimes such as .NET's CLR or the JVM, which can leverage the multi-core nature of modern CPUs.
In practice, this comes with the great advantage of largely simplifying the programming model. As an example, strong assumptions can be made that no concurrent access can ever happen to a shared data structure.
The Node.js documentation has great resources explaining how this works.
Node.js Multithreading support
Although Node.js runs user code in a single thread, it also exposes multithreading APIs, the core of which lies in the worker_threads module.
The worker_threads programming model is fairly simple and consists of instantiating a
Worker by providing the path to the file to be executed.
A somewhat convoluted example consists of a single program that behaves differently depending on whether it’s executed from the main or a worker thread:
The main thread, identified by the boolean export isMainThread, logs to the console and then starts a new worker using the same file. The worker then executes the code in the else branch and prints its own threadId.
The result of running this program will be similar to:
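With both branches logging their threadId, a run might print something like this (the worker's threadId varies):

```
Hello from the main thread (threadId 0)
Hello from a worker thread (threadId 1)
```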
You can follow along by cloning this repository, where you will find all the code shown in this blog post.
A program that runs on multiple threads, where each thread does something different and is isolated from the others, is not a very useful program. More often, they will need to collaborate and communicate to carry out a common task.
For example, in order to avoid blocking the event loop, you may want to offload a CPU-intensive operation to a worker by sending it some data, letting it run in the background, then getting the result back for further processing in the main thread.
This requires some level of synchronization between the main and worker threads because they are running concurrently.
A typical way to communicate across multiple threads is by accessing shared data.
As an example, let’s compute the Nth prime number, a fairly expensive operation for high values of N, and offload this work to a number of background workers. Then, we want to determine which one managed to begin the computation first.
For reference, this is the code we use to calculate the Nth prime number:
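A sketch of such a function (the name and 1-based indexing are assumptions), using simple trial division:

```javascript
// Returns the n-th prime (1-based): nthPrime(1) === 2.
// Deliberately unoptimized: the cost grows quickly with n.
function nthPrime(n) {
  let count = 0;
  let candidate = 1;
  while (count < n) {
    candidate++;
    let isPrime = true;
    for (let i = 2; i * i <= candidate; i++) {
      if (candidate % i === 0) {
        isPrime = false;
        break;
      }
    }
    if (isPrime) count++;
  }
  return candidate;
}

console.log(nthPrime(10)); // → 29
```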
A naive approach
The following code is the entry point of our program, running on the main thread.
It spawns a certain number of workers (the specific number is unimportant), supplying them with the index of the prime we want to compute and a result variable to store the result. It then waits for all workers to finish and prints the value of the prime number and the threadId of the worker that computed it.
The reason why we decided to use an array to store the result will be clearer later on.
The following is the worker's code: it introduces an arbitrary short delay, checks that no other worker has computed the result yet and, in that case, performs the computation and stores the result.
Running this code leads to a result similar to this:
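Since each worker mutated only its own copy, the main thread prints its untouched initial values (assuming they were initialized to -1):

```
[ -1, -1 ]
```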
Each worker is unaware of the other workers computing the prime number, so it stores its own threadId and prime number in its copy of the result. The change is not propagated back to the main thread, which keeps seeing the initial value instead of the value we expect.
The reason for this is that data sent from one thread to another is copied using the structured clone algorithm, thereby causing each thread to see its own copy and preventing changes from being propagated across threads.
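You can observe the same behavior with the global structuredClone function (available since Node 17), which applies the same algorithm used when passing workerData:

```javascript
const original = { result: [-1, -1] };
const copy = structuredClone(original);

copy.result[0] = 42;             // mutate the clone...
console.log(original.result[0]); // → -1: the original is unaffected
```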
The problem with the previous example can be addressed by using a data structure that, instead of being copied, is shared across threads: the
SharedArrayBuffer. Its usage can be unintuitive at first, but it basically represents an array of raw, arbitrary binary data. The data can then be manipulated through typed-array views, such as
Int32Array, which behave like plain arrays.
The relevant part of the main thread code of the previous example becomes:
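A sketch of the change (slot layout and variable names are assumptions):

```javascript
// Two 32-bit slots: [winning threadId, computed prime], initialized to -1.
const sharedBuffer = new SharedArrayBuffer(2 * Int32Array.BYTES_PER_ELEMENT);
const result = new Int32Array(sharedBuffer).fill(-1);

// `result` is then passed to each worker via workerData exactly as before;
// this time the workers receive views over the same memory, not copies.
```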
The changes are minimal: instead of using a plain array, we create a
SharedArrayBuffer big enough to contain two 32-bit integers, then initialize it to -1 through an
Int32Array view on it. The worker code remains unchanged.
If we execute this code, most of the time we’ll get something that looks like a correct result:
Unfortunately, at some point we’ll come across a bug, which manifests itself in a way similar to this:
This is called a race condition: two workers believed they were the first to run the computation, when only one can be.
This is the nature of concurrent code: access to shared data must be thread-safe and atomic to guarantee correctness.
In this case, the problem lies in these two lines of code:
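Those two lines are the worker's check-then-set, shown here standalone with stand-in values so the sketch stays runnable:

```javascript
// `result` and `threadId` stand in for the worker's shared view and its id.
const result = new Int32Array(new SharedArrayBuffer(8)).fill(-1);
const threadId = 1;

if (result[0] === -1) {  // another thread may pass this same check...
  result[0] = threadId;  // ...before this write happens, so both "win"
}
console.log(result[0]); // → 1
```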
Two threads were able to enter the if statement and both changed the result. The last one to do so is the “winner”, but we ran the expensive computation twice unnecessarily.
Achieving thread safety
In order to fix the race condition, we rely on the atomic operations provided by the Atomics global object.
The operation we’re doing consists of:
- Reading a value
- Checking if the value is equal to -1
- If so, setting it to a different value
We will use the
Atomics.compareExchange method, which does exactly that in a thread-safe way.
Our main thread code is unchanged, and the worker code changes as such:
Atomics.compareExchange does this:
- Read the value at index 0 of the Int32Array
- If that value is equal to -1, set it to the worker's threadId
- Return the value originally read, whether or not it was replaced
This is all happening in an atomic fashion, thereby preventing the race condition of the previous example from happening.
Using the right primitives for thread-safe programming opens up many opportunities for multithreaded Node.js programs that distribute workloads across multiple workers.
The code shown in this blog post is available in this repository.
In the next blog post of this Node.js multithreading series, we will see how to apply these concepts to a more interesting scenario.