
Tracking Memory Allocation in Node.js

The Importance of Measuring Memory Allocation in Node.js Applications

As Node.js developers gain experience, it becomes increasingly important to understand how the runtime works internally, both to avoid problems in production and to optimise applications so they use only the resources they need. Gaining this understanding can result in substantial cost savings. Some of the most commonly asked questions include:

  • How much memory does this function allocate?
  • Which function allocates the most memory in the heap?

Memory is the root cause of the majority of bottlenecks in production applications. Collecting and observing metrics surrounding memory usage in production applications is a key challenge.

This article explains how to measure memory allocation in Node.js applications and why it’s so important.

How does Node.js allocate memory?

Before any memory analysis, it’s important to understand how Node.js manages memory allocation.

When an application starts, it triggers the following workflow:

  1. V8 allocates a heap
  2. The application fills the heap
  3. V8 garbage collection cleans up the heap
  4. V8 increases the heap size if it’s still full

How is the Memory Heap divided?

The Memory Heap is divided into two major spaces:

  1. Old space: where older objects are stored. Usually, objects are moved here after surviving in the new space for some time. The maximum old space size can be controlled with the flag --max-old-space-size
  2. New space: most objects are allocated here. It’s small and designed to be cleaned frequently.

Note: The heap is divided into several spaces, but in this article, we'll focus on just two of them.

The new space is divided into:

  • From space: objects that survived a previous Garbage Collection cycle
  • To space: freshly allocated objects

While the allocation in the new space is very cheap, the new space is also fairly small in size (between 1 and 8MB). For this reason, it’s a good idea to clear the objects as soon as possible to free up memory for new objects and avoid them being allocated in the old space. Let’s explain this in a bit more detail. Take a look at the example below:

The black circles are freshly allocated objects. However, as mentioned above, the new space is small, so what happens when the space is full?

The GC (garbage collection) is triggered and performs a quick scan of the to space to check whether there are dead objects (objects that can be freed).

Let’s assume that a portion of the above graph loses its reference, meaning it can be freed:

The GC completed its cycle over the new space (to space) and found two blocks to be free (blank circles). However, it also found a group that's still reachable (it has survived the GC cycle) and should be moved to the from space.

After the GC cycle, the to space has more memory available for allocation, and the objects that survived the first cycle have been moved to the from space.

Now, the to space becomes full again and GC needs to be triggered. Let's assume that the object now living in the from space loses part of its references, meaning that part needs to be collected.

In the to space, there are two objects that have survived their first GC cycle. The other ones can be cleaned/freed.

So, what happens to the other part (blank circle in from space) that has survived the second GC cycle?

It’s copied to the old space! When an object is moved from the new space to the old space, it’s fully copied, which is an expensive operation.

Even though it's an expensive operation, the GC is fast enough to do it unnoticeably. However, it's important to mention that when an object in the old space is accessed, it loses CPU cache locality, which can affect performance because the application isn't taking full advantage of the CPU caches.

Node.js provides an API to control the GC from the JavaScript side. It also provides a way to trace what’s happening in GC. These flags are --expose-gc and --trace-gc respectively.
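
For example, with --expose-gc set, a gc() function appears on the global object; a minimal sketch:

```javascript
// Run with: node --expose-gc example.js
// With the flag, V8 exposes a global gc() function that forces a full
// garbage collection from JavaScript - useful for experiments,
// not something to call in production code.
if (typeof global.gc === 'function') {
  const before = process.memoryUsage().heapUsed;
  global.gc();
  const after = process.memoryUsage().heapUsed;
  console.log(`heapUsed before: ${before}, after: ${after}`);
} else {
  console.log('Start Node.js with --expose-gc to enable global.gc()');
}
```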

Collecting memory from the Old Space

As mentioned above, the V8 Garbage Collector is complex; this article aims to show the major features from a broader perspective. For more details, I strongly suggest reading the V8 documentation.

In the last section, we discussed how V8 memory is divided and how it handles the new space allocation. In this section, we’re going to discuss old space memory management.

The GC runs some threads behind the scenes, and one of their jobs is to mark blocks of memory to be freed. This means that, in any Node.js application, there's a thread scanning the old space looking for memory addresses that are no longer reachable, which means they can be freed. This approach is called mark-and-sweep.

During this phase, the thread only marks those blocks; the actual freeing happens later.

Then, after the Mark Phase, the GC runs the Sweep Phase:

In this phase, the marked blocks are finally freed.

The final step is the Compact Phase:

This phase is expensive because V8 needs to move objects around. It could also be called defragmentation. For this reason, collecting from old space is slow.

V8 prefers allocating more heap rather than collecting from the old space. So, the fact that memory usage never decreases doesn't necessarily mean there's a memory leak.

Memory allocation can be harmful

In Node.js (or specifically V8), it’s more efficient to frequently allocate small short-lived objects, rather than modifying large long-lived objects. This is because of the GC, as explained in the last section.

Nowadays, V8 garbage collection is really efficient. Nevertheless, when an application allocates and frees big blocks of memory, it may end up blocking the event loop.

JS Engines put a lot of effort into making GC efficient. In prior Node.js versions, the GC was prone to generate bottlenecks in the application due to misuse by the user.

There are several ways to monitor GC activity, and an increase in the ELD (event loop delay) is just one of the available signals.

GC Traces

There’s a lot to learn about how GC works. The trace for garbage collection is available through the --trace-gc flag.

Plain Text
node --trace-gc app.js

Output example:

Plain Text
[19278:0x5408db0]  44 ms: Scavenge 2.3 (3.0) -> 1.9 (4.0) MB, 1.2 / 0.0 ms  (average mu = 1.000, current mu = 1.000) allocation failure

[23521:0x10268b000]  120 ms: Mark-sweep 100.7 (122.7) -> 100.6 (122.7) MB, 0.15 / 0.0 ms  (average mu = 0.132, current mu = 0.137) deserialize GC in old space requested

Node.js exposes performance hooks (since v8.5.0) to trace the GC.

const { PerformanceObserver } = require('perf_hooks');

// Create a performance observer
const obs = new PerformanceObserver((list) => {
  const entry = list.getEntries()[0]
  // The entry is an instance of PerformanceEntry containing
  // metrics of the garbage collection cycle. For example:
  // PerformanceEntry {
  //   name: 'gc',
  //   entryType: 'gc',
  //   startTime: 2820.567669,
  //   duration: 1.315709,
  //   kind: 1
  // }
  console.log(entry)
})

// Subscribe to notifications of GCs
obs.observe({ entryTypes: ['gc'] });

// Stop the subscription when it's no longer needed
// obs.disconnect()

However, in most cases, it’s more effective to monitor the Event Loop metric. For these cases, Clinic Doctor is a powerful tool.

A quick introduction to Clinic Doctor

Doctor helps diagnose performance issues in your application and guides you towards more specialised tools to look deeper into your specific issues. Symptoms such as low CPU usage, blocking garbage collection, frequent event loop delay, or a chaotic number of active handles may indicate a number of potential problems.

Installing Clinic.js

Clinic.js is available through npm:

Plain Text
npm install -g clinic
clinic doctor -- node index.js

You can combine it with autocannon to run load tests. Further information can be found on the Clinic.js website.

Observing Memory Allocation

It’s of paramount importance to observe how much memory the application is consuming. For instance, basic applications can use the package climem to monitor memory usage, but the usage of system monitors like htop is perfectly fine.

eBPF probes can also be used if, for some reason, raw observation is needed.

In the next sections, we’ll examine some ways to track the memory allocation/usage in the application.

Using process.memoryUsage() Node.js API

Node.js provides an API to analyse memory usage.

Plain Text
console.log(process.memoryUsage())

// Prints:
// {
//   rss: 4935680,
//   heapTotal: 1826816,
//   heapUsed: 650472,
//   external: 49879,
//   arrayBuffers: 9386
// }
  • rss: Resident Set Size - the amount of space the process occupies in main memory, including all C++ and JavaScript objects and code
  • heapTotal: total size of the allocated heap
  • heapUsed: memory actually used during the execution of the process
  • external: memory used by C++ objects bound to JavaScript objects managed by V8
  • arrayBuffers: memory allocated for ArrayBuffers and SharedArrayBuffers, including all Node.js Buffer instances

However, once a memory issue is identified, these tools wouldn’t help find the root cause. In these cases, specialised tools are needed.

Node.js Memory Snapshot

Memory Snapshot is a powerful tool for monitoring memory allocation with a low-level visualisation.


To create a snapshot, all the work in the main thread stops. Depending on the heap contents, it could even take more than a minute.

Creating a heap snapshot requires memory about twice the size of the heap at the time the snapshot is created. This results in the risk of terminating the process by OOM (out-of-memory).

Get the Heap Snapshot

There are several ways to take a snapshot of a process:

  1. Via inspector protocol
  2. Via the command line flag --heapsnapshot-signal=signal
  3. Via writeHeapSnapshot API
  4. Chrome Dev Tools (Inspector protocol behind the scenes)

In this section, we are going to use the Chrome Dev Tools approach.

  • Run node with the --inspect flag
  • Open the inspector via Chrome Dev Tools (chrome://inspect)

Analysing the Snapshot

Viewing the snapshot as a summary will show pretty interesting information:

  • Constructor
  • Distance
  • Shallow Size
  • Retained Size

You can find a more granular explanation in the Chrome Dev Tools documentation.

Two of the most confusing metrics for new users are Shallow Size and Retained Size. Shallow Size is the size of memory held by the object itself (usually, only arrays and strings have a significant shallow size). Retained Size is the size of memory that's freed once the object itself is deleted, along with the objects that are reachable only through it; in other words, the object's own size plus the sizes of its exclusive dependents.

Through a basic analysis, it can be hard to figure out where the problem is. This challenge is magnified in large codebases. In situations where you need to understand memory allocation by functions, two powerful options are the Chrome Dev Tools - Allocation Sampling (in the memory tab) and Clinic.js HeapProfiler tool.

Introducing Clinic Heap Profiler

The Heap Profiler is part of the Clinic.js suite of tools. Its objective is to uncover memory allocation by functions with Flamegraphs.

Verify that heapprofiler is functioning properly:

Plain Text
clinic heapprofiler --help

Once we’ve installed clinic and verified that the clinic heapprofiler is functioning we can start with a simple example.

For all the following examples, we are going to profile the tracking-memory-allocation source code.

Run the 01-initial application with clinic heapprofiler

Plain Text
clinic heapprofiler --autocannon [ / ] -- node index.js

This command starts the application index.js and runs a load test using autocannon against the root route (/). By default, autocannon runs 10 connections for 10 seconds.

When the load test is done, the process is killed automatically and a flamegraph like the one below is generated:

The flamegraph is an aggregated visualisation of memory allocated over time. Each block represents the amount of memory allocated by a function. The wider the block, the more memory was allocated.

Looking at the flamegraph generated, we can see that name is the function that allocates the most memory during the execution of the process. Pretty interesting! The code of the name function doesn’t look good.

Fixing memory allocation in name

This is the name function that showed as a wider block in our last FlameGraph:

function name () {
  let result = namesGenerator()
  if (names[result]) {
    result += names[result]++
  }
  names[result] = 1
  return result
}
The objective of the function is to always return a unique name. Let’s assume that namesGenerator will always return 'rafael'. Calling name three times will return:

> "rafael"
> "rafael1"
> "rafael2"
> { rafael: 3, rafael1: 1, rafael2: 1}

There’s the issue! For every call of name, a new property is added to the names object. Changing the function to hold only a count reference should fix it gracefully:

function name () {
  let result = namesGenerator()

  names[result] = names[result]
    ? names[result] + 1
    : 1

  return result + names[result]
}
The new flamegraph should look different after that change:

It looks more reasonable for our small application.

  • No wider blocks
  • Most of the memory allocation comes from dependencies and Node.js internals

You can also use Clinic Doctor to monitor the memory consumption during the process execution. It will consume way less memory than in the previous version.

Understanding memory allocation is essential

Memory is often a source of confusion for engineers. However, once they understand how V8 manages its memory, the information provided by Node.js tools becomes far more useful.

It’s strongly recommended to understand how a Node.js application manages its memory. The information shown in “How does Node.js allocate memory” is a must-read for every Node.js developer. That section gives the knowledge needed to scale up applications with high memory consumption.

There are several tools in the Node.js ecosystem that give visibility into memory management. For those who want to see how their application behaves under high load, climem is a great tool. However, once high or suspicious memory consumption is identified, it’s essential to reach for more robust tools.

The Clinic.js package provides a wonderful suite of tools that allows anyone to understand how their application behaves. Please make sure to try it and give it a star in its repository.
