The cost of logging
By Matteo Collina

At nearForm, we like to scratch our own itch: we build tools based on the needs of our customers and our own needs as consultants. When David Mark Clements and I were in London conducting a performance training and consulting engagement with Net-A-Porter, we noticed that the routes that logged the most were the slowest ones. Moreover, disabling logging entirely yielded far greater throughput.

We started a quest to assess the cost of logging.

JSON is easy to process, and at nearForm we favour it as a logging format, so fast JSON logging had to be the goal.

As in any performance consultancy, we established a baseline using the simplest possible express application (we used Node.js v4 throughout this post):

'use strict'

var app = require('express')()
var http = require('http')
var server = http.createServer(app)

app.get('/', function (req, res) {
  res.send('hello world')
})

server.listen(3000) 

In order to measure this, we will use autocannon, an HTTP/1.1 benchmarking tool inspired by wrk and built in Node.js:

$ autocannon -c 100 -d 5 http://localhost:3000
Running 5s test @ http://localhost:3000
100 connections with 1 pipelining factor

Stat         Avg     Stdev    Max
Latency (ms) 13.57   4.65     117
Req/Sec      7087.34 834.74   8039
Bytes/Sec    1.49 MB 175.1 kB 1.7 MB

35k requests in 5s, 8.93 MB read 

Let’s see what happens when we add logging (in all the following examples, we will be redirecting the output from stdout to /dev/null):

'use strict'

var app = require('express')()
var http = require('http')
var server = http.createServer(app)

app.use(require('express-bunyan-logger')())

app.get('/', function (req, res) {
  res.send('hello world')
})

server.listen(3000) 

We would expect this to have similar throughput to the first example. However, we got some surprising results:

$ autocannon -c 100 -d 5 http://localhost:3000
Running 5s test @ http://localhost:3000
100 connections with 1 pipelining factor

Stat         Avg       Stdev    Max
Latency (ms) 58.27     16.02    243
Req/Sec      1682.17   203.94   1878
Bytes/Sec    354.99 kB 42.74 kB 409.6 kB

10k requests in 5s, 2.12 MB read 

Running a server using bunyan for logging will reduce throughput by almost 80%. Winston yields slightly better results:

'use strict'

var app = require('express')()
var http = require('http')
var winston = require('winston')
var winstonExpress = require('express-winston')
var server = http.createServer(app)

app.use(winstonExpress.logger({
  transports: [
    new winston.transports.Console({
      json: true
    })
  ]
}))

app.get('/', function (req, res) {
  res.send('hello world')
})

server.listen(3000)

$ autocannon -c 100 -d 5 http://localhost:3000
Running 5s test @ http://localhost:3000
100 connections with 1 pipelining factor

Stat         Avg       Stdev    Max
Latency (ms) 28.81     8.5      149
Req/Sec      3400      425.02   3703
Bytes/Sec    711.34 kB 88.54 kB 786.43 kB

20k requests in 5s, 4.28 MB read 

Running a server using Winston (version 2) for logging will slow you down by at least 50%.

As part of our consultancy, Dave and I were asked for a fast alternative, and there wasn’t one. Therefore we set ourselves the task of writing the fastest possible Node.js logger we could manage. We wanted to build something that could easily replace Winston, Bunyan or even Bole.

Introducing Pino

Our logger is called Pino (“pine” in Italian), because there is a pine tree in front of my house. Bunyan was a giant lumberjack, and Pino is (99%) API-compatible with Bunyan and Bole.

Pino is the fastest logger for Node.js (full benchmarks found here and here). While you can use Pino directly, you will likely use it with a web framework such as express or hapi.

Let’s consider the following example using express-pino-logger:

'use strict'

var app = require('express')()
var http = require('http')
var server = http.createServer(app)

app.use(require('express-pino-logger')({
  extreme: true
}))

app.get('/', function (req, res) {
  res.send('hello world')
})

server.listen(3000) 

We can run the same benchmark as before:

$ autocannon -c 100 -d 5 http://localhost:3000
Running 5s test @ http://localhost:3000
100 connections with 1 pipelining factor

Stat         Avg     Stdev    Max
Latency (ms) 20.16   2.73     91
Req/Sec      4819.34 89.14    4919
Bytes/Sec    1.02 MB 25.03 kB 1.05 MB

24k requests in 5s, 6.07 MB read 

Using express-pino-logger increases throughput by almost 40% compared with express-winston.

Anatomy of Pino


Pino is a fast logger built on several principles:

1) It has the fewest possible features: there are no transports and no file rotation; it just logs to a stream (a file or stdout).

2) It only logs in JSON.

3) It needs to be as fast as possible.

4) It should support child loggers, to allow transactions to have their own metadata.

5) It should provide a drop-in compatible API with bunyan, bole and winston wherever possible.
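The child-logger principle boils down to binding fixed metadata to a logger so that every line it emits carries that metadata. Here is a toy illustration of the idea only; this is not pino’s implementation:

```javascript
'use strict'

// Toy sketch of the child-logger principle: a child closes over
// bound metadata and merges it into every line it emits.
function makeLogger (bindings) {
  return {
    info: function (msg) {
      return JSON.stringify(Object.assign({ msg: msg }, bindings))
    },
    child: function (extra) {
      // a child inherits the parent's bindings plus its own
      return makeLogger(Object.assign({}, bindings, extra))
    }
  }
}

var log = makeLogger({})
var child = log.child({ a: 'property' })
console.log(child.info('hello child!'))
// {"msg":"hello child!","a":"property"}
```

The real API, shown below, works the same way from the caller’s point of view.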

Instantiating Pino is easy:

'use strict'

var pino = require('pino')()

pino.info('hello world')
pino.error('this is at error level')
pino.info('the answer is %d', 42)
pino.info({ obj: 42 }, 'hello world')
pino.info({ obj: 42, b: 2 }, 'hello world')
pino.info({ obj: { aa: 'bbb' } }, 'another')
setImmediate(function () {
  pino.info('after setImmediate')
})
pino.error(new Error('an error'))

var child = pino.child({ a: 'property' })
child.info('hello child!')

var childsChild = child.child({ another: 'property' })
childsChild.info('hello baby..') 
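Each call above emits a single line of newline-delimited JSON on stdout. The exact pid, hostname and time will differ on your machine, but a line looks roughly like this:

```json
{"pid":12345,"hostname":"my-host","level":30,"msg":"hello world","time":1459529098958,"v":1}
```

Note how the child logger’s bound properties (such as a: 'property') are merged into every line it produces.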

The Pino family

Pino has a set of companion tools, each of which has been meticulously implemented to have a minimal impact on performance:

express-pino-logger: express integration

koa-pino-logger: koa integration

hapi-pino: Hapi integration

restify-pino-logger: restify integration

pino-http: barebones logger for the Node.js http module, and the basis of the express, koa and restify integrations

pino-socket: forwards pino logs over a TCP or UDP socket

Conclusion

Pino is the fastest logger in town (see the benchmarks), and it uses a full set of performance optimizations to achieve this goal: avoiding JSON.stringify, rolling our own printf-style formatting module, and flattening strings. These techniques reduce the overhead of logging by 40%. In extreme mode, we also delay flushing to disk, reducing it even further, by 60%. We have adopted Pino in some projects, and it has increased the throughput (req/s) of those applications by 20-100%. Stay tuned for more blog posts on the techniques Pino uses to be so fast; each of them has its own story.
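To give a flavour of the first of those techniques, here is a hypothetical sketch (not pino’s actual code) of why hand-rolling serialization for a known set of keys can beat a generic JSON.stringify walk: when the shape of the log line is fixed, it can be built with plain string concatenation. The sketch assumes msg contains no characters that need JSON escaping.

```javascript
'use strict'

// Hypothetical sketch: the keys of the line are known up front,
// so we skip the generic object walk JSON.stringify performs and
// concatenate the line directly.
function fastLine (level, time, msg) {
  return '{"level":' + level + ',"time":' + time +
    ',"msg":"' + msg + '"}\n'
}

var line = fastLine(30, Date.now(), 'hello world')
console.log(JSON.parse(line).msg) // prints 'hello world'
```

The output is still valid newline-delimited JSON, so downstream consumers cannot tell the difference.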
