This blog post contains helpful tips and insights to make working with Node.js a smooth, safe and enjoyable experience, whether you’re just starting out with Node.js or have been using it for a while.

We’ll be covering four topics:

  1. Debugging
  2. The ecosystem
  3. Throwing
  4. Control flow


Debugging

Node.js has a built-in debugger. For those who have worked with lower-level languages, like C/C++, the built-in Node debugger may feel familiar. It resonates with the likes of GDB or LLDB:

$ node debug myApp.js

Node.js developers tend to come from various backgrounds. For those comfortable with debuggers like GDB, the built-in Node debugger may feel like home. However, others come to Node.js from front end JavaScript development or perhaps from languages with powerful IDE support. These developers may prefer a more visual debugging experience.

Enter Iron Node:

$ npm -g i iron-node
$ iron-node index.js

Iron Node is built on top of the very awesome Electron. Electron provides a shared Node and Chrome environment. This marriage allows us to use Chrome Devtools on a Node codebase. Iron Node simply connects the dots to package that functionality.

There is one problem with this approach. Electron builds typically lag behind the latest Node version, and Iron Node typically lags behind the latest Electron version. This means that if we’re using cutting edge Node or v8 features Iron Node will choke on them.

For instance, Node 4+ will run this perfectly:

require('http').createServer((req, res) => {
  res.end('hello world');
}).listen(8080);

Currently Iron Node will throw a syntax error. This particular problem shouldn’t last long since Iron Node is in the process of being updated. In these scenarios, Babel can be used as a workaround for syntax dissonance between versions:

$ npm i -g babel babel-preset-es2015
$ babel --presets $(npm get prefix)/lib/node_modules/babel-preset-es2015 index.js > index.debug.js
$ iron-node index.debug.js

It’s beyond the scope of this article, but as a side note, Iron Node can also perform heap and CPU profiling – however, bear in mind that such profiling includes the Iron Node environment alongside our app.

An alternative is Node Inspector. This debugging environment hooks into Node's Remote Debugging API, so the profiling and debugging occur only in the context of the Node process. It also means it doesn't suffer from the version disparity that Iron Node does. On the other hand, Node Inspector is noticeably slower and feels less integrated to work with, since it operates over TCP and WebSockets instead of in memory.

Debug logs

Whilst console.log is a familiar friend in times of need, littering default output with highly specific log statements eventually increases the difficulty of debugging other areas. As a result, console.log debugging oscillates between the need for detail and the need for silence.

The debug module allows us to instrument our code with logs that are hidden by default:

$ npm install debug --save

To create a debug logger we simply call the debug module and supply a namespace:

var debug = require('debug')('myMod');

module.exports = function myMod(some, args) {
  debug('myMod called');
};

The debug logger is enabled by setting the DEBUG environment variable:

$ node -e "require('./myMod.js')()" 
$ DEBUG=myMod node -e "require('./myMod.js')()"
myMod myMod called +0ms

Or on Windows:

> set DEBUG=myMod
> node -e "require('./myMod.js')()"
myMod myMod called +0ms
> set DEBUG=

We used the -e flag to execute a code snippet that requires our module and invokes the exported function. When the DEBUG environment variable matches the myMod namespace, the debug module returns a logger function instead of a no-op function.

If our code is more complex, it may be appropriate to provide fine-grained control of debug logs.

Say we have a library module called myLib that’s local to the myMod module:

var debug = require('debug');
var info = debug('myMod:myLib');
var verbose = debug('verbose:myMod:myLib');
var silly = debug('silly:myMod:myLib');

module.exports = function myLib(some, args) {
  info('myLib called');
  verbose('myLib arguments' + JSON.stringify(arguments));
  if (silly.enabled) {
    silly('myLib stack ' + Error().stack.replace(/Error\n\s+at Error \(native\)\n\s+/, ''));
  }
};

We’ve sub-namespaced myLib to myMod. This convention means we can target debug logs: we can either get all debug logs for a particular module using DEBUG=myMod:* (where the asterisk (*) is interpreted as a wildcard), or we can target a specific library by referencing it directly (DEBUG=myMod:myLib). Multiple debug targets can be tied together with a comma: DEBUG=myMod:myLib,myMod:myOtherLib.

Creating a stack trace is a potentially intensive operation. The debug module supplies the enabled property on each logger for these sorts of scenarios. Note how we check the enabled property of the silly log function before building the stack.

For verbose and silly log levels, we’ve prefixed the namespace. The loglevel:mod:lib pattern allows for a great deal of flexibility:

$ DEBUG=verbose:myMod:myLib node -e "require('./myLib')(1, 2)" #verbose my lib logs
$ DEBUG=verbose:myMod:* node -e "require('./myLib')(1, 2)" # verbose logs for all myMod libs
$ DEBUG=verbose:* node -e "require('./myLib')(1, 2)" # all verbose logs for an app
$ DEBUG=*myLib node -e "require('./myLib')(1, 2)" # all loglevels for myLib

Remember on Windows we use set. For instance, just taking the last line:

> set DEBUG=*myLib
> node -e "require('./myLib')(1, 2)" 
> set DEBUG=

In addition to the primary benefit of easier future maintenance for ourselves, our organisations, and the Node community at large, integrating debug logs can be an invaluable code quality and self-improvement exercise.

Much like writing tests and documentation, instrumenting code with debug logs helps clarify our own perception of the logic, often resulting in bugs being removed and an improved approach.

Core debug logs

Node core has similar debug log support. Setting the NODE_DEBUG environment variable turns on logs for the following core functionality:

  • fs
  • http
  • net
  • tls
  • module
  • timers

For instance, say we have a Node server and we want to see what’s happening at a low level regarding HTTP and TCP operations. We can find out by setting the NODE_DEBUG variable to http,net:

$ NODE_DEBUG=http,net node server.js
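Here server.js could be any HTTP server; a minimal sketch just to generate some activity might look like this:

// server.js – a minimal server, just to produce some http and net activity
require('http').createServer(function (req, res) {
  res.end('hello');
}).listen(8080);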

Beautifying stack traces

When the runtime crashes, or when we throw or use console.trace, a stack trace is output to the terminal. Typically, stack traces can be hard on the eyes. cute-stack styles stack traces, making them easier for humans to parse.

$ npm install -g cute-stack
$ cute -t table -e "ohoh"

The cute executable wraps the node binary – all the same arguments can be passed, plus additional ones for cute. For instance, the -t flag is used to specify the stack trace output type. We set the type to table, but there’s also pretty (the default), json, json-pretty and tree.

We can also require cute-stack directly at the top of a module's or application's entry point:
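A minimal sketch, assuming cute-stack exports a setup function that accepts the same output types as the -t flag (check the cute-stack readme for the exact API):

// index.js – assumption: requiring cute-stack and calling it patches stack trace output
var cute = require('cute-stack');
cute('table'); // assumed to accept 'pretty', 'table', 'tree', etc., like the -t flag

throw Error('ohoh'); // any uncaught error now prints a styled stack trace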


The ecosystem

The great thing about npm is that anyone can publish to npm. All you have to do is register an account and npm publish. This also helps to explain the massive growth of the ecosystem. However, the scary thing about npm is also that anyone can publish to npm. This laissez-faire approach to ecosystem management has been fundamental to rapid growth. The trade-off is the increased burden of discovery and evaluation on the module user. For rapid prototyping, there’s no doubt that 200,000+ modules at our fingertips is an amazing thing.

But somewhere before going live, someone has to check that these modules are production worthy.

In this section we’re going to explore some useful module selection heuristics and then we’ll look at some ways to vet modules.

Dependent modules, users

The number of modules relying on a module is a powerful metric. It carries a similar weight to word of mouth. The module pages of the npm website detail the dependents at the bottom of the page; however, the npm command line tool doesn’t provide a way to retrieve these dependents.

We can use the npm-dependents tool to retrieve dependents via the command line:

$ npm i -g npm-dependents
$ npm-dependents express
express has 6927 modules depending on it

The npm info command does provide a list of npm users who are depending on a module in one or more of their own published modules. This will be a different (lower) number, but it is a correlating indicator:

$ npm info express users | wc -l

Popularity contest

Just because something is popular doesn’t mean it’s right for every case. Sometimes a module is less popular because it’s extremely niche, but it may be the very thing that’s needed. Additionally, just because a module or framework is popular, we shouldn’t make assumptions about any sane behaviour it should have. For example, the wildly popular express framework does not set secure defaults; it favours rapid development over production security, leaving that as an exercise for the user. If you’re interested in more information on server hardening, see the helmet package, the Hapi framework, or get in touch with nearForm.
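As an illustrative sketch (not a complete hardening recipe), helmet can be dropped into an express app as middleware to set a baseline of security-related HTTP headers:

var express = require('express'); // npm install express helmet
var helmet = require('helmet');

var app = express();
app.use(helmet()); // sets a sensible baseline of security-related headers

app.get('/', function (req, res) {
  res.send('hello');
});

app.listen(3000);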

Shiny websites

Another smoke screen when evaluating modules can be a super-awesome-shiny website. If a module under evaluation is actually a framework maintained either by a large company or an active community, then a polished site is neither a red flag nor a green light. However, for small independent modules curated by one to three developers, a readme file on GitHub is sufficient. In fact, it’s reassuring. If a small team has produced an amazing website to accompany a recently released module, then extra scrutiny should be applied to code quality. Shiny websites don’t correlate to good code.

Who wrote it?

As we explore the ecosystem, we naturally begin to recognize prominent module authors. This is a healthy thing: learn who to trust and use their modules when you can.

Manual review

Going through the source code of a third-party module is often a great educational exercise. Reading all the source code of all the modules and their sub-dependencies can be a daunting challenge, but quickly scanning the source for red lights can be a good way to catch potential issues. One thing to look out for is the use of eval. Whether eval is called directly or indirectly (via the likes of new Function), evalling user input on the server side is really very dangerous. It also has performance implications.

However, understanding the context is vital. For instance, some template engines use eval (for example dust, jade, and Angular template logic). If we’re using a template engine we have to know that we’re trusting the engine to thoroughly clean user input, and understand the flow of data into and out of the eval.

Other things to look out for (also context-dependent) could be checking that the dependency makes use of streams instead of expecting to buffer all data first. In these cases, the question must be asked: what’s the largest possible amount of data that could be passed through this module? Buffering and then synchronously processing data is a recipe for disaster with Node.js.
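As a rough illustration of the difference (the log file path here is hypothetical):

var fs = require('fs');

// buffering: the entire file is held in memory before anything happens with it
fs.readFile('/var/log/huge.log', function (err, data) {
  if (err) return console.error(err);
  process.stdout.write(data);
});

// streaming: the file is processed a chunk at a time, keeping memory usage flat
fs.createReadStream('/var/log/huge.log').pipe(process.stdout);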


The auditjs module cross-references top-level dependencies with the OSS Index, which holds a list of packages (and Node versions) with known vulnerabilities.

Simply install globally and run it in a project’s root:

$ npm -g install auditjs
$ cd my-app
$ audit


Throwing

Like many languages, JavaScript has a throw keyword. However, in JavaScript throwing is a destructive action. It’s the sledgehammer of error propagation.

As a rule of thumb, any throw that occurs after the initialization (of a server, or a database connection, any environment that handles asynchronous operations) is probably the wrong thing to do.

So when we’re about to write throw, the first question should be: could this happen after an app or environment has initialized?

It’s better to delegate the authority of process crashing to the top level of the application. This is why we recommend not throwing inside modules unless they’re intended for use during initialization (and even then, consider allowing the application to decide anyway).

Throwing some time after initialization when the process is supposedly stable is less than ideal – particularly if the application is written as a monolith (all code running in one process, rather than separate micro-services).

If we follow the practice of only throwing when we explicitly expect to exit the process, then the amount of throws in our application should be minimal.

As a rule avoid throwing as much as possible after initialization.

In particular throwing errors that could have been handled without compromising stability, or without making the purpose of the process redundant, is a bad idea.

Never throw in a callback unless the callback is made at the top level of an application and the intent is to crash the process. Callbacks in modules should defer to the caller to decide how to handle an error.
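For example, here’s a sketch of a module that defers to its caller rather than throwing (fetchUser and db are hypothetical names used for illustration):

// inside a module: pass errors to the caller instead of throwing
module.exports = function fetchUser(id, db, cb) {
  db.get(id, function (err, user) {
    if (err) return cb(err); // the caller decides whether this is fatal
    cb(null, user);
  });
};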


As in many other languages, a throw may not crash a process if it’s wrapped in try/catch. However, there are certain serious caveats we need to be aware of.

Firstly, use of try-catch causes runtime de-optimizations resulting in unnecessarily slow execution – particularly for hot code paths. This means if we adopt throwing and try/catch all over a codebase it could significantly affect performance.

Additionally, JavaScript is a dynamic asynchronous language but try-catch is a synchronous construct originally designed for compiled languages.

When it comes to JavaScript, the try-catch is something of a square peg in a round hole, which can lead to developer confusion.

For instance, consider the following:

try { 
  setImmediate(function () { throw Error('Oh noes!'); });
} catch (e) {
  console.log('Gotcha!', e);
}

The error won’t be caught at all. The setImmediate function is an asynchronous operation so the throw happens after the try-catch block has been executed.

The point here is: never throw in an asynchronous context.

Creating functions that expect the try-catch pattern can lead to poor code quality. Beyond documentation or reading source code, we have no way to know whether a function could throw an error. Typically functions that may throw can work fine most of the time, which makes it very easy to omit the try-catch – until that one time when the process crashes. Or else the opposite can occur, a super defensive style of programming where we try-catch every single call (believe me, it’s not sustainable).

If a try-catch absolutely must be used (e.g. JSON.parse), isolating the try-catch in a separate function and calling it from the original function confines de-optimizations to the purpose built function.

function parse(json) {
  var output = {};
  try { output.value = JSON.parse(json); } catch (e) { output.error = e; }
  return output;
}

function doThings(json) {
  //doing things...
  json = parse(json); //doThings won't be affected by the try-catch in parse
  if (json.error) { /*handle err*/ }
  json = json.value;
  //doing things...
}

Throwing alternatives

For asynchronous operations, errors can be propagated through callbacks, we’ll explore callbacks in the next section.

For error handling in synchronous functions we have a few options:

  • Return null on error
  • Return an Error object on error
  • Return an object that holds value and error properties
  • Use a Promise abstraction
  • Use a callback abstraction

Returning null on error is discouraged, because both error and value states occupy the same space – so the error may accidentally be treated like a value. If the null response isn’t handled and there is an attempt to access a property on the null value, the process will throw.

Beyond this, if type coercion is being applied to an unhandled null return value elsewhere in the application, we may get unexpected booleans, or worse, a NaN.
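For instance (tokenize here stands for any hypothetical function that returns null on error):

var tokens = tokenize(template); // hypothetical: returns null on error
tokens.length;                   // throws a TypeError if the null wasn't handled
if (tokens > 0) { /* ... */ }    // null silently coerces to 0 – an unexpected boolean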

Returning an Error object is slightly better than returning null. It still occupies the same space for errors and values, but it won’t crash the process when other code attempts to access properties on it. This scenario could arguably be worse, since subtle bugs that are difficult to trace could creep in elsewhere in the application.

Returning an error-value object was demonstrated in the parse function previously, but here’s an additional example with a fictional tokenizing function:

function tokenize(template) {
  if (notParseable(template)) {
    return {
      error: Error('couldn\'t parse it guvna')
    };
  }
  var tokenObject = parse(template);
  return {value: tokenObject};
}

In this case, if we forget to handle the error, we’ll at some point try to interact with the value, at which point the app will likely throw. The problem should be easier to identify because we’ll see an object with an error property instead of an object with a value property, and then be able to trace that object to the function that returned it.

There’s definitely an argument for using asynchronous abstractions, particularly with promises (as long as you don’t mind the overhead of using promises).

A promisified tokenize function might look like this:

function tokenize(template) {
  return new Promise(function (resolve, reject) {
    if (notParseable(template)) {
      return reject(Error('couldn\'t parse it guvna'));
    }
    resolve(parse(template));
  });
}

tokenize('invalid }}template')
  .catch(function (err) { console.error(err); });

Whilst promises are typically used for asynchronous operations, they conceptually don’t have to be. Additionally, they always behave in an asynchronous manner whether the internal mechanics are asynchronous or synchronous. This consistent behaviour means consumers always treat promises in the same way, allowing for trivial switching between the two operation types without disruption.
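A quick sketch of this consistent behaviour – even an already-resolved promise calls its then handler asynchronously:

Promise.resolve('value').then(function (v) {
  console.log('second', v);
});
console.log('first'); // always logged before 'second'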

We’ll be discussing callbacks shortly. Since they’re heavily associated with asynchronous operations, we wouldn’t recommend creating synchronous functions that use a callback paradigm (with the exception of iterator callbacks such as those used in functional programming, e.g. forEach, map, etc.).

Control flow

In JavaScript, the fundamental unit of asynchronous operations is the callback. A callback is a function that is supplied so that it can be invoked when an asynchronous operation is complete.

Callbacks are an implementation of continuation passing style (CPS) programming. A continuation is essentially an operational building block; it’s a flow-control primitive.

The prevailing and recommended type of callback is the errback, or error first callback:

var fs = require('fs');
fs.readFile('not-a-file', function (err, data) {
  if (err) {
    return console.error('Oops', err);
  }
  console.log(data + '');
});

This approach was popularized by Node, and continues to be used in many modules throughout the ecosystem.

Expecting an error as the first parameter helps to induce positive developer habits, one of the hardest yet most practical and cost-effective of all design goals.

Placing the error parameter between the developer and the result is a constant reminder to the developer to handle and propagate errors. If the error parameter was last it could easily be ignored.

The basic asynchronous unit (the callback) can be wrapped in higher level control flow patterns to increase code organization and associate semantic meaning with asynchronous logic.

Event emitters

Event emitters are part of the Node.js core. Unlike an errback or a promise, event emitters tend to be for communicating multiple values according to a namespace.

This means they don’t use errbacks; instead, errors are communicated by calling a function associated with an “error” namespace:

ee.on('error', function (err) { /* deal with it */ });

Event emitters tend to facilitate a pub-sub communications architecture. This can be the right approach in some scenarios but cumbersome in others.
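For instance, a minimal sketch of the pub-sub style an event emitter gives us (the 'tick' namespace is just an example):

var EventEmitter = require('events').EventEmitter;
var ee = new EventEmitter();

ee.on('tick', function (n) { console.log('tick', n); }); // subscribe to the 'tick' namespace
ee.on('error', function (err) { console.error('deal with it', err); });

ee.emit('tick', 1); // publish values under the 'tick' namespace
ee.emit('tick', 2);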


Streams

Streams apply a basic functional programming paradigm to asynchronous operations.

Specifically streams are about transferring (and transforming) large data sets in multiple pieces asynchronously.

var fs = require('fs');
var source = fs.createReadStream('/dev/random');
var transform = require('base64-stream').encode();
var destination = fs.createWriteStream('./ran');

source.pipe(transform).pipe(destination);

Here we created a read stream, piped it through a transform stream to base64-encode the binary data from /dev/random, and piped the resulting content into a write stream pointing to the file ./ran.

This approach stops our process from filling up memory: it only takes a piece at a time and pipes it through to its final destination.

Since streams are built on event emitters, error handling is much the same:

source
  .on('error', console.error)
  .pipe(transform)
  .on('error', console.error)
  .pipe(destination)
  .on('error', console.error);

Errors do not propagate through a pipeline. Notice how we listen for errors on all streams.

Another caveat is that if one stream in the pipeline closes or errors, any cleanup of the other streams in the pipeline is manual (this can lead to memory leaks).

The pump module solves both the error propagation and clean up scenarios:

$ npm i --save pump

var pump = require('pump');
var fs = require('fs');
var source = fs.createReadStream('/dev/random');
var transform = require('base64-stream').encode();
var destination = fs.createWriteStream('./ran');

pump(source, transform, destination, function (err) {
  if (err) return console.error('Error in pipeline', err);
  console.log('Pipeline finished');
});

This type of control flow is perfect for data processing, but can also be applied to object processing (using object streams).
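For instance, a rough sketch of an object-mode transform stream (the uppercasing logic is just for illustration):

var Transform = require('stream').Transform;

var upper = new Transform({objectMode: true});
upper._transform = function (obj, enc, cb) {
  obj.name = String(obj.name).toUpperCase(); // transform each object as it flows through
  cb(null, obj);
};

upper.on('data', function (obj) { console.log(obj); });
upper.write({name: 'alice'});
upper.end({name: 'bob'});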


Promises

Unlike event emitters and streams, promises are for single values.

querySomeDb({get: 'icecream'})
  .then(function (icecream) {
    console.log('got ma icecream', icecream);
  })
  .catch(function (err) {
    console.error('where is icecream?', err);
  });

Promises allow us to treat logic as an object: we can pass around a value we don’t have yet.

Like streams, they’re also highly composable:

Promise.resolve(5) // assumption: any promise resolving with 5 starts the chain
  .then(function (n) {
    //return an async op
    return new Promise(function (resolve, reject) {
      setTimeout(function () {
        resolve(n + 10);
      }, 100);
    });
  })
  .then(function (n) {
    return n * 10; //returning a value actually returns a promise
  })
  .then(function (n) {
    console.log(n); //150
  });

Since promises are part of the ECMAScript 2015 standard and are implemented in more recent versions of V8, we’ll be seeing a lot more of them.

Control flow frameworks

As mentioned earlier, the basic asynchronous unit (the callback) can be wrapped in higher-level control flow patterns to increase code organization and associate semantic meaning with asynchronous logic. One library that has been particularly successful in this area is async.

The async library has a variety of control flow patterns all of which boil down to performing operations in series or in parallel and optionally collating the results.
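For instance, a minimal sketch of collating results from parallel operations with async.parallel (the setTimeout calls stand in for real asynchronous work):

var async = require('async'); // npm install async

async.parallel([
  function (cb) { setTimeout(function () { cb(null, 'one'); }, 200); },
  function (cb) { setTimeout(function () { cb(null, 'two'); }, 100); }
], function (err, results) {
  if (err) return console.error(err);
  console.log(results); // ['one', 'two'] – collated in task order
});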

Performing a series of operations using a pure errback approach can lead to excessive nesting (often referred to as callback hell). This can be mitigated by simply breaking out functions and referencing them by name instead of nesting each of them, however for larger scale projects a control flow framework can make a lot of sense.

Oftentimes we need to perform operations in series when the result of one operation is needed for a following operation. For this case, there’s async.waterfall (in the sketch below, getPets and findPet are hypothetical functions standing in for whatever asynchronous operations are needed):

    var async = require('async');

    // getPets and findPet are hypothetical asynchronous functions used for illustration
    async.waterfall([
      function (cb) {
        getPerson({query: id}, cb);
      },
      function (person, cb) {
        getPets({
          species: person.preference.species,
          breeds: person.preference.breeds
        }, function (err, pets) {
          cb(err, {pets: pets, person: person});
        });
      },
      function (result, cb) {
        findPet({
          criteria: result.person.profile,
          pets: result.pets
        }, cb);
      }
    ], function (err, pet) {
      if (err) {
        return console.error(err);
      }
      if (!pet) {
        return console.log('Sorry :( no pet for you');
      }
      console.log('Congrats, you have a ', pet);
    });

The ability to itemize each distinct step in a set of interdependent operations makes the async library a powerful choice. If you’re thinking of checking out async, take a look at steed as well – whilst still a work in progress, it’s shaping up to be the lodash to async’s underscore, providing the same API with 50%-100% performance improvements over async. Matteo Collina, the author of Steed, will be talking about Steed and how he obtained massive performance gains at Node.js Interactive 2015.

Choosing a control flow approach

A combination of approaches can be used throughout a codebase to best fit individual scenarios.

Ultimately, understanding the errback and using it as a simple unit of asynchrony is an effective way to write JavaScript.

It’s a core language construct, and the convention is well known. Using errbacks makes it easy for other developers to interact with your APIs.

Using well-known higher level abstractions is fine, but remember that there is a cost to doing so. There should be a strong reason in the larger context for using a control flow library, or promises, or event-emitters (and often there is).


The intent of this blog post was to cover ground on four areas that require particular attention and understanding when working with Node.js. Whether you’re just getting started or have been using Node for a while, I hope there was something in here for you. This post was assembled from the first four articles in a ten part series on NodeCrunch. If you found it helpful, check out the ten tips for coding with Node.js series.

By: David Mark Clements

David Mark Clements is a JavaScript and Node.js specialist based in Northern Ireland. He is nearForm's lead trainer and training content curator. From a very early age he was fascinated with programming. He first learned BASIC on one of the many Ataris he had accumulated by the age of nine. David learned JavaScript at age twelve, moving into Linux administration and PHP as a teenager. Node has become a prominent member of his toolkit due to its versatility, vast ecosystem, and the cognitive ease that comes with full-stack JavaScript. David is author of Node Cookbook (Packt), now in its second edition.