Editor's note: This is a cross-post written by Delivery Architect Richie McColl. Richie has his own blog. Check it out and you'll find more great posts. Some of the links in this article point to Richie’s personal GitHub account.
A great addition to Node core
Node.js v18 introduced an experimental, built-in test runner. This is a great addition to Node core that we're excited about at NearForm. It means we can now write and run tests without needing to include and set up a third-party testing framework.
Automated tests allow us to work iteratively with more confidence. Developing through the lens of testing improves how we design our applications.
In this article, I'll demonstrate how to use the test runner to test a Fastify backend API. First, we'll go through different approaches to writing tests, with some examples. We'll also explore some of the command line options for running the tests.
“Note: we’re using this version because this release included support for different types of test reporters.”
I assume some basic knowledge of Fastify and the problem it is trying to solve. You can find more information about the principles behind Fastify here.
If you have some Fastify experience but would like a more in-depth tutorial, I recommend checking out the Fastify workshop. This is a good one to work through at your own pace. Most engineers who join NearForm go through this workshop during onboarding.
If you check out the repo, we have two main files: index.js and server.js.
The first one, the index file, is responsible for building the Fastify instance. This is also where we would typically register any plugins, decorators or routes.
The second file is responsible for starting the server and encapsulates the server startup logic.
API testing patterns
There are a couple of approaches we can take when writing tests. The first pattern we'll look at is request injection, which would be considered a typical unit test.
This allows us to send fake HTTP request objects to our server. In the example below, we're ‘injecting’ a request into the app.
The key thing about this pattern is that it doesn't use a socket connection. That means we can run our tests against an inactive server. In other words, server.listen is never called in these tests.
This inject behaviour comes from a library called light-my-request. You can find some documentation on that here.
There are also a few test-runner-specific things to note. We're importing the test module from node:test, which is the main interface for writing tests. We're also using the assert module as our assertion library. Everyone has opinions on assertion libraries, but we'll use assert for the sake of simplicity.
The async function here receives the test context as an argument. We can then use that to do things such as:
Call test lifecycle methods (t.before(), t.after())
Skip tests (t.skip())
Isolate subsets (t.runOnly(true))
“Note: runOnly will only work when starting Node with the --test-only flag. With this flag, Node skips all top-level tests except for the specified subset.”
There are two ways to configure this: set the only flag in the options of the tests you want to run, or call t.runOnly(true) from a parent test's context. For example:
An alternative pattern for writing tests is to start and stop a server inside the tests. This would be considered a typical integration test. Below is the code from the file in the tests folder, http-example.test.js:
This example introduces a few new additions. The first is that we're creating the Fastify app and calling listen on the server we created.
From the test runner, we're also using the describe/it pattern to organise the tests. it is an alias for the test function we've seen in the other test. To add the test lifecycle behaviour, we can import the before and after functions and use them directly.
If we run either of the test scripts, we'll see both of these tests fail.
This makes sense: Fastify returns a 404 because there is no handler for the todos route. We can get those tests green and passing by creating the todos handler in index.js.
Running either of the test scripts should output something similar to the following:
Let's briefly examine the test scripts from the package.json .
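One of them looks something like this (the script name is illustrative):

```json
{
  "scripts": {
    "test": "node --test tests/"
  }
}
```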
We use the --test flag to tell Node that we want to use the test runner. We also pass a test directory, because we want the test runner to recursively search it for test files to execute.

Watch mode
File watching is one of the first things to think about when setting up a project. In the past, I would normally reach for a third-party package such as nodemon. From Node.js v19, that no longer has to be the default: we can now use the experimental --watch flag.
Having this functionality baked in is a game changer. It allows us to create fast feedback loops for changes we make when writing software.
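A watch-mode script could look something like this (the script name is an assumption):

```json
{
  "scripts": {
    "test:watch": "node --test --watch tests/"
  }
}
```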
If we run either of these test scripts, we should see output that looks like the below:
By default, the test runner uses the Tap reporter. Some people like it, some don't. I find myself using the spec reporter, as it's less verbose.
The test runner comes with three reporter options (tap, spec and dot). You can also build your own custom reporter if you're not happy with the defaults.
This feature is configured via the --test-reporter flag, for example: --test-reporter=spec. If you run the tests with that flag enabled, you will see the test output formatted differently.
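As a package.json script, that could look like this (the script name is illustrative):

```json
{
  "scripts": {
    "test:spec": "node --test --test-reporter=spec tests/"
  }
}
```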
Request schema validation
For this last example, we'll go full TDD and demonstrate how Fastify handles request & response validation. The first thing we'll have to do is write the failing tests. Below are some example failing tests for todos-post.test.js :
Now, if we run this test file, both of these tests fail as expected, with the same 404 from the previous test. So, let's add the missing route in index.js.
We’ll also have to configure the route to send the correct status code, and we'll add some request schema validation that returns an error when no name is sent in the request payload.
If we run the tests now, we will see one test passing and one failing.
Response schema serialisation
Now let's finish up the success case response. We'll add response validation as well, ensuring that responses match the schema shape we expect, i.e. having a message property.
With these changes, a re-run of the tests should show everything as green and passing, as we expect.
As mentioned in the introduction, at NearForm we're very excited by Node.js features such as the test runner and we're actively contributing to this area of Node core.
I hope this article gives you an understanding of what a modern workflow with the test runner looks like. As Node.js approaches the big version 20 milestone, the future's looking bright.