Object-oriented technology was great – now enter microservices with Node.js
By Matteo Collina

Once upon a time, I learned programming on my father’s lap (yes, I was young). Back then, I didn’t worry too much about architecture. I just wanted to write video games so that I could play them.

Later on, I studied software engineering, databases and programming at university, and over the years my professors repeated one example several times.

In my first year, they asked me to write a Student class. During my engineering course, when we studied UML, they asked me to model the IT system for the university. During my course on databases, we discussed the entity-relationship model and the relevant queries for that same university IT system. The following examples use this system. (As all developers know, UML is the most beautiful programming language ever…)

Figure 1 – The Student class

The first version of any university system is a Student class (Fig. 1). A Student class contains a name, a surname, some email addresses, and it has the key responsibility of saving itself. My professors at university would not have given me a good mark for that. In fact, I would have had to iterate to version 2, splitting out an Email class (which is responsible for sending emails!) and then iterate to version 3 by introducing a Person class.

Our Person class is the key component of any model-view-controller system (such as Ruby on Rails, Django, LoopBack, Spring MVC): the model. Models can be saved, loaded and generally persisted; they hide the database from the developer. We developers do not like databases, so we try to wrap them in nice abstractions that we can work with easily. For the last twenty years we have preferred to work with objects and classes to model our apps, but we should remind ourselves what an object is.

In software, objects are composed of states and their behaviors. In most languages, these behaviors are implemented as fields and methods. Fields (or properties) contain the state, and methods (or functions) perform an action that depends on the object’s internal state. A class is just the specification of an object type: all instances of the Student class will share the same properties and methods.
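As a sketch of that definition in the language used throughout this article (a hypothetical snippet, not one from the original post), state lives in fields and behaviour in methods that read that state:

```javascript
// A minimal, illustrative Student: fields hold the state,
// the method computes a value from that state.
function Student (name, surname) {
  this.name = name       // state
  this.surname = surname // state
}

// Behaviour that depends on the instance's internal state.
Student.prototype.fullName = function () {
  return this.name + ' ' + this.surname
}

var matteo = new Student('Matteo', 'Collina')
console.log(matteo.fullName()) // 'Matteo Collina'
```

Every instance created with `new Student(...)` shares the same `fullName` method, exactly as the class specification promises.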

Objects provide developers with five main tools: encapsulation, accessors, abstraction, inheritance and polymorphism. Encapsulation is extremely important, but it does not actually require objects at all – we can achieve it with closures alone. Accessors are extremely useful for reaching encapsulated values; I believe I spent weeks filling in the accessor wizard when I was working as a Java developer (Fig. 2). Accessors are necessary for providing computed values, but they are usually hard to distribute (should we serialize the value or not? Is it picked up by JSON.stringify?). Inheritance, abstraction and composition lead to massive class diagrams in UML terms (Fig. 3); however, those diagrams convey very little of the interaction going on between the classes, or of the business logic captured there.

Figure 2 – Creating Java accessors in Eclipse
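The claim that closures can provide encapsulation without objects can be sketched like this (a hypothetical counter, not code from the original post) – the variable is reachable only through the functions that close over it:

```javascript
// Encapsulation without a class: `count` is private to the closure;
// only the two returned functions can read or change it.
function makeCounter () {
  var count = 0

  return {
    increment: function () { count++ },
    current: function () { return count }
  }
}

var counter = makeCounter()
counter.increment()
counter.increment()
console.log(counter.current()) // 2
console.log(counter.count)     // undefined – the state is hidden
```

There is no way to corrupt `count` from the outside; the closure gives us the same guarantee a private field would.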

After implementing several systems around the concept of objects, we can confirm that objects are really good. However, modelling the interactions between classes can lead to systems that are extremely complex to understand (Fig. 3). In fact, reasoning in terms of classes makes distributed systems extremely complex: objects are hard to make remote, and it is difficult to provide references to instances running in a different memory space (or process). In the past, we developed Java RMI, CORBA, SOAP (also known as 'the death star') and, to some extent, even REST to cope with this.

Figure 3 – A gigantic class diagram

All the remote technologies that I mentioned in the previous paragraph have the same underlying principle: there is a public API that is available over the network, and clients send messages to it. APIs can either be good or bad. Good APIs are a form of documentation, while bad APIs are created for the sole purpose of distributing software. I firmly believe that writing code for the sake of writing code is pure technical debt. It adds little value to our finished product. In fact, I am a lazy developer and I think that no code is better than any code (because we do not need to maintain it!). I recommend that everybody watch this talk.

Maybe we are doing it wrong. We have been designing models and classes, but maybe we should be modelling messages and interactions instead. Alan Kay, who coined the term 'object-oriented programming', said: “The key in making great and growable systems is much more to design how its modules communicate rather than what their internal properties and behaviors should be”. Designing message exchanges rather than classes allows us to represent the business value of our code more clearly. In fact, the gigantic diagram in Figure 3 represents a library that has very few responsibilities: storing, fetching, updating and deleting an entity in a generic way. In the past, I used that gigantic library to persist my Student model.

How can we define a message in 2015 and (soon) 2016? We can imagine it being composed of maps, arrays, strings and numbers – in other words, a JSON document. However, JSON is not complete, as it excludes binary data and data streams. JSON is currently the industry standard for sending data across the wire, but we can use alternatives such as msgpack to handle binary data. An example of a message is shown in Figure 4.

 

{
  person: {
    name: 'Matteo',
    surname: 'Collina'
  }
}

Figure 4 – An example of a message

In fact, we can easily implement the recipient of that message using the Node.js callback style (Figure 5):

recipient(message, function (err, result) {
  console.log(err, result)
})

Figure 5 – A message recipient

How do we know who the recipient is? We could design a system similar to Java RMI or the 'death star' style of service registries, but none of those approaches worked well in practice. A completely different solution is to encode the intention of the message within the message itself (Fig. 6):

{
  role: 'person',
  cmd: 'save',
  person: {
    name: 'Matteo',
    surname: 'Collina'
  }
}

Figure 6 – The message, including the action to be performed

So, we have defined our message. Now we need to implement the logic that acts on the message and produces some output. This is a straightforward implementation of the command pattern from the Gang of Four (Fig. 7): our callback acts as the receiver, and the logic that picks the right recipient is the invoker. We could implement the invoker using a huge switch/case or a massive if/else, but we can do better: we can use pattern matching (see the example in Figure 8).

Figure 7 – Command pattern in UML
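The switch/case invoker that the text warns against would look something like this hypothetical sketch – note how every new role/cmd pair forces an edit to the dispatcher itself:

```javascript
// A naive invoker: dispatch on the message's own fields.
// This does not scale – the switch grows with every new command.
function invoke (msg, cb) {
  switch (msg.role + '/' + msg.cmd) {
    case 'person/save':
      return cb(null, { saved: msg.person })
    case 'person/load':
      return cb(null, { loaded: true })
    default:
      return cb(new Error('no recipient for this message'))
  }
}

invoke({ role: 'person', cmd: 'save', person: { name: 'Matteo' } },
  console.log)
```

Pattern matching inverts this: each recipient registers the shape of the messages it understands, and the dispatcher stays untouched.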

Pattern matching is a technique commonly used in functional and declarative languages; for example, it is a core feature of Erlang and Prolog. Pattern matching in non-functional languages is tricky, but at nearForm we developed two libraries for doing it: patrun and bloomrun. Figure 8 contains an example of bloomrun in use.

var bloomrun = require('bloomrun')

var i = bloomrun()
i.add({ cmd: 'save' }, function save (arg, cb) {
  console.log('saving ' + JSON.stringify(arg))
  cb(null, true)
})

var msg = {
  cmd: 'save',
  person: { name: 'matteo' }
}
i.lookup(msg)(msg, console.log)

Figure 8 – Example of bloomrun in use

We call this way of composing software microservices, and we wrote our own framework for it: Seneca. Seneca features multiple transports, from bare TCP to message buses. It allows you to build a monolith and then split it into multiple processes, without writing any code to support remote access.

In Seneca, we can start by writing a simple script that calls itself: a simple piece of code that stores 'devices' – things with a name and some properties – in a fake, in-memory database. There is no need for a complex ORM.

This piece of code is not really reusable, so we can split it out as a Seneca plugin:

'use strict'

var uuid = require('uuid')
var plugin = 'devices'
var devices = {}

function build () {
  var seneca = this

  seneca.add({
    role: plugin,
    cmd: 'save'
  }, function (msg, done) {
    var device = {
      id: uuid.v4(),
      name: msg.name,
      properties: msg.properties
    }

    devices[device.id] = device
    done()
  })

  seneca.add({
    role: plugin,
    cmd: 'list'
  }, function (msg, done) {
    done(null, Object.keys(devices).map(function (id) {
      return devices[id]
    }))
  })

  return plugin
}

module.exports = build

We can then load the plugin from a separate script and act on it:

'use strict'

var seneca = require('seneca')()

seneca.use('devices')

seneca
  .act({
    role: 'devices',
    cmd: 'save',
    name: 'mything'
  })
  .act({
    role: 'devices',
    cmd: 'save',
    name: 'anotherthing'
  })
  .act({
    role: 'devices',
    cmd: 'save',
    name: 'abcde',
    properties: {
      color: 'red'
    }
  })
  .act({
    role: 'devices',
    cmd: 'list'
  }, console.log)

And in fact, we can use the remote capabilities of Seneca to expose the plugin on the network and call it remotely:

'use strict'

var seneca = require('seneca')()

seneca.client()

seneca
  .act({
    role: 'devices',
    cmd: 'save',
    name: 'mything'
  })
  .act({
    role: 'devices',
    cmd: 'save',
    name: 'anotherthing'
  })
  .act({
    role: 'devices',
    cmd: 'save',
    name: 'abcde',
    properties: {
      color: 'red'
    }
  })
  .act({
    role: 'devices',
    cmd: 'list'
  }, console.log)

The server side simply loads the plugin and listens for incoming messages on the default transport:

'use strict'

var seneca = require('seneca')()

seneca.use('devices')
seneca.listen()

What’s more, it is highly integrated with Hapi, thanks to Chairo, so we can serve that microservice through standard REST calls to improve communication with other teams:

'use strict'

var Chairo = require('chairo')
var Hapi = require('hapi')
var Joi = require('joi')

var server = new Hapi.Server()
server.connection({ port: 3000 })

server.register({ register: Chairo }, function (err) {
  if (err) {
    throw err
  }

  server.seneca.use('devices')

  server.route({
    method: 'GET',
    path: '/devices',
    handler: function (request, reply) {
      return reply.act({ role: 'devices', cmd: 'list' })
    }
  })

  server.route({
    method: 'POST',
    path: '/devices',
    config: {
      validate: {
        payload: {
          name: Joi.string().min(3).max(10)
        }
      }
    },
    handler: function (request, reply) {
      return reply.act({
        role: 'devices',
        cmd: 'save',
        name: request.payload.name
      })
    }
  })
})

server.start(function (err) {
  if (err) {
    throw err
  }
  console.log('Server running at:', server.info.uri)
})

Finally, we can use lout to automatically create the API documentation, as shown in Figures 9 and 9.1:

'use strict'

var Chairo = require('chairo')
var Hapi = require('hapi')
var Joi = require('joi')

var server = new Hapi.Server()
server.connection({ port: 3000 })

server.register([
  require('vision'),
  require('inert'),
  { register: require('lout') },
  { register: Chairo }
], function (err) {
  if (err) {
    throw err
  }

  server.seneca.use('devices')

  server.route({
    method: 'GET',
    path: '/devices',
    handler: function (request, reply) {
      return reply.act({ role: 'devices', cmd: 'list' })
    }
  })

  server.route({
    method: 'POST',
    path: '/devices',
    config: {
      validate: {
        payload: {
          name: Joi.string().min(3).max(10)
        }
      }
    },
    handler: function (request, reply) {
      return reply.act({
        role: 'devices',
        cmd: 'save',
        name: request.payload.name
      })
    }
  })
})

server.start(function (err) {
  if (err) {
    throw err
  }
  console.log('Server running at:', server.info.uri)
})

 

Figure 9 – Lout automatic rendering of API docs

Figure 9.1 – Using lout for automatic rendering of API docs

At the FullStack London and MuCon conferences, I did a live demo that you can watch here. The slide deck is available here.

My next blog post is based on my conference talks at Node Interactive in Portland, US on December 8 and 9. Watch this space!
