Are you heading along to this year’s FullStack London? If so, be sure to catch talks by Matteo Collina and David Mark Clements!
Components at Organisational Scale – Day 2, 10:30 am
How can you prevent Conway’s Law from leading to vulnerabilities & bloat when distributing component responsibilities across a large organisation? David and his team created a distributed live-build system to implement a Components-as-a-Service platform for a company with 15,000+ employees and annual sales of over $50B.
The distributed management of front-end components, from version control and component-level A/B testing to distributed state strategy and scenario-aware component mutation, requires intensive communication between teams, usually resulting in monotonous, error-prone maintenance tasks.
One common way organisations attempt to mitigate the communication overhead is to deploy view services that serve HTML to iframes. This embraces a Continuous Deployment strategy and allows for autonomy in individual teams. However, it tends to result in bloat through duplication, complex and inefficient approaches to inter-component communication, poor rendering performance, intensive resource usage on user devices, and a high risk of security issues resulting from human error and misunderstanding.
The Cost of Logging (Intermediate) – Day 2, 3:15 pm
In this joint talk by David & Matteo, you will explore the building of Pino, a JSON logger that’s up to 17 times faster than pre-existing loggers, with a growing ecosystem of support libraries, including high-performance integration with Express, Hapi, Koa and Restify.
How did they make it so fast? After showing what Pino can do, Matteo and David will walk through their tooling: 0x for flamegraphs, and autocannon for HTTP/1.1 benchmarking. They will then discuss V8’s optimising compilers, string flattening and the other mad-science optimisations embedded inside Pino.
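Both tools are npm installs; a typical profiling session might look like this (the tool names and flags are real, the target script name is illustrative):

```shell
# One-time install of the profiling and load-testing tools
npm install -g 0x autocannon

# Run the app under 0x; on exit it produces an interactive flamegraph
0x server.js

# In a second terminal, load-test over HTTP/1.1:
# 100 concurrent connections for 10 seconds
autocannon -c 100 -d 10 http://localhost:3000
```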
Take your HTTP server to Ludicrous Speed – Day 3, 12:00 noon
Express, Hapi, Restify, or just plain Node.js core? Which framework should you choose? In his journey in Nodeland, Matteo always wonders about the cost of his abstractions. require('http') can reach 25k requests/sec, Express 9k, and Hapi 2k.
He started a journey to write an HTTP framework with extremely low overhead, and Fastify was born. With its ability to reach an astonishing 20k requests/sec, Fastify can halve your cloud server bill.
How can Fastify be so… fast? Join Matteo and start by analysing flamegraphs with 0x, and then delve into --v8-options, discovering how to leverage V8’s feedback and optimise your code. He will explore function inlining, optimisations and deoptimisations. You will learn about the tools and the libraries you can use to do performance analysis on your code. In Fastify you reach a point where even allocating a callback is too slow: Ludicrous Speed.
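You can browse those V8 flags yourself before the talk:

```shell
# Ask Node to print every V8 flag it will accept (the full list is long)
node --v8-options | head -n 20
```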