The diagram above illustrates what is perhaps the single most important concept that you will ever need to know about the performance of Node.js applications, so study it well.
Think about it carefully because it's actually a bit of a trick question!
The answer? All of it!
So what does this have to do with "broken promises"?
In our experience, the overwhelming majority of the performance cases we see with our customers are applications that allocate thousands upon thousands of synchronously-resolved promises in tight synchronous loops or in hot code paths that run repeatedly. In one extreme example, I worked with a customer who created over 30,000 synchronously-resolved promises in a single for-loop, which ended up blocking the Node.js event loop for over a minute! The worst part was that only a very small portion of that code actually scheduled asynchronous work, meaning that most of the promises created were wasted allocations.
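To make the anti-pattern concrete, here is a minimal sketch (illustrative only, not the customer's actual code) of a tight loop that wraps already-available values in promises. Every iteration allocates a promise and enqueues a microtask even though no asynchronous work is ever scheduled; a plain synchronous loop does the same work with none of that overhead:

```javascript
// Anti-pattern: each iteration allocates a synchronously-resolved
// promise, and each .then() enqueues a microtask that must be drained
// before the event loop can move on.
function processAll(items) {
  const results = [];
  for (const item of items) {
    results.push(Promise.resolve(item).then((v) => v * 2));
  }
  return Promise.all(results);
}

// The same work without the per-item promise allocations: since
// nothing here is actually asynchronous, a plain map suffices.
function processAllSync(items) {
  return items.map((v) => v * 2);
}
```

Both functions produce the same values; the difference is that the first allocates one promise (plus its reaction records) per item and forces a trip through the microtask queue for each.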
The `then` and `catch` handlers were being put immediately into the microtask queue, which is drained as soon as the for-loop exits and control returns to the native-layer function. Those thousands of `then` handlers would each schedule additional `then` handlers, which would also be put into the microtask queue, drained, and so on. Because most of those promises were resolved synchronously, all of this simply blocked the event loop, which had to wait for the native-layer function to finally return control before it could move on to the next thing.
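A small, self-contained demonstration (an assumed setup, not taken from the case above) shows this draining behavior: every handler on an already-resolved promise chain runs before the event loop ever reaches a zero-delay timer, because the microtask queue must be emptied completely first:

```javascript
const order = [];

// A macrotask: the event loop only reaches this after the
// microtask queue has been fully drained.
setTimeout(() => order.push('timer'), 0);

let p = Promise.resolve();
for (let i = 0; i < 3; i++) {
  // Each .then() on an already-resolved promise enqueues a new
  // microtask, so the whole chain drains back-to-back.
  p = p.then(() => order.push(`then ${order.length}`));
}
p.then(() => order.push('chain done'));

// Final ordering: 'then 0', 'then 1', 'then 2', 'chain done', 'timer' --
// every promise handler runs before the timer callback fires.
```

Multiply those three chained handlers by tens of thousands and the timer (and everything else waiting on the event loop) is stuck behind the drain, which is exactly the blocking described above.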
Let's return to the three examples running side by side that schedule code in the same order but print different results: