Improving Node.js Concurrency with Cluster

Last Updated: 16 June 2015

Node’s event-driven, asynchronous design allows a single Node process to handle many requests concurrently. Node’s evented architecture is similar to, and influenced by, systems like Ruby’s EventMachine and Python’s Twisted, but Node takes the event model a bit further by presenting the event loop as a runtime construct instead of as a library. For many apps, a single Node process is more than enough to handle the request volume.

For more heavily trafficked Node apps running on multi-core systems (like those on Heroku), you can achieve greater concurrency by taking advantage of Node core’s cluster module, which automatically load-balances incoming connections across multiple processes. Cluster lets you fork child processes that share server ports, making better use of each dyno’s resources.

When using Cluster, consider taking advantage of the WEB_CONCURRENCY environment variable that Heroku provides, which recommends a level of concurrency based on the resources available to the dyno your app is running on.
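
As a minimal sketch (the fallback to a single worker is an assumption for local development, where the variable isn’t set), the recommended worker count can be read straight from the environment:

```javascript
// WEB_CONCURRENCY is set by Heroku based on the dyno's available resources.
// Fall back to a single worker when it isn't defined (e.g. running locally).
const workerCount = parseInt(process.env.WEB_CONCURRENCY, 10) || 1;
```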

Using Cluster

The Node.js docs provide an excellent example of ‘vanilla’ Cluster usage. However, several libraries have been built on top of Cluster to abstract away complexities like worker lifecycle management and signal management.
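
For orientation, here is a minimal sketch of that vanilla pattern (an illustration, not the docs’ exact example; the fallback from WEB_CONCURRENCY to the CPU count is an assumption): the master process forks one worker per unit of concurrency, and each worker binds an HTTP server to the same shared port, with cluster distributing incoming connections among them.

```javascript
const cluster = require('cluster');
const http = require('http');
const os = require('os');

// Prefer Heroku's WEB_CONCURRENCY hint; otherwise fork one worker per CPU core.
const workerCount =
  parseInt(process.env.WEB_CONCURRENCY, 10) || os.cpus().length;

if (cluster.isMaster) {
  for (let i = 0; i < workerCount; i++) {
    cluster.fork();
  }

  // Replace workers that exit so the app keeps serving requests.
  cluster.on('exit', (worker) => {
    console.log(`Worker ${worker.process.pid} exited; starting a replacement`);
    cluster.fork();
  });
} else {
  // Every worker listens on the same port; the cluster module
  // load-balances incoming connections across the workers.
  http.createServer((req, res) => {
    res.end(`Hello from worker ${process.pid}\n`);
  }).listen(process.env.PORT || 3000);
}
```

The wrapper libraries mentioned above typically encapsulate this same fork-and-restart logic while adding concerns such as graceful shutdown and signal handling.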