Optimizing Node.js Application Concurrency
Last updated December 04, 2024
Node.js has a limited ability to scale to different container sizes. It’s single-threaded, so it can’t automatically take advantage of additional CPU cores.
Node.js apps must fork multiple processes to maximize their available resources. This “clustering” is supported by the Node.js Cluster API. With the Cluster API, you can optimize your app’s performance across various dyno types.
Heroku Enterprise customers with Premier or Signature Success Plans can request in-depth guidance on this topic from the Customer Solutions Architecture (CSA) team. Learn more about Expert Coaching Sessions, or contact your Salesforce account executive.
Enabling Concurrency in Your App
We recommend that all applications support clustering. Even if you don’t anticipate running more than a single process, clustering offers greater control and flexibility over your app’s performance. Let’s take a look at how this works with a simple clustered app:
```javascript
const cluster = require('node:cluster')
const http = require('node:http')
const process = require('node:process')

const numOfWorkers =
  process.env.HEROKU_AVAILABLE_PARALLELISM || // for Fir-based apps
  process.env.WEB_CONCURRENCY || // for Cedar-based apps
  1
const port = process.env.PORT || 5006

if (cluster.isPrimary) {
  console.log(`Primary ${process.pid} is running`)

  for (let i = 0; i < numOfWorkers; i++) {
    cluster.fork()
  }

  cluster.on('exit', (worker, code, signal) => {
    console.log(`Worker ${worker.process.pid} died`)
  })
} else {
  http.createServer((req, res) => {
    res.writeHead(200)
    res.end('hello world\n')
  }).listen(port)

  console.log(`Worker ${process.pid} started`)
}
```
If you save the above code into a local file named `server.js`, you can then execute it and observe the cluster:
```shell
$ HEROKU_AVAILABLE_PARALLELISM=4 node server.js
Primary 1779 is running
Worker 1781 started
Worker 1780 started
Worker 1783 started
Worker 1782 started
```
Tuning the Concurrency Level
Each app has unique memory, CPU, and I/O requirements, so there’s no such thing as a one-size-fits-all scaling solution.
Classic Buildpack Node.js Apps
The Heroku buildpack provides reasonable defaults through two environment variables: `WEB_MEMORY` and `WEB_CONCURRENCY`. You can override both to fit your specific application.

- `WEB_MEMORY` specifies, in MB, the expected memory requirements of your application’s processes. It defaults to 512 MB.
- `WEB_CONCURRENCY` specifies the recommended number of processes to cluster for your application. It’s essentially `MEMORY_AVAILABLE / WEB_MEMORY`.
Read more about configuring your application’s memory use when clustering.
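The formula above can be sketched in a few lines. This illustrates the described calculation only, not the buildpack’s actual implementation; `defaultConcurrency` is a hypothetical name:

```javascript
// Sketch of WEB_CONCURRENCY ≈ MEMORY_AVAILABLE / WEB_MEMORY,
// clamped so at least one worker always runs.
function defaultConcurrency (availableMemoryMb, webMemoryMb = 512) {
  return Math.max(1, Math.floor(availableMemoryMb / webMemoryMb))
}

console.log(defaultConcurrency(1024)) // 1 GB with the 512 MB default -> 2
console.log(defaultConcurrency(14336)) // 14 GB (Performance-L) -> 28
```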
Defaults
Common Runtime
| Dyno Type | Number of Cluster Workers |
| --- | --- |
| Eco, Basic, Standard-1X | 1 |
| Standard-2X | 2 |
| Performance-M | 5 |
| Performance-L | 28 |
| Performance-L-RAM | 15 |
| Performance-XL | 31 |
| Performance-2XL | 63 |
Private Spaces and Shield Private Spaces
| Dyno Type | Number of Cluster Workers |
| --- | --- |
| Private-S / Shield-S | 2 |
| Private-M / Shield-M | 5 |
| Private-L / Shield-L | 28 |
| Private-L-RAM / Shield-L-RAM | 15 |
| Private-XL / Shield-XL | 31 |
| Private-2XL / Shield-2XL | 63 |
For Performance-L dynos, most applications work well with the 28 workers suggested for the dyno’s 14 GB of memory. Always test an application to see whether it can support that many concurrent processes.
These defaults are reasonable for most apps. In most cases, clustering more than one worker on a Standard-1X dyno hurts rather than helps performance. However, try any combination of `WEB_CONCURRENCY` with any dyno size to see what works best for your workload.
Decreasing `WEB_MEMORY` increases `WEB_CONCURRENCY`. Similarly, increasing `WEB_MEMORY` reduces concurrency. When the size of your dyno changes, `WEB_CONCURRENCY` is recalculated automatically to fill available memory.

You can also set `WEB_CONCURRENCY` directly, but doing so prevents your app from automatically reclustering when you change dyno sizes.
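For example, you can adjust either variable with the Heroku CLI; the values below are only illustrative:

```shell
# Assume each process needs about 256 MB; the buildpack recalculates
# WEB_CONCURRENCY from this value.
heroku config:set WEB_MEMORY=256

# Or pin the worker count directly (disables automatic recalculation
# when the dyno size changes).
heroku config:set WEB_CONCURRENCY=2
```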
Cloud Native Buildpack Node.js Apps
Heroku’s Node.js Cloud Native Buildpack provides a reasonable default for the number of cluster workers via the `HEROKU_AVAILABLE_PARALLELISM` environment variable. This value always matches the number of CPU cores listed for the dynos in Fir.

Node.js also provides a built-in method for determining a reasonable number of workers, `os.availableParallelism()`. However, because this container-aware API is relatively new, it doesn’t always produce the best values to use with Heroku dynos yet.