Background Jobs in Node.js with Redis
Last updated December 09, 2021
It’s important for a web application to serve end-user requests as fast as possible. A good rule of thumb is to avoid web requests which run longer than 500ms. If you find that your app has requests that take one, two, or more seconds to complete, then you should consider using a background job instead. For a more in-depth understanding of this architectural pattern read the Worker Dynos, Background Jobs and Queueing article.
This can be even more important for Node.js servers where computationally expensive requests can block the Event Loop and prevent the server from responding to any new requests until the computation is complete. Separating this computation out into a different process keeps your web server light and responsive.
This article demonstrates using worker queues to accomplish this goal with a sample Node.js application using Bull to manage the queue of background jobs.
This article assumes that you have Redis (for local development) and the Heroku CLI installed.
Getting started
Follow the steps below to clone this application into your Heroku account:
Via Dashboard
- Click here to deploy the application
- Scale the worker process under the Resources tab in the Dashboard to at least one dyno so that jobs are processed
- Open your app in the browser to kick off new jobs and watch them complete
Via CLI
$ git clone git@github.com:heroku-examples/node-workers-example.git
$ cd node-workers-example
$ heroku create
$ heroku addons:create heroku-redis
$ git push heroku main
$ heroku ps:scale worker=1
$ heroku open
Application Overview
The application is composed of two processes:

- web - An Express server that serves the frontend assets, accepts new background jobs, and reports on the status of existing jobs
- worker - A small Node.js process that executes incoming jobs
Because these are separate processes, they can be scaled independently based on specific application needs. Read the Process Model article for a more in-depth understanding of Heroku’s process model.
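On Heroku, these two process types are declared in a Procfile at the root of the repository. The exact entry-point file names below are assumed from the files discussed in this article, roughly:

```
web: node server.js
worker: node worker.js
```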
The web process serves the index.html and client.js files, which implement a simplified example of a frontend interface that kicks off new jobs and checks in on them.
Web process
server.js is a tiny Express server. The important bits to note are connecting to the Redis server and setting up the named work queue:
let REDIS_URL = process.env.REDIS_URL || 'redis://127.0.0.1:6379';
let workQueue = new Queue('work', REDIS_URL);
and kicking off the job when a POST request comes in:
app.post('/job', async (req, res) => {
let job = await workQueue.add();
res.json({ id: job.id });
});
Generally you would not give clients direct access to kick off background jobs like this, but it is kept simple here for demonstration purposes.
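Once the client receives the job id, the frontend can poll the server until the job reaches a terminal state. A minimal sketch of that polling logic is shown below; `pollJob` and the shape of the status response (`{ state }`) are illustrative assumptions, not code from the sample repository:

```javascript
// Polls a job-status function until the job completes or fails.
// `fetchStatus` is any async function returning { state: string },
// e.g. a wrapper around fetch('/job/' + id).
async function pollJob(fetchStatus, { intervalMs = 100, maxAttempts = 50 } = {}) {
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    const { state } = await fetchStatus();
    if (state === 'completed' || state === 'failed') {
      return state;
    }
    // Wait before asking again so the server isn't hammered.
    await new Promise((resolve) => setTimeout(resolve, intervalMs));
  }
  throw new Error('Timed out waiting for job');
}
```

In a real frontend you would surface intermediate states (waiting, active) to the user rather than only the terminal result.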
Worker process
worker.js spins up a cluster of worker processes using throng. In this example the job simply sleeps for a bit before resolving, but this should be a good starting point for writing your own workers.
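The processor in the example follows roughly this shape. This is a sketch: `sleep`, `processJob`, and the timings are illustrative names, not code copied from the repository:

```javascript
// Resolve after the given number of milliseconds.
function sleep(ms) {
  return new Promise((resolve) => setTimeout(resolve, ms));
}

// A processor shaped like the sample worker's: it sleeps in small
// steps to simulate work, then resolves with a result value.
async function processJob(job) {
  let progress = 0;
  while (progress < 100) {
    await sleep(10);  // simulate a slice of work
    progress += 25;   // a real Bull worker could report this via job.progress(progress)
  }
  return { value: 'done' };
}
```

Whatever the processor returns is stored by Bull as the job's result, so the web process can later report it back to the client.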
There are two important concurrency concepts to understand here. The first is the number of workers:
let workers = process.env.WEB_CONCURRENCY || 2;
Each worker is a standalone Node.js process with an independent Event Loop. On Heroku dynos, a default value for this is set for you in the WEB_CONCURRENCY environment variable. It scales based on the amount of memory on the dyno, but may need to be tuned for the characteristics of your specific application. Read Optimizing Node.js Application Concurrency to learn more.
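One subtlety worth noting: environment variables are always strings, so the expression above can yield the string "2" rather than the number 2 when the variable is set. A small parsing helper (illustrative, not from the sample app) makes the fallback explicit:

```javascript
// Parse WEB_CONCURRENCY from an environment object, falling back
// to a default when it is missing or not a positive integer.
function workerCount(env, fallback = 2) {
  const parsed = parseInt(env.WEB_CONCURRENCY, 10);
  return Number.isInteger(parsed) && parsed > 0 ? parsed : fallback;
}

// e.g. workerCount(process.env)
```

Libraries like throng tolerate the string form, but normalizing early avoids surprises if the value is used in arithmetic elsewhere.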
The second is the maximum number of jobs each worker should process at a time:
let maxJobsPerWorker = 50;
workQueue.process(maxJobsPerWorker, async (job) => {
// ...
});
Each worker picks jobs off of the Redis queue and processes them, and this setting controls how many jobs each worker will attempt to process at once. It will likely need to be tuned for your application: if each job mostly waits on network responses, such as an external API or service, it can be much higher; if each job is CPU intensive, it might need to be as low as 1, in which case we recommend spinning up more worker processes instead.
Client Web App
client.js implements a tiny web frontend that allows you to kick off jobs and watch them process. In production, Bull recommends several official UIs that can be used to monitor the state of your job queue.
What you’ve learned here is only a small example of what Bull is capable of. It has many more features including:
- Priority queues
- Rate limiting
- Scheduled jobs
- Retries
For more information on using these features see the Bull documentation.