This article was contributed by Abhinav Keswani
Abhinav heads up the Bespoke Solutions team at Trineo, a boutique software development shop based in New Zealand and Australia. After way too many years wrangling the vagaries of massive public-facing web infrastructure, Abhinav and the Trineo team focus on developing applications that use modern cloud technology. Trineo are listed Heroku Dev Partners and Salesforce Consulting Partners.
Queuing in Ruby with Redis and Resque
Last updated 07 March 2017
This article may be out of date.
The delegation of long-running, computationally expensive and high-latency jobs to a background worker queue is a common pattern used to create scalable web applications. The aim is to serve end-user requests with the fastest possible responses, by ensuring that all long-running jobs are handled outside the request/response cycle.
Isolated longer running jobs can be handled by a separate worker pool that can scale independently. These jobs can asynchronously handle tasks like fetching data from remote APIs, reading RSS feeds or handling long running uploads to S3.
Resque (pronounced ‘rescue’) is a Redis-backed library for creating background jobs, placing those jobs on multiple queues, and handling those jobs outside of the user request/response cycle.
Resque is designed to be used in scenarios that require a high volume of job entries, and provides mechanisms to ensure visibility and reliability of behavior while providing statistics using a web dashboard.
Consider Resque as a good choice for applications that run multiple queues each with many thousands of job entries, where worker behavior can be volatile. Volatile worker behavior is mitigated by forking child processes to handle jobs, which ensures that any out-of-control workers can be dealt with in isolation.
To install Redis locally, use the following approach on Mac OS X.
$ brew install redis
To see if you're already running Redis, check the output of the redis-cli ping command. A PONG response indicates you already have Redis installed and running.
To test your install, firstly ensure that the Redis server is running.
Then use the following simple commands.
$ redis-cli ping
PONG
$ redis-cli
redis 127.0.0.1:6379> set foo bar
OK
redis 127.0.0.1:6379> get foo
"bar"
To install Redis on Linux, and to see more details on an effective quickstart, refer to the documentation on the Redis site.
The example application mentioned later in this article has a dependency on ImageMagick. To install ImageMagick on OS X:
$ brew install imagemagick
Resque on Heroku
Prior to v1.22 there was a disparity between Resque’s signal handling and Heroku’s process management that caused orphaned and incomplete jobs between worker restarts. Starting with v1.22 Resque’s handling of TERM and QUIT signals is in accordance with UNIX standards and maintains compatibility with the Heroku runtime that previous versions lacked.
To run Resque on Heroku you need to specify the correct Resque version and provide a couple options to the worker process.
In your application Gemfile, specify Resque v1.22.0.
gem 'resque', "~> 1.22.0"
Run bundle install to update your gems.
To opt in to UNIX-compatible signal handling in Resque v1.22 you will also need to provide a TERM_CHILD environment variable to the Resque worker process. This can be done inline in your application Procfile:
resque: env TERM_CHILD=1 bundle exec rake resque:work
TERM_CHILD tells Resque to pass SIGTERM from the parent to the child process, ensuring that all child worker processes have time to execute an orderly shutdown.
The default period Resque waits before sending SIGKILL to its child processes is 4 seconds. To give your workers more time to shut down gracefully, modify the RESQUE_TERM_TIMEOUT environment variable.
resque: env TERM_CHILD=1 RESQUE_TERM_TIMEOUT=7 bundle exec rake resque:work
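The behavior TERM_CHILD enables rests on ordinary UNIX signal handling: a process that traps SIGTERM gets a chance to wrap up its current work before exiting. A minimal sketch of that idea in plain Ruby (not Resque's actual implementation) looks like this:

```ruby
# Sketch of graceful-shutdown signal handling. A real Resque worker
# would stop picking up new jobs and finish (or re-enqueue) the
# current one inside the handler.
finished_cleanly = false

Signal.trap("TERM") do
  finished_cleanly = true  # stand-in for orderly shutdown work
end

# Simulate the platform sending SIGTERM to this process.
Process.kill("TERM", Process.pid)
sleep 0.1  # give the interpreter a moment to run the handler

finished_cleanly  # => true
```

If the handler takes longer than the grace period (RESQUE_TERM_TIMEOUT in Resque's case), the process is forcibly killed with SIGKILL, which cannot be trapped.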
For the remainder of this article, a reference application will be cited to illustrate the working parts of an application that uses Resque.
The main premise of the application is to watermark uploaded images and store them on AWS S3. When a user uploads an image, it is saved to S3, and then a Resque job is enqueued to create a watermarked copy of the user's image, which is also stored on AWS S3.
The source for this reference application can be found on GitHub.
To set this app up locally firstly fork it on GitHub and then clone it locally, for example:
$ git clone git://github.com/trineo/resque-example.git
To install application dependencies run:
$ bundle install
Sign up to AWS S3 if you haven’t already done so, following these instructions.
To set up your local environment, and connectivity to all backing services, you need to create a .env file.
$ cp .env.example .env
Create two buckets on S3 that respectively will be used to store original and watermarked images. Edit the .env file appropriately to specify:
- the names of these buckets
- AWS S3 secret access key, and access key id
- your local Redis URL (already set to the default Redis value)
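For illustration, a filled-in .env might look like the sketch below. The AWS variable names match the config vars set later in this article; the bucket names are placeholders, and REDIS_URL is an assumed name for the Redis setting (the .env.example file in the repository is authoritative):

```shell
# Illustrative values only -- substitute your own buckets and keys.
AWS_S3_BUCKET_ORIGINALS=my-originals-bucket
AWS_S3_BUCKET_WATERMARKED=my-watermarked-bucket
AWS_ACCESS_KEY_ID=<your access key id>
AWS_SECRET_ACCESS_KEY=<your secret access key>
REDIS_URL=redis://127.0.0.1:6379
```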
Running the app locally
Start Redis in a separate shell.
Then start the watermarked application.
$ heroku local
This should now start a web process as well as a worker process, both of which are specified in the Procfile of the application.
Verify and use local setup
Using the Resque dashboard, check that the app is running:
- Visit the following URL: http://localhost:5000/resque
- This page should load the Resque web dashboard and say “0 of 1 Workers Working”
- Visit the following URL: http://localhost:5000/resque/workers
- See what the worker is up to on this page, and more upon clicking on the worker link
To interact with the application and queue a job, get a test image and run the following curl command (where
src.jpeg is the test image):
$ curl -F "file=@src.jpeg" "http://localhost:5000/upload"
This should have the following effect:
- The worker should show it is working
- An image called src.jpeg should be present in both “originals” and “watermarked” buckets
- The image in the watermarked bucket should be … watermarked!
- The index page of the reference app (http://localhost:5000) should show some statistics about file upload and watermarking activity
Jobs are queued by the upload post method in the main Sinatra file,
resque-example-app.rb. The upload method saves the file to S3 and then enqueues a job to watermark the uploaded file.
Now, any worker that is bound to the same Redis backing service can process this job.
The enqueue method has the following signature:
Resque.enqueue(klass, *args)
In calling enqueue, the perform method of klass is called with all the original enqueue arguments. These arguments are serialized as JSON, so it is important to ensure that they can in fact be serialized as JSON. Arguments like symbols or entire ActiveRecord objects will not work. Instead, send simple values such as object IDs, or, as in the example application, an AWS file token key.
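A quick JSON round trip in plain Ruby (using a hypothetical S3 key as the argument) illustrates why only JSON-safe arguments are reliable:

```ruby
require 'json'

# Resque serializes job arguments to JSON before storing them in Redis,
# and a worker gets back only what survives that round trip.
args = [:resize, { quality: 90 }]
round_tripped = JSON.parse(JSON.generate(args))
round_tripped  # symbols and symbol keys come back as plain strings

# Safe pattern: pass a simple identifier and let the worker
# look up whatever it needs.
safe_args = ["uploads/src.jpeg"]
JSON.parse(JSON.generate(safe_args)) == safe_args  # => true
```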
The Watermark worker
The Watermark class is responsible for watermarking files in the background. Once a job has been enqueued as shown above, and when a worker is ready, the “perform” method of the Watermark class is invoked with the S3 file token being sent as an argument.
Watermark.perform is a class method that creates a new instance of the class, such that useful instance variables are set up.
def self.perform(key)
  (new key).apply_watermark
end
perform then invokes the apply_watermark method on this new instance, which applies and saves the watermarked file.
def apply_watermark
  Dir.mktmpdir do |tmpdir|
    ...
    save_watermarked_file(watermarked_local_file)
  end
end

def save_watermarked_file(watermarked_local_file)
  ...
end
Note that it is considered bad practice to store files on the Heroku filesystem. What is happening in the reference application is:
- The file is transient: the filesystem is used to store the file only so that it can be watermarked, and once this is complete the file is deleted (refer to Dir.mktmpdir)
- Brief local file processing conducted in the background, using the process's own filesystem, is not considered bad practice (reference)
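The transient-file pattern described above can be sketched with Ruby's tmpdir standard library; the filename and file contents here are placeholders:

```ruby
require 'tmpdir'

inside = nil
leftover_path = nil

Dir.mktmpdir do |tmpdir|
  leftover_path = File.join(tmpdir, "src.jpeg")
  File.write(leftover_path, "fake image bytes")  # stand-in for the S3 download
  inside = File.exist?(leftover_path)            # file is available for processing here
end

# Once the block exits, Dir.mktmpdir removes the directory and its contents.
inside                      # => true
File.exist?(leftover_path)  # => false
```

Because cleanup happens automatically when the block exits, nothing lingers on the dyno filesystem even if processing raises an exception.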
Define and provision workers
Resque workers are defined in the reference application's Procfile:
resque: env TERM_CHILD=1 bundle exec rake resque:work
These workers can be provisioned by running a simple scaling command such as:
heroku ps:scale resque=2 --app appname
Alternatively, one can run multiple worker threads within a single process/dyno by modifying the Procfile accordingly:
resque: env TERM_CHILD=1 COUNT=2 bundle exec rake resque:workers
Note two main differences:
- The COUNT variable specifies the number of worker threads to run
- The resque:workers rake task is invoked
As per prior instructions, start the application, and issue the given curl command to post a file to the upload handler. Subsequently, this is the log stream generated by the web and resque workers.
20:47:15 web.1    | 127.0.0.1 - - [14/Aug/2012 20:47:15] "POST /upload HTTP/1.1" 200 - 3.1093
20:47:19 resque.1 | Initialized Watermark instance
20:47:19 resque.1 | Opening original file locally: /var/folders/kq/src.jpeg
20:47:19 resque.1 | Writing watermarked file locally: /var/folders/kq/watermarked_src.jpeg
20:47:22 resque.1 | Persisted watermarked file to S3: https://your-watermarked-bucket.s3.amazonaws.com/src.jpeg
Resque leaves it to the developer to decide how best to handle job failure, using an 'on_failure' hook. A common approach is to re-enqueue a job if it fails. The reference application shows a simple example of this:
module RetriedJob
  def on_failure_retry(e, *args)
    ...
    Resque.enqueue self, *args
  end
end

class Watermark
  extend RetriedJob
  ...
end
More complex implementations would stipulate a number of retries, so that a job is given n attempts before giving up, and additionally stipulate a delay between retries. The resque-retry plugin provides much of this functionality.
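Such a bounded retry can be sketched in plain Ruby. The MAX_ATTEMPTS constant, the attempt-counter argument, and the in-memory Resque stand-in (used so the sketch runs outside a real worker) are all illustrative assumptions; resque-retry provides a production-ready version of this idea.

```ruby
# Stand-in for Resque so the sketch runs without Redis:
# enqueued jobs are simply collected in an array.
module Resque
  def self.jobs
    @jobs ||= []
  end

  def self.enqueue(klass, *args)
    jobs << [klass.name, *args]
  end
end

module RetriedJob
  MAX_ATTEMPTS = 3

  # Resque passes the exception and the original job arguments to
  # on_failure_* hooks; here an attempt counter rides along as the
  # last argument.
  def on_failure_retry(exception, key, attempt = 1)
    if attempt < MAX_ATTEMPTS
      Resque.enqueue(self, key, attempt + 1)
    else
      warn "Giving up on #{key} after #{attempt} attempts: #{exception.message}"
    end
  end
end

class Watermark
  extend RetriedJob
end

Watermark.on_failure_retry(RuntimeError.new("S3 timeout"), "src.jpeg")
Resque.jobs  # => [["Watermark", "src.jpeg", 2]]
```

Threading the attempt count through the job arguments keeps the retry state in Redis itself, so no extra bookkeeping store is needed.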
Failed jobs are visible on the Resque dashboard, which is described below.
When worker termination is requested via the
SIGTERM signal, Resque throws a
Resque::TermException exception. Handling this exception allows the worker to cease work on the currently running job and gracefully save state by re-enqueueing the job so it can be handled by another worker.
Handling this exception on Heroku is recommended due to the frequency of dyno restarts and the likelihood that high-throughput worker processes will encounter termination requests.
The reference application shows how to cleanly shut down a worker on receiving the SIGTERM signal.
require 'resque/errors'
...
def self.perform(key)
  (new key).apply_watermark
rescue Resque::TermException
  Resque.enqueue(self, key)
end
There are a number of ways to introspect Resque's behavior. The best place to do this is the built-in Resque web dashboard. As mentioned earlier, the reference application exposes the dashboard here: http://localhost:5000/resque.
The dashboard allows you to inspect the following:
- failed jobs and stack traces
- current working jobs
- useful Redis stats
Using the console, you can inspect the above with the following commands:
Resque.info
Resque.queues
Resque.redis
Resque.size(queue_name)
Resque.peek(queue_name, start=1, count=1)
Resque.workers
Resque.working
Finally, the reference application itself has a small dashboard on its index page that shows file upload and watermarking activity: http://localhost:5000.
To deploy the reference application to Heroku, after it has been cloned locally, firstly create a new Heroku app, and provision a new instance of Redis:
$ heroku create --stack cedar-14
$ heroku addons:create rediscloud
Push the app to Heroku.
$ git push heroku master
Set up the required AWS bucket and security config vars.
$ heroku config:set \
>   AWS_S3_BUCKET_ORIGINALS=<insert your originals bucket name> \
>   AWS_S3_BUCKET_WATERMARKED=<insert your watermarked bucket name> \
>   AWS_ACCESS_KEY_ID=<insert your access key id> \
>   AWS_SECRET_ACCESS_KEY=<insert your secret access key>
Check that this worked.
$ heroku config | grep AWS
AWS_ACCESS_KEY_ID:         access key id
AWS_S3_BUCKET_ORIGINALS:   your originals bucket name
AWS_S3_BUCKET_WATERMARKED: your watermarked bucket name
AWS_SECRET_ACCESS_KEY:     secret access key
Scale up the web and background workers.
$ heroku ps:scale web=1 resque=1
Open the app in a browser.
$ heroku open
Open the resque web dashboard in a new browser tab using the “Resque Dashboard” link.
Open your AWS Management Console, and open the watermarked bucket.
Taking note of your Heroku app's name (e.g. vast-sierra-1234), post an image to the upload endpoint.
$ curl -F "file=@src.jpeg" "http://appname.herokuapp.com/upload"
A watermarked version of the image that was posted should be in your watermarked bucket in a few seconds (refresh the bucket view in the AWS console).
Furthermore, check the index page of the reference application to see a list of public URLs of recently watermarked images, or examine the app logs.
$ heroku logs --tail
Look for a line like the following, which contains the public URL of your watermarked image.
app[resque.1]: Persisted watermarked file to S3: ...
Visit the public URL to see the watermarked image.