Dyno Shutdown Behavior
Last updated December 03, 2024
This article describes the behavior of dynos when they shut down on Heroku.
SIGTERM Signal
When the dyno manager restarts a dyno, it requests that your processes shut down gracefully by sending them a SIGTERM signal. This signal is sent to all processes in the dyno.
It’s possible for processes being shut down to receive multiple SIGTERMs.
The application processes have 30 seconds to shut down cleanly (the quicker the better). If any processes remain after that time period, the dyno manager terminates them forcefully with SIGKILL. When performing controlled or periodic restarts, the dyno manager spins up new dynos as soon as it sends the shutdown signals to the old dynos.
During the 30-second timeframe, use the signal to shut down gracefully. Try to prevent your processes from accepting new requests or jobs. Have them finish their current requests, or put jobs back on the queue for other worker processes to handle.
Graceful Shutdown Example
We can see how to shut down gracefully with a sample worker process. This example uses Ruby, but the mechanism is identical in other languages. Imagine a process that does nothing but loop and print out a message periodically:
STDOUT.sync = true # flush output immediately so each line reaches the logs

puts "Starting up"

trap('TERM') do
  puts "Graceful shutdown"
  exit
end

loop do
  puts "Pretending to do work"
  sleep 3
end
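A minimal Procfile for this worker might look like the following (a sketch that assumes the script above is saved as worker.rb):
worker: bundle exec ruby worker.rb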
If we deploy this code, along with the appropriate Gemfile and Procfile, and run heroku ps:scale worker=1, we see the process in its loop running on dyno worker.1:
heroku logs
2024-10-31T23:31:16+00:00 heroku[worker.1]: Starting process with command: `bundle exec ruby worker.rb`
2024-10-31T23:31:17+00:00 heroku[worker.1]: State changed from starting to up
2024-10-31T23:31:17+00:00 app[worker.1]: Starting up
2024-10-31T23:31:17+00:00 app[worker.1]: Pretending to do work
2024-10-31T23:31:20+00:00 app[worker.1]: Pretending to do work
2024-10-31T23:31:23+00:00 app[worker.1]: Pretending to do work
Run heroku restart on the dyno, which causes the dyno to receive SIGTERM:
heroku restart --dyno-name worker.1
Restarting worker.1 process... done
heroku logs
2024-10-31T23:31:26+00:00 app[worker.1]: Pretending to do work
2024-10-31T23:31:28+00:00 heroku[worker.1]: State changed from up to starting
2024-10-31T23:31:29+00:00 heroku[worker.1]: Stopping all processes with SIGTERM
2024-10-31T23:31:29+00:00 app[worker.1]: Graceful shutdown
2024-10-31T23:31:29+00:00 heroku[worker.1]: Process exited
app[worker.1] logged “Graceful shutdown”, as we expect from our code.
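In a real worker, you usually want to do more than exit from the trap handler immediately: finish the job that’s in flight, or put it back on the queue for another worker, before exiting. A common pattern is to have the TERM handler set a flag that the main loop checks between jobs. The following is a minimal sketch of that idea; fetch_job and process are stand-ins for your real queue and work code, and because the handler only sets a flag, receiving SIGTERM more than once is harmless:
STDOUT.sync = true

shutdown = false

# The handler only records that a shutdown was requested.
trap('TERM') do
  puts "Received SIGTERM, finishing the current job"
  shutdown = true
end

# Stand-in for pulling the next job from a real queue.
def fetch_job
  "job-#{rand(1000)}"
end

# Stand-in for the actual work; a long-running job is what makes the flag useful.
def process(job)
  puts "Working on #{job}"
  sleep 3
end

process(fetch_job) until shutdown

puts "Graceful shutdown"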
SIGKILL Shutdown Example
If we modify worker.rb to ignore the TERM signal, like so, we can simulate what happens when processes don’t shut down cleanly within 30 seconds:
STDOUT.sync = true

puts "Starting up"

trap('TERM') do
  puts "Ignoring TERM signal - not a good idea"
end

loop do
  puts "Pretending to do work"
  sleep 3
end
We see the behavior changes:
heroku restart --dyno-name worker.1
Restarting worker.1 process... done
heroku logs
2024-10-31T23:40:57+00:00 heroku[worker.1]: Stopping all processes with SIGTERM
2024-10-31T23:40:57+00:00 app[worker.1]: Ignoring TERM signal - not a good idea
2024-10-31T23:40:58+00:00 app[worker.1]: Pretending to do work
2024-10-31T23:41:01+00:00 app[worker.1]: Pretending to do work
...
2024-10-31T23:41:25+00:00 app[worker.1]: Pretending to do work
2024-10-31T23:41:27+00:00 heroku[worker.1]: Error R12 (Exit timeout) -> Process failed to exit within 30 seconds of SIGTERM
2024-10-31T23:41:27+00:00 heroku[worker.1]: Stopping all processes with SIGKILL
2024-10-31T23:41:28+00:00 heroku[worker.1]: Process exited
Our process ignores SIGTERM and continues processing. After 30 seconds, the dyno manager gives up waiting for the process to shut down gracefully and kills it with SIGKILL. It logs Error R12 (Exit timeout) to indicate that the process is behaving incorrectly.
CLI and API Behavior
In the Common Runtime, if you run ps:stop or perform the Dyno Stop API call on dynos that are part of a scaled process, the dyno manager automatically stops and restarts them.
In Cedar-generation Private Spaces, the same CLI command and API call instead terminate and replace the dedicated instance running the dyno(s).
In Fir spaces, dynos are powered by Kubernetes pods. When you run the CLI command or make the API call, the corresponding dyno(s) are terminated and replaced.
To permanently stop dynos, scale down the process with heroku ps:scale or use the Formation Batch Update API call.
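For example, with the sample worker app from earlier, you could stop the dyno once (the dyno manager restarts it automatically) or scale the process to zero to stop it permanently:
heroku ps:stop worker.1
heroku ps:scale worker=0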
Additional Reading
- Dyno Startup Behavior
- Dyno Restarts
- The Dyno Behavior category