Managing VACUUM on Heroku Postgres
Last updated June 04, 2024
Postgres uses a mechanism called MVCC to track changes in your database. As a side effect, some rows become “dead” and are no longer visible to any running transaction. Dead rows are generated not just by DELETE operations, but also by UPDATEs, as well as transactions that have to be rolled back.
Your database needs periodic maintenance to clean out these dead rows. It’s essentially a form of garbage collection. Typically, this maintenance happens automatically, but it can be useful to understand the details and tune the maintenance settings as needed.
Vacuuming a Database
The built-in mechanism for managing this cleanup is called VACUUM. It can be run as a regular command, but Postgres also includes facilities for running the VACUUM process automatically in the background as a maintenance task, periodically trying to clean out old data where necessary. This process performs its maintenance based on a set of configuration parameters.
The default Heroku configuration is enough for many applications, but in some situations, you need to make some changes or occasionally take manual action.
Determining Bloat
To check whether you need to vacuum, you can run a query to give you information on table and index “bloat”. Bloat is extra space taken by these database objects on disk due to dead rows. The simplest way to check is to install the pg-extras plugin for the Heroku CLI.
After installing, you can check bloat by running the following command:
$ heroku pg:bloat DATABASE_URL --app example-app
type | schemaname | object_name | bloat | waste
-------+------------+-------------------------+-------+-----------
table | public | users | 1.0 | 109 MB
table | public | logs | 1.0 | 47 MB
index | public | queue_classic_jobs_pkey | 3.1 | 25 MB
table | public | reviews | 2.2 | 16 MB
table | public | queue_classic_jobs | 32.5 | 1512 kB
...
The “bloat” column shows the bloat factor, which is the fraction of the original table that exists as bloat. Because it’s a ratio, there are no units. The “waste” column shows the total disk space consumed by bloat in each table and index in the system.
A very large bloat factor on a table or index can lead to poor performance for some queries, as Postgres plans them without considering the bloat.
The threshold for excessive bloat varies according to your query patterns and the size of the table. Generally, anything with a bloat factor over 10 is worth looking into, especially on tables over 100 MB.
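This rule of thumb can be sketched in a few lines of Python. The cutoffs of 10 and 100 MB are the guidelines from this article, not hard Postgres limits, and the sample rows (including the hypothetical logs_archive table) are made up to mirror the pg:bloat output above:

```python
# Flag objects worth investigating, per the article's rule of thumb:
# a bloat factor over 10, especially on objects over 100 MB.
def worth_investigating(bloat_factor, size_mb, factor_cutoff=10, size_cutoff_mb=100):
    return bloat_factor > factor_cutoff and size_mb > size_cutoff_mb

# Hypothetical rows modeled on pg:bloat output: (name, bloat factor, total size in MB).
rows = [
    ("users", 1.0, 840),                    # no bloat to speak of
    ("logs_archive", 12.0, 450),            # large AND badly bloated
    ("queue_classic_jobs_pkey", 3.1, 120),  # large, mild bloat
    ("queue_classic_jobs", 32.5, 1.5),      # high factor, but tiny table
]
flagged = [name for name, bloat, mb in rows if worth_investigating(bloat, mb)]
print(flagged)  # ['logs_archive']
```

Note that queue_classic_jobs, despite its bloat factor of 32.5, is small enough that the bloat is unlikely to matter.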
To check on vacuuming in your database, you can use another pg-extras command:
$ heroku pg:vacuum-stats DATABASE_URL --app example-app
schema | table | last_vacuum | last_autovacuum | rowcount | dead_rowcount | autovacuum_threshold | expect_autovacuum
--------+--------------------+-------------+------------------+----------------+----------------+----------------------+-------------------
public | queue_classic_jobs | | 2013-05-20 16:54 | 82,617 | 36,056 | 16,573 | yes
public | logs | | 2013-05-20 16:27 | 1 | 18 | 50 |
public | reviews | | 2013-05-20 01:36 | 87 | 0 | 67 |
public | users | | 2013-05-20 16:28 | 0 | 23 | 50 |
...
This tells you when each table was last vacuumed, and whether that was through a manual action or the autovacuum background worker. It also shows the threshold number of dead rows that triggers an autovacuum for that particular table, and whether you can expect an autovacuum to occur.
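The autovacuum_threshold and expect_autovacuum columns can be reproduced from the other columns. With the default settings (described later in this article), a table expects an autovacuum once its dead rows exceed 50 + 0.2 × rowcount. A small sketch using the queue_classic_jobs row above:

```python
# Default autovacuum settings.
AUTOVACUUM_VACUUM_THRESHOLD = 50
AUTOVACUUM_VACUUM_SCALE_FACTOR = 0.2

def autovacuum_threshold(rowcount):
    """Dead-row count that triggers an autovacuum on a table of this size."""
    return AUTOVACUUM_VACUUM_THRESHOLD + AUTOVACUUM_VACUUM_SCALE_FACTOR * rowcount

# queue_classic_jobs from the pg:vacuum-stats output above:
rowcount, dead = 82_617, 36_056
threshold = autovacuum_threshold(rowcount)
print(int(threshold))    # 16573, matching the autovacuum_threshold column
print(dead > threshold)  # True, matching expect_autovacuum = yes
```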
VACUUM Variants
You can contain bloat by ensuring that VACUUM runs regularly. The autovacuum process only ever runs a regular, non-full VACUUM command. You can reduce bloat if it gets out of hand by running VACUUM FULL.
VACUUM FULL offers a more exhaustive cleanup and reduces bloat rather than just flagging space as available, as the regular VACUUM does. However, it’s also a much more heavyweight operation: VACUUM FULL rewrites the entire table and prevents any other statements from running concurrently, even simple SELECT queries.
Generally, it’s a good idea to keep autovacuum in an aggressive-enough configuration so you never need to run VACUUM FULL. Instead of running VACUUM FULL, which can be time-consuming, you can run VACUUM on a particular table, for example VACUUM table_name.
In some cases, where you’re using a table to only track transient data, such as a work queue, it’s useful to run the TRUNCATE command instead. This command deletes all the data in the table in a batch operation. For very bloated tables, this can be much faster than a DELETE followed by a VACUUM FULL.
Automatic Vacuuming with autovacuum
The most effective way to manage bloat is by tweaking autovacuum settings as necessary.
Four settings control when a table becomes eligible for VACUUM. On Heroku, the changes can only be made on a per-table basis.
$ heroku pg:psql
=> ALTER TABLE users SET (autovacuum_vacuum_threshold = 50);
ALTER TABLE
=> ALTER TABLE users SET (autovacuum_vacuum_scale_factor = 0.2);
ALTER TABLE
=> ALTER TABLE users SET (autovacuum_vacuum_insert_threshold = 1000);
ALTER TABLE
=> ALTER TABLE users SET (autovacuum_vacuum_insert_scale_factor = 0.2);
ALTER TABLE
Postgres triggers an autovacuum when the number of dead rows since the last VACUUM operation exceeds the defined threshold, or when the number of rows inserted since the last VACUUM operation exceeds the defined insert threshold.
The vacuum threshold is a raw number of dead rows needed, and the vacuum scale factor is the fraction of live rows in the table that must exist as dead rows. The defaults are 50 and 0.2.
The insert threshold is the minimum number of rows inserted before an insert-triggered vacuum, and the insert scale factor is the fraction of the table size added to that threshold. The defaults are 1000 and 0.2.
Together, the settings make up the actual threshold and insert threshold according to the following formulas:
vacuum threshold = autovacuum_vacuum_threshold +
autovacuum_vacuum_scale_factor * number of rows
vacuum insert threshold = autovacuum_vacuum_insert_threshold +
                          autovacuum_vacuum_insert_scale_factor * number of rows
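Plugged into Python, the insert-threshold formula looks like this. Note that, per the PostgreSQL documentation, the insert scale factor is applied to the table’s row count; the 1,000,000-row table is a made-up example:

```python
# Default insert-vacuum settings (available since Postgres 13).
AUTOVACUUM_VACUUM_INSERT_THRESHOLD = 1000
AUTOVACUUM_VACUUM_INSERT_SCALE_FACTOR = 0.2

def insert_vacuum_threshold(rowcount):
    """Inserted-row count that triggers an insert-driven autovacuum."""
    return (AUTOVACUUM_VACUUM_INSERT_THRESHOLD
            + AUTOVACUUM_VACUUM_INSERT_SCALE_FACTOR * rowcount)

# Hypothetical append-mostly table of 1,000,000 rows:
print(insert_vacuum_threshold(1_000_000))  # 201000.0 inserts before a vacuum
```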
On large tables, you want to decrease the scale factor to allow vacuum to start making progress earlier. For very small tables, you can increase the threshold, though it’s typically not necessary.
Furthermore, autovacuum has a built-in cost-based rate-limiting mechanism, to avoid having it overwhelm the system with VACUUM activity. In busy databases, however, this mechanism can mean that autovacuum doesn’t make progress quickly enough, leading to excessive bloat.
To avoid that, you can change the back-off settings to be less deferential. These changes can be made at the database level.
$ heroku pg:psql
=> select current_database();
current_database
------------------
dd5ir2j6frrtr0
(1 row)
=> ALTER DATABASE dd5ir2j6frrtr0 SET vacuum_cost_limit = 300;
ALTER DATABASE
=> ALTER DATABASE dd5ir2j6frrtr0 SET vacuum_cost_page_dirty = 25;
ALTER DATABASE
=> ALTER DATABASE dd5ir2j6frrtr0 SET vacuum_cost_page_miss = 7;
ALTER DATABASE
=> ALTER DATABASE dd5ir2j6frrtr0 SET vacuum_cost_page_hit = 0;
ALTER DATABASE
The cost limit determines how much “cost” (in terms of I/O operations) autovacuum can accrue before being forced to take a break. The cost delay determines how long that break is (in milliseconds). These settings affect both autovacuum and manual vacuum (autovacuum-only variants do exist, but they can only be set on a per-table basis on Heroku Postgres at this time). The cost limit is set to 200 by default. Increasing the cost limit (up to 1000 or so) or adjusting the vacuum_cost_page_* parameters can help the autovacuum progress more efficiently.
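As a rough back-of-the-envelope model, these settings bound vacuum throughput: the worker accrues cost until it hits the limit, then sleeps for the cost delay. The sketch below is a simplification that ignores the time spent on actual I/O, and assumes the stock defaults of a 2 ms autovacuum cost delay and a dirty-page cost of 20:

```python
# Rough model of cost-based vacuum throttling: after accruing cost_limit
# "cost units", the worker sleeps cost_delay_ms, so the sustained budget
# is roughly cost_limit / cost_delay in cost units per unit time.
def pages_per_second(cost_limit, cost_delay_ms, cost_per_page):
    """Upper bound on pages processed per second, ignoring real I/O time."""
    budget_per_second = cost_limit * (1000 / cost_delay_ms)
    return budget_per_second / cost_per_page

# Default cost limit of 200, 2 ms delay, dirty-page cost of 20:
print(pages_per_second(200, 2, 20))  # 5000.0 dirtied pages/s at most
# After raising the limit to 300, as in the ALTER DATABASE example above:
print(pages_per_second(300, 2, 20))  # 7500.0
```

This is why a busy table that dirties pages faster than the budget allows can accumulate bloat even with autovacuum running.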
Manual Vacuuming
If your database happens to have a very periodic workload, it can be more efficient to use a simple worker process to “manually” run a VACUUM (or even VACUUM FULL, if the locking isn’t an issue) and trigger it with a tool like Heroku Scheduler during off-peak hours.
A manual VACUUM doesn’t have a threshold for when it “kicks in”: it’s always triggered by running the VACUUM command. The cost-based back-off (as with autovacuum) still applies, but it’s turned off by default (vacuum_cost_delay is set to 0). You can increase this value on a per-database basis if you find that a manual VACUUM has too much impact on your regular workload.
To run VACUUM, open a psql shell to the desired database and type the command:
$ heroku pg:psql
=> VACUUM;
WARNING: skipping "pg_authid" --- only superuser can vacuum it
WARNING: skipping "pg_database" --- only superuser can vacuum it
WARNING: skipping "pg_tablespace" --- only superuser can vacuum it
WARNING: skipping "pg_pltemplate" --- only superuser can vacuum it
WARNING: skipping "pg_auth_members" --- only superuser can vacuum it
WARNING: skipping "pg_shdepend" --- only superuser can vacuum it
WARNING: skipping "pg_shdescription" --- only superuser can vacuum it
WARNING: skipping "pg_db_role_setting" --- only superuser can vacuum it
VACUUM
The warnings you see are expected and can be ignored. You can also restrict VACUUM to a particular table, if only one or two need manual vacuuming:
$ heroku pg:psql
=> VACUUM users;
VACUUM
When running VACUUM, you can add the VERBOSE keyword to get more details about its progress.
$ heroku pg:psql
d7lrq1eg4otc3i=> VACUUM VERBOSE;
INFO: vacuuming "public.reviews"
INFO: index "reviews_pkey" now contains 0 row versions in 1 pages
DETAIL: 0 index row versions were removed.
0 index pages have been deleted, 0 are currently reusable.
CPU 0.00s/0.00u sec elapsed 0.00 sec.
INFO: index "reviews_user_index" now contains 0 row versions in 1 pages
DETAIL: 0 index row versions were removed.
0 index pages have been deleted, 0 are currently reusable.
CPU 0.00s/0.00u sec elapsed 0.00 sec.
INFO: "reviews": found 0 removable, 0 nonremovable row versions in 0 out of 0 pages
DETAIL: 0 dead row versions cannot be removed yet.
There were 0 unused item pointers.
0 pages are entirely empty.
CPU 0.00s/0.00u sec elapsed 0.00 sec.
...
VACUUM
With carefully managed autovacuum settings, manual vacuuming is rarely necessary, but it’s important to understand how it works.