Understanding Heroku Postgres Data Caching
Last updated April 01, 2024
Data caching in Postgres isn’t preallocated or guaranteed. Instead, it’s only estimated and varies widely depending on your workload. Heroku Postgres plans have a certain amount of system RAM, much of which is used for caching, but users can see slightly better or worse caching in their databases. A well-designed application serves more than 99% of queries from cache. This article provides an overview of how Postgres caches.
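To see how close your database comes to that target, you can compute the cache hit ratio from Postgres's built-in statistics. A minimal sketch using the standard `pg_stat_database` view, run from a `pg:psql` session:

```sql
-- Fraction of block reads served from cache rather than disk,
-- aggregated across all databases on the instance.
SELECT sum(blks_hit)::float8 / nullif(sum(blks_hit) + sum(blks_read), 0)
       AS cache_hit_ratio
FROM pg_stat_database;
```

A ratio above 0.99 suggests your working set fits comfortably in memory; a markedly lower value can indicate that your workload would benefit from a plan with more RAM.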
How Does PostgreSQL Cache Data?
While Postgres does have a few settings that directly affect memory allocation for caching, most of the cache that Postgres uses is provided by the underlying operating system. Postgres, unlike most other database systems, makes aggressive use of the operating system’s page cache for a large number of operations.
As an example, consider a server provisioned with 7.5 GB of total system memory. Of these 7.5 GB, a small portion is used by the operating system kernel, and a smaller portion still by other programs, including Postgres itself. The rest, typically between 80% and 95% of system memory, is used by the operating system to cache data.
The memory footprint of a Heroku Postgres instance's operating system and other running programs is roughly 500 MB on average, a mostly fixed cost regardless of plan size. Postgres allocates the remaining memory in a few distinct ways, but the majority is typically left for the operating system to manage.
Shared Buffer Cache
Postgres manages a “Shared Buffer Cache”, which it allocates and uses internally to keep data and indexes in memory. This allocation is configured to be about 15-25% of the total system memory for a server running a dedicated Postgres instance, such as all Heroku Postgres instances. The rest of the available memory is used for two purposes: caching your on-disk data and indexes via the operating system page cache, and Postgres’s own internal operations and data structures.
The allocation for the shared buffer cache depends on the database plan.
| Plan | Allocation |
| --- | --- |
| standard-0, premium-0, private-0, shield-0 | 1 GB |
| standard-2, premium-2, private-2, shield-2 | 2 GB |
| standard-3, premium-3, private-3, shield-3 | 4 GB |
| standard-4, premium-4, private-4, shield-4 | 8 GB |
| standard-5, premium-5, private-5, shield-5 | 9 GB |
| standard-6, premium-6, private-6, shield-6 | 19 GB |
| premium-l-6, private-l-6, shield-l-6 | 19 GB |
| premium-xl-6, private-xl-6, shield-xl-6 | 19 GB |
| standard-7, premium-7, private-7, shield-7 | 39 GB |
| standard-8, premium-8, private-8, shield-8 | 78 GB |
| standard-9, premium-9, private-9, shield-9 | 117 GB |
| premium-l-9, private-l-9, shield-l-9 | 117 GB |
| premium-xl-9, private-xl-9, shield-xl-9 | 117 GB |
| standard-10, premium-10, private-10, shield-10 | 157 GB |
To check the `shared_buffers` value for your database, open a `pg:psql` session and run:

```sql
SHOW shared_buffers;
```
In-Memory Cache
Data that has been recently written to or read from disk passes through the operating system page cache and is therefore cached in memory. Subsequent reads of that data are served from the cache, reducing block device I/O operations and consequently increasing throughput. However, a few Postgres operations also consume this memory, evicting cached pages in the process.
The most noteworthy are internal operations performed to fulfill queries, such as in-memory quicksorts, hash tables for joins and aggregations, and `GROUP BY` operations.
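You can observe these operations with `EXPLAIN (ANALYZE)`, which reports whether a sort ran in memory or spilled to disk. A sketch, assuming a hypothetical `events` table:

```sql
-- "Sort Method: quicksort  Memory: ..." means the sort fit in work_mem;
-- "Sort Method: external merge  Disk: ..." means it spilled to temp files.
EXPLAIN (ANALYZE, BUFFERS)
SELECT * FROM events ORDER BY created_at;
```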
work_mem
Every sort, hash, or similar operation in a query can use a certain amount of memory dictated by a Postgres configuration setting called `work_mem`. The `work_mem` setting in PostgreSQL sets the base maximum amount of memory to be used by a query operation, such as a sort or hash table, before writing to temporary disk files.
Each of these operations has the potential to use memory that would otherwise be used for data and index caching. However, these operations are also a form of caching in the sense that they avoid having to read the same information from disk to do their work.
The default configuration for `work_mem` depends on the database plan.
| Plan | Default Configuration |
| --- | --- |
| standard-0, premium-0, private-0, shield-0 | 8 MB |
| standard-2, premium-2, private-2, shield-2 | 8 MB |
| standard-3, premium-3, private-3, shield-3 | 8 MB |
| standard-4, premium-4, private-4, shield-4 | 16 MB |
| standard-5, premium-5, private-5, shield-5 | 32 MB |
| standard-6, premium-6, private-6, shield-6 | 64 MB |
| premium-l-6, private-l-6, shield-l-6 | 64 MB |
| premium-xl-6, private-xl-6, shield-xl-6 | 64 MB |
| standard-7, premium-7, private-7, shield-7 | 128 MB |
| standard-8, premium-8, private-8, shield-8 | 256 MB |
| standard-9, premium-9, private-9, shield-9 | 256 MB |
| premium-l-9, private-l-9, shield-l-9 | 256 MB |
| premium-xl-9, private-xl-9, shield-xl-9 | 256 MB |
| standard-10, premium-10, private-10, shield-10 | 256 MB |
To check the `work_mem` value for your database, open a `pg:psql` session and run:

```sql
SHOW work_mem;
```
You can set a custom `work_mem` value in the following ways:

- Per transaction: `SET LOCAL work_mem = '100 MB';`
- Per session: `SET work_mem = '100 MB';`
- Per role: `ALTER ROLE username SET work_mem = '100 MB';`
- At the database level: `ALTER DATABASE database_name SET work_mem = '100 MB';`
Changing the `work_mem` setting on a per-role basis or at the database level only applies the change to new connections.
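To inspect which per-role or per-database overrides are currently in effect, you can query the `pg_db_role_setting` system catalog. A sketch:

```sql
-- Lists settings overridden per role and/or per database.
SELECT d.datname, r.rolname, s.setconfig
FROM pg_db_role_setting s
LEFT JOIN pg_database d ON d.oid = s.setdatabase
LEFT JOIN pg_roles r ON r.oid = s.setrole;
```

A `NULL` in `datname` or `rolname` means the override applies to all databases or all roles, respectively.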
VACUUM and DDL

There are other operations requiring memory, such as running `VACUUM` manually or via the autovacuum daemon, as well as all DDL operations. Among DDL, creating indexes tends to consume large amounts of memory, which also temporarily reduces the memory available for data and index caches.
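`VACUUM` and index creation are governed by a separate setting, `maintenance_work_mem`, which you can inspect and adjust the same way as `work_mem`. A sketch, assuming a hypothetical `events` table and index name:

```sql
SHOW maintenance_work_mem;

-- Temporarily raise it in this session for a large index build.
-- SET (not SET LOCAL) is used because CREATE INDEX CONCURRENTLY
-- can't run inside a transaction block.
SET maintenance_work_mem = '512MB';
CREATE INDEX CONCURRENTLY idx_events_created_at ON events (created_at);
```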
Heroku Postgres plans vary primarily by the amount of system RAM available. The best way to understand what plan is best for your workload is to try them.
What Does Having a Cold Cache Mean?

If you experience a service disruption on your production-tier Heroku Postgres database, you can receive a message saying that when your database comes back online, it has a “cold cache”. This message means the underlying hardware was affected, and as a result your database comes back online on a new host where no data has been cached yet. To mitigate cold cache issues on a new Postgres instance, see this Knowledge Base article.
If you periodically send reads to your follower, its cache is already partially warm, reducing the time it takes for cache performance to return to normal levels.
If you have a follower, you can promote it when you see this message instead of waiting for your database to become available on the new host.
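Another way to warm a cold cache proactively, where the extension is available, is `pg_prewarm`, which loads a relation's blocks into memory on demand. A sketch, assuming a hypothetical `events` table:

```sql
CREATE EXTENSION IF NOT EXISTS pg_prewarm;

-- Load the table's blocks into the cache; returns the number
-- of blocks prewarmed.
SELECT pg_prewarm('events');
```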