Last updated December 12, 2025
The Heroku Data team continuously ensures that Heroku Postgres databases are configured optimally for web application workloads. You can further improve the performance of your Heroku Postgres database by understanding your specific workload and tuning accordingly.
Choose the Right Heroku Postgres Plan
Choosing the right plan depends on the unique usage characteristics of your app as well as your organization’s availability and uptime expectations. For more information, see Choosing the Right Heroku Postgres Plan.
Essential-tier databases have variable performance characteristics by design. If your application requires predictable performance and availability, upgrade your database plan.
Track Metrics
The first step in understanding the performance of your database is to track vital metrics that give you an overall view of system health. Heroku emits a number of system metrics to your application logs. Make sure these metrics are tracked and alerted on appropriately; one easy way to do that is to install a monitoring add-on that records this data over time.
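You can also spot-check these metrics from the command line. The following is a minimal sketch assuming the standard Heroku CLI and a placeholder app name; the metrics lines are emitted by the heroku-postgres process as sample# key-value pairs.

    # Follow the log stream and keep only the Postgres metrics lines
    # (assumes a Unix-like shell where grep is available)
    heroku logs --tail --app your-app-name | grep heroku-postgres

Samples such as sample#active-connections, sample#load-avg-1m, and sample#index-cache-hit-rate map directly to the connection, load, and caching concerns covered below.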
Beware of Connections
With metrics in place, keep a close eye on connection count. A common issue is leaked database connections, which degrade the performance of your database. Some frameworks have ways to control connections (for example, ActiveRecord), but if your connection count is steadily approaching your plan limit, also consider using connection pooling.
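To see where you stand, you can query Postgres directly, for example via heroku pg:psql. This sketch uses the standard pg_stat_activity view:

    -- Count connections grouped by state (active, idle, idle in transaction, ...)
    SELECT state, count(*)
    FROM pg_stat_activity
    GROUP BY state
    ORDER BY count(*) DESC;

A large and growing number of idle connections is a typical sign of a leak; a steadily climbing total suggests it's time for pooling.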
Identify and Fix Expensive Queries
The biggest cause of database performance issues is under-optimized queries that put significant pressure on the database system. To identify these queries, look at the Expensive Queries view for your database. Additionally, Heroku configures Postgres to log queries that take longer than 2 seconds, so keep an eye on your application logs as well. After you identify expensive queries, you can often optimize them by adding an appropriate index, although sometimes you must adjust the database schema to achieve optimal queries. Always use EXPLAIN to understand how individual queries are executed and which indexes to add or remove.
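As a sketch, suppose an expensive query filters a hypothetical orders table by customer_id. EXPLAIN ANALYZE shows whether Postgres sequentially scans the whole table, and an index usually fixes that:

    -- Show the actual execution plan, with timings
    EXPLAIN ANALYZE
    SELECT * FROM orders WHERE customer_id = 42;

    -- Add an index without blocking concurrent writes to the table
    CREATE INDEX CONCURRENTLY index_orders_on_customer_id
        ON orders (customer_id);

Rerun the EXPLAIN ANALYZE afterward to confirm the plan switched from a sequential scan to an index scan.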
Long-running queries can also affect database performance because they hold locks or otherwise overconsume server resources. You can automatically abort long-running queries at the session level by running SET statement_timeout = '20s' on each connection, or database-wide for all statements by running ALTER DATABASE your-database-name SET statement_timeout = '20s'. The database-wide setting affects all connections, including connections from analytics tools such as Dataclips.
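For quick reference, here's a minimal sketch of both variants, using a placeholder database name; note that the database-wide setting only applies to connections opened after the change:

    -- Per-session: abort statements in this connection after 20 seconds
    SET statement_timeout = '20s';

    -- Database-wide: applies to all new connections, including Dataclips
    ALTER DATABASE "your-database-name" SET statement_timeout = '20s';

    -- Verify the value in the current session, or undo the database-wide setting
    SHOW statement_timeout;
    ALTER DATABASE "your-database-name" RESET statement_timeout;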
Watch Out for Locks
Concurrent writes to the same rows cause Postgres to lock one transaction while the other completes. A large number of locks can give the impression that database response times have degraded, but the locked transactions aren't doing any work at all while they wait for the appropriate lock to be granted. In some cases, a database migration with ALTER TABLE statements seems to take a long time when it's merely waiting for its turn to run.
To identify locks, run the heroku pg:locks command available from the pg-extras plugin. Heroku also reports the number of locks in the database's metrics logs on your log stream; keep an eye on this figure with a monitoring add-on.
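If you prefer to inspect locks from SQL, here's a minimal sketch using the built-in pg_blocking_pids() function (available since Postgres 9.6) to list the sessions that are currently blocked and which process IDs are blocking them:

    -- Show blocked sessions alongside the PIDs holding the locks they wait on
    SELECT pid,
           pg_blocking_pids(pid) AS blocked_by,
           state,
           query
    FROM pg_stat_activity
    WHERE cardinality(pg_blocking_pids(pid)) > 0;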
Manage Database Bloat
Postgres uses MVCC to enable concurrent access to data. Under MVCC, updated and deleted rows leave behind dead tuples, also known as bloat, which must be VACUUMed away for the database to perform optimally. Postgres includes an autovacuum daemon, but you might have to configure more aggressive autovacuum settings for your workload. Alternatively, you can run VACUUM ANALYZE manually during times of least traffic to clean up bloat and refresh database statistics.
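To see which tables carry the most bloat, the standard pg_stat_user_tables view tracks approximate dead-tuple counts. The table name in the maintenance commands below is a hypothetical example:

    -- Spot the tables with the most dead tuples
    SELECT relname, n_live_tup, n_dead_tup, last_autovacuum
    FROM pg_stat_user_tables
    ORDER BY n_dead_tup DESC
    LIMIT 10;

    -- Make autovacuum more aggressive for one hot table:
    -- trigger it at roughly 5% dead rows instead of the 20% default
    ALTER TABLE orders SET (autovacuum_vacuum_scale_factor = 0.05);

    -- Or clean up and refresh planner statistics manually during low traffic
    VACUUM ANALYZE orders;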
Consider Separate Caching
Postgres is good at caching your data, but some applications benefit from caching in a separate in-memory system such as Heroku Key-Value Store. Consider this approach if certain queries are run frequently and exhibit variable response time, which is indicative of caching effects.
Run Diagnostics
The heroku pg:diagnose command flags issues that Heroku Postgres users commonly run into. It's a quick way to check whether your database is hitting any common operational problems.
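Running it takes a single command; the app name below is a placeholder:

    heroku pg:diagnose --app your-app-name

The report flags findings by severity, covering areas discussed above such as connection count, long-running queries, bloat, and cache hit rate.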