Last updated 29 June 2017
Expensive queries are the most significant cause of performance issues on Heroku Postgres databases. Optimizing expensive queries can yield tremendous improvements to your application’s performance and overall response times.
Heroku Postgres monitors trending information for queries on a database, tracking the queries executed per minute, average time, and total I/O time.
Expensive queries are not supported on the Hobby tier.
To view the expensive queries for your database, navigate to your database list and scroll down to the “Expensive Queries” section. A graph of the trending information for a query can be viewed by clicking on the query. The timezone on the graph is your local browser time. One week of performance information is stored.
If the slow query uses bind params (`$1`) or query params (`?`), this page will not show you the values of those params. Performance can vary wildly between different values for the same query. To find the slowest params, Heroku outputs the slowest queries (those that take 2 seconds or longer) and their params to your application’s logs if you have slow query logs enabled. You can use a logging add-on to search for these slow queries.
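One way to surface these entries is to filter the application’s log stream for the slow-query marker. A minimal sketch, assuming the standard Heroku CLI (`your-app-name` is a placeholder for your application):

```shell
# Stream logs and keep only the slow-query lines
# (entries are emitted for queries taking 2 seconds or longer)
heroku logs --tail --app your-app-name | grep "duration:"
```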
```
Jun 26 08:49:40 issuetriage app/postgres.29339: [DATABASE] [41-1] LOG: duration: 2406.615 ms execute <unnamed>: SELECT "issues".* FROM "issues" WHERE "issues"."repo_id" = $1 AND "issues"."state" = $2 ORDER BY created_at DESC LIMIT $3 OFFSET $4
Jun 26 08:49:40 issuetriage app/postgres.29339: [DATABASE] [41-2] DETAIL: parameters: $1 = '1348', $2 = 'open', $3 = '20', $4 = '760'
```
This query took 2.4 seconds to execute, and you can see the parameters that were used: `1348` for the `repo_id`, `open` for the `state`, a limit of `20`, and an offset of `760`. You can use this information along with `EXPLAIN ANALYZE` to profile why a query is slow.
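For example, the parameter values from the log entry above can be substituted back into the query and profiled through `heroku pg:psql`. A sketch, assuming your CLI version supports the `-c`/`--command` flag for running a single statement:

```shell
# Re-run the slow query with the logged parameter values under EXPLAIN ANALYZE
heroku pg:psql -c "EXPLAIN ANALYZE
  SELECT \"issues\".* FROM \"issues\"
  WHERE \"issues\".\"repo_id\" = '1348' AND \"issues\".\"state\" = 'open'
  ORDER BY created_at DESC LIMIT 20 OFFSET 760"
```

In the resulting plan, a `Seq Scan` node on a large table suggests a missing index, and the per-node actual times show where the execution time is going.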
Causes of expensive queries
The most common causes of expensive queries are:
- Lack of indexes, causing slow lookups on large tables
- Unused indexes, causing slow writes
- Inefficient schema leading to bad queries
- Queries with inefficient designs
- Large database size or lock contention, causing slow `COPY` operations (usually used for logical backups)
Solutions to expensive queries
These guidelines can help fix expensive queries:
- Use `EXPLAIN ANALYZE` (through `heroku pg:psql`) to find out what is taking up most of the query’s execution time. A sequential scan on a large table is typically, but not always, a bad sign. Efficient indexes can improve query performance dramatically. Consider all Postgres techniques, such as partial indexes and others, when devising your index strategy.
- Look for unused indexes by running `heroku pg:diagnose`. Drop any that are not required.
- Upgrade your database to the latest version: Postgres performance improves with every release.
- For large databases, prefer relying on our continuous protection for day-to-day disaster recovery purposes. Remove any automatic `pg:backups` schedules and use `pg:backups` strictly for extracting or migrating data.
- For smaller databases, slow logical backups can be a result of lock contention.
- Run `pg:outliers` from Heroku pg-extras to find queries using a high proportion of execution time.
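The index- and outlier-related steps above can be sketched as follows. This assumes the heroku-pg-extras plugin, and the index name in the final command is purely illustrative:

```shell
# Check for unused indexes, bloat, and other common problems
heroku pg:diagnose

# Install pg-extras, then list the queries consuming the highest
# proportion of total execution time
heroku plugins:install heroku-pg-extras
heroku pg:outliers

# Drop an index that pg:diagnose reports as unused (hypothetical name)
heroku pg:psql -c "DROP INDEX index_issues_on_closed_at"
```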