Expensive Queries

Last Updated: 13 May 2015

Expensive queries are the most significant cause of performance issues on Heroku Postgres databases. Optimizing expensive queries can yield tremendous improvements to your application’s performance and overall response times.

Heroku Postgres monitors trending information for queries on a database, tracking queries executed per minute, average execution time, and total I/O time.

Expensive Queries functionality is supported on Heroku Postgres database versions greater than 9.2. It is not available on the Hobby tier.

To view the expensive queries for your database, navigate to your database list and scroll down to the “Expensive Queries” section. Click a query to view a graph of its trending information. Times on the graph are shown in your local browser timezone. One week of performance information is retained.

Expensive Queries Screenshot

Causes of Expensive Queries

The most common causes of expensive queries are:

  • Lack of indexes, causing slow lookups on large tables
  • Unused indexes, which slow down INSERT, UPDATE, and DELETE operations
  • An inefficient schema, leading to badly performing queries
  • Queries with inefficient designs
  • Large database size or lock contention, causing slow COPY operations (usually used for logical backups)
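The Postgres statistics views can help pinpoint which of these causes applies. A minimal sketch, run via `heroku pg:psql`, that surfaces tables read mostly by sequential scans — a common hint that an index is missing (the threshold of 10 rows is an arbitrary illustration):

```sql
-- Tables with heavy sequential-scan activity: a large seq_tup_read
-- on a big table suggests lookups are missing an index.
SELECT relname,
       seq_scan,
       idx_scan,
       seq_tup_read / GREATEST(seq_scan, 1) AS avg_rows_per_seq_scan
FROM pg_stat_user_tables
ORDER BY seq_tup_read DESC
LIMIT 10;
```

Numbers here are cumulative since statistics were last reset, so compare tables against each other rather than reading any single value in isolation.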

Solutions to Expensive Queries

These guidelines may help fix expensive queries:

  • Run EXPLAIN ANALYZE (through pg:psql) to find out what is taking up most of the query’s execution time. A sequential scan on a large table is typically, but not always, a bad sign. Efficient indexes can improve query performance dramatically; consider all Postgres techniques, such as partial indexes, when devising your index strategy.
  • Look for unused indexes by running heroku pg:diagnose, and drop any that are not required.
  • Upgrade your database to the latest version: Postgres performance improves with every release.
  • For large databases, rely on our continuous protection for day-to-day disaster recovery. Remove any automated pg:backups schedules and use pg:backups strictly for extracting or migrating data.
  • For smaller databases, slow logical backups can be a result of lock contention.
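The first two guidelines above can be sketched as follows; the table and column names (`users`, `email`, `deleted_at`) are hypothetical stand-ins for your own schema:

```sql
-- Inspect the plan and the actual timings of a slow lookup.
EXPLAIN ANALYZE SELECT * FROM users WHERE email = 'alice@example.com';

-- A "Seq Scan on users" node dominating the actual time suggests
-- adding an index. CONCURRENTLY avoids locking writes while it builds.
CREATE INDEX CONCURRENTLY users_email_idx ON users (email);

-- A partial index is smaller and faster when queries always filter
-- on a fixed predicate, e.g. only rows that are not soft-deleted.
CREATE INDEX CONCURRENTLY users_active_email_idx
    ON users (email)
    WHERE deleted_at IS NULL;
```

Re-run EXPLAIN ANALYZE after creating the index to confirm the planner actually uses it; an index the planner ignores only adds write overhead.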
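Unused indexes can also be listed directly from the statistics views — a sketch of the kind of check heroku pg:diagnose performs, not its actual implementation:

```sql
-- Indexes never scanned since statistics were last reset.
-- Dropping them speeds up INSERT/UPDATE/DELETE, but verify first
-- that no infrequent job or report depends on them.
SELECT s.indexrelname AS index_name,
       pg_size_pretty(pg_relation_size(s.indexrelid)) AS index_size
FROM pg_stat_user_indexes s
JOIN pg_index i ON i.indexrelid = s.indexrelid
WHERE s.idx_scan = 0
  AND NOT i.indisunique    -- keep unique indexes: they enforce constraints
ORDER BY pg_relation_size(s.indexrelid) DESC;

-- Once confirmed safe (index name is illustrative):
-- DROP INDEX CONCURRENTLY some_unused_idx;
```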