Choosing the Right Heroku Postgres Plan

Last Updated: 13 May 2015



Heroku Postgres offers a wide spectrum of plans appropriate for everything from personal blogs all the way to large-dataset and high-transaction applications. Choosing the right plan depends on the unique usage characteristics of your app as well as your organization’s availability and uptime expectations.

Plan tiers

If you’re on one of our legacy plans, you can continue to provision and use them; details can be found in Heroku Postgres Legacy Plans. However, we strongly recommend that you use a current production tier plan, as current plans are more performant than legacy plans.

Heroku Postgres’s many plans are segmented into four broad tiers. While each tier has a few differences, the key factor in each is the uptime expectation for your database. The four tiers are:

  • Hobby tier: for apps that can tolerate up to 4 hours of downtime
  • Standard tier: for apps that can tolerate up to 1 hour of downtime
  • Premium tier: for apps that can tolerate up to 15 minutes of downtime
  • Enterprise tier: for apps where an SLA is needed

All uptime expectations are given based on a 30-day month.

For a full breakdown of the differences between tiers:

Heroku Postgres tier  Downtime tolerance  Fork  Follow  Rollback  HA   Disk encryption  SLA
Hobby                 < 4 hr per mo.      No    No      No        No   No               No
Standard              < 1 hr per mo.      Yes   Yes     1 hour    No   No               No
Premium               < 15 min per mo.    Yes   Yes     1 week    Yes  Yes              No
Enterprise            < 15 min per mo.    Yes   Yes     1 month   Yes  Yes              Yes

Shared features

All tiers share the following features:

  • Fully managed database service with automatic health checks
  • Write-ahead log (WAL) off-premise storage every 60 seconds, ensuring minimal data loss in case of catastrophic failure
  • Daily logical database backups with PG Backups (optional but free)
  • Dataclips for easy and secure sharing of data and queries
  • SSL-protected psql/libpq access
  • Running unmodified Postgres 9.3 (see Heroku Postgres Legacy Plans for 9.0, 9.1 and 9.2)
  • Postgres extensions
  • A full-featured web UI

Hobby tier

The hobby tier, which includes the hobby-dev and hobby-basic plans, has the following limitations:

  • Enforced row limits of 10,000 rows for hobby-dev and 10,000,000 for hobby-basic plans
  • Max of 20 connections
  • No in-memory cache: The lack of an in-memory cache limits performance, since data cannot be served from low-latency memory.
  • No fork/follow support: Fork and follow, used to create replica databases and master-slave setups, are not supported.
  • No Expensive Queries support.
  • Expected uptime of 99.5% each month.
  • Unannounced maintenances and database upgrades.
  • No Postgres logs.
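
One way to see how many of a plan’s available connections are in use is to count the rows in pg_stat_activity (a sketch; each connected backend appears as one row, and some backends may be hidden from non-superusers):

```sql
-- Count open connections against the plan's connection limit
-- (20 on hobby tier plans). Each client backend is one row.
SELECT count(*) AS open_connections
FROM pg_stat_activity;
```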

Row limit enforcement

When you are over the hobby tier row limits and try to insert, you will see a Postgres error:

permission denied for relation <table name>

The row limits of the hobby tier database plans are enforced with the following mechanism:

  1. When a hobby-dev database hits 7,000 rows, or a hobby-basic database hits 7 million rows, the owner receives a warning email stating they are nearing their row limits.
  2. When the database exceeds its row capacity, the owner will receive an additional notification. At this point, the database will receive a 7 day grace period to either reduce the number of records, or migrate to another plan.
  3. If the number of rows still exceeds the plan capacity after 7 days, INSERT privileges will be revoked on the database. Data can still be read, updated, or deleted from the database. This ensures that users retain access to their data and still have the ability to bring their database into compliance.
  4. Once the number of rows is again in compliance with the plan limit, INSERT privileges are automatically restored to the database. Note that the database sizes are checked asynchronously, so it may take a few minutes for the privileges to be restored.
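
To see how close a database is to its row limit, one option is to sum the statistics collector’s live-row estimates (a sketch; n_live_tup is an approximation that depends on recent ANALYZE runs, not an exact count):

```sql
-- Approximate total rows across all user tables.
SELECT sum(n_live_tup) AS estimated_rows
FROM pg_stat_user_tables;
```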

Standard tier

The Standard tier is designed for production applications that, while uptime is important, can tolerate up to 1 hour of downtime in a given month. All Standard tier databases include:

  • No row limitations
  • Increasing amounts of in-memory cache
  • Fork and follow support
  • Rollback (up to 1 hour)
  • Database metrics published to application log stream for further analysis

Within the Standard tier, plans differ in memory, connection limits, and storage limits. The plans for the Standard tier are:

Plan name  Provisioning name             Cache size  Storage limit  Conn. limit  Monthly price
Standard0  heroku-postgresql:standard-0  1 GB        64 GB          120          $50
Standard2  heroku-postgresql:standard-2  3.5 GB      256 GB         400          $200
Standard4  heroku-postgresql:standard-4  15 GB       512 GB         500          $750
Standard6  heroku-postgresql:standard-6  60 GB       1 TB           500          $2000
Standard7  heroku-postgresql:standard-7  120 GB      1 TB           500          $3500

Premium tier

The Premium tier is designed for production applications that, while uptime is important, can tolerate up to 15 minutes of downtime in a given month. All Premium tier databases include:

  • No row limitations
  • Increasing amounts of in-memory cache
  • Fork and follow support
  • Rollback
  • Database metrics published to application log stream for further analysis
  • Priority service restoration on disruptions
  • Automatic encryption-at-rest of all data written to disk.

Within the Premium tier, plans differ in memory, connection limits, and storage limits. The plans for the Premium tier are:

Plan name  Provisioning name            Cache size  Storage limit  Conn. limit  Monthly price
Premium0   heroku-postgresql:premium-0  1 GB        64 GB          120          $200
Premium2   heroku-postgresql:premium-2  3.5 GB      256 GB         400          $350
Premium4   heroku-postgresql:premium-4  15 GB       512 GB         500          $1200
Premium6   heroku-postgresql:premium-6  60 GB       1 TB           500          $3500
Premium7   heroku-postgresql:premium-7  120 GB      1 TB           500          $6000

Cache size

Each production tier plan’s RAM size constitutes the total amount of system memory on the underlying instance’s hardware, most of which is given to Postgres and used for caching. While a small amount of RAM is used for managing each connection and other tasks, Postgres will take advantage of almost all of this RAM for its cache. Learn more about how this works in this article.

Postgres constantly manages the cache of your data: rows you’ve written, indexes you’ve made, and metadata Postgres keeps. When the data needed for a query is entirely in that cache, performance is very fast. Queries made from cached data are often 100-1000x faster than from the full data set.

In a well-engineered, high-performance web application, 99% or more of queries should be served from cache.

Conversely, having to fall back to disk is at least an order of magnitude slower. Additionally, columns with large data types (e.g. large text columns) are stored out-of-line via TOAST, and accessing large amounts of TOASTed data can be slow.
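
To gauge how much of a table lives out-of-line, you can compare its heap size against its total size. This is a sketch; "items" is a hypothetical table name:

```sql
-- pg_relation_size counts only the main heap;
-- pg_total_relation_size also includes TOAST data and indexes.
SELECT pg_size_pretty(pg_relation_size('items'))       AS heap_size,
       pg_size_pretty(pg_total_relation_size('items')) AS total_size;
```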

General guidelines

Access patterns vary greatly from application to application. Many applications access only a small, recently changed portion of their overall data. Postgres can keep that portion in cache as time goes on, and as a result these applications perform well on smaller plans.

Other applications which frequently access all of their data don’t have that luxury and can see dramatic increases in performance by ensuring that their entire dataset fits in memory. To determine the total size of your dataset use the heroku pg:info command and look for the Data Size row:

$ heroku pg:info
Plan:        Crane
Status:      available
Data Size:   9.4 MB

Though a crude measure, choosing a plan with at least as much in-memory cache as the size of your total dataset will ensure high cache hit ratios. However, you will eventually reach the point where your data exceeds the largest plan’s cache, and you will have to shard. Plan ahead for sharding: it takes a long time to execute a sharding strategy.
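
The same figure can also be fetched directly in SQL; the on-disk size reported by pg_database_size should closely match the Data Size row from pg:info:

```sql
-- On-disk size of the current database, formatted for humans.
SELECT pg_size_pretty(pg_database_size(current_database()));
```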

Determining required cache-size

There is no substitute for observing the database demands of your application with live traffic to determine the appropriate cache-size. Cache hit ratio should be in the 99%+ range. Uncommon queries should be less than 100ms and common ones less than 10ms.

This blog post includes a deeper discussion of Postgres performance concerns and techniques.

To measure the cache hit ratio for tables:

SELECT
    'cache hit rate' AS name,
    sum(heap_blks_hit) / (sum(heap_blks_hit) + sum(heap_blks_read)) AS ratio
FROM pg_statio_user_tables;

or the cache hit ratio for indexes:

SELECT
    'index hit rate' AS name,
    sum(idx_blks_hit) / sum(idx_blks_hit + idx_blks_read) AS ratio
FROM pg_statio_user_indexes;

You can also install the pg extras plugin and then simply run heroku pg:cache_hit.

Both queries should indicate a ratio near 0.99:

      name      |         ratio
----------------+------------------------
 cache hit rate | 0.99966041175571094090

When the cache hit ratio begins to decrease, upgrading your database will generally put it back in the green. The best way is to use the fast-changeover technique to move between plans, watch New Relic, and see what works best for your application’s access patterns.