Understanding Heroku Postgres Log Statements and Common Errors
Last updated 21 June 2016
Table of Contents
- LOG: duration: 3.565 s …
- LOG: checkpoint starting…
- LOG: could not receive data from client: Connection reset by peer
- LOG: unexpected EOF on client connection
- FATAL: too many connections for role
- FATAL: could not receive data …
- FATAL: role "role-name"…
- FATAL: terminating connection due to administrator command
- FATAL: remaining connection slots are reserved for non-replication superuser connections
- FATAL: no pg_hba.conf entry for host "…", user "u…", database "d…", SSL off
- temporary file: path "file path", size "file size"
- PGError: permission denied for relation
- PGError: operator does not exist
- PGError: relation "table-name" does not exist
- PGError: column "column-name" cannot…
- PGError: SSL SYSCALL error: EOF detected
- PGError: prepared statement "a30" already exists
- This database does not support forking and following
- Out of memory errors
Heroku Postgres logs to the Logplex, which collates and publishes your application's log stream. You can isolate Heroku Postgres events with the heroku logs command by filtering for the postgres process.
Logs are a production-tier feature. They are not available on hobby-tier databases.
$ heroku logs -p postgres -t
2012-11-01T17:41:42+00:00 app[postgres]: [15521-1] [CHARCOAL] LOG: checkpoint starting: time
2012-11-01T17:41:43+00:00 app[postgres]: [15522-1] [CHARCOAL] LOG: checkpoint complete: wrote 6 buffers (0.0%); 0 transaction log file(s) added, 0 rem...
Besides seeing system-level Postgres activity, these logs are also useful for understanding your application’s use of Postgres and for diagnosing common errors. This article lists common log statements, their purpose, and any action that should be taken.
LOG: duration: 3.565 s …
[12-1] u8akd9ajka [BRONZE] LOG: duration: 3.847 s statement: SELECT "articles".* FROM "articles"...
Queries taking longer than 2 seconds are logged so they can be identified and optimized. Although small numbers of these long-running queries will not adversely affect application performance, a large quantity may.
Ideally, frequently used queries should be optimized to require less than 10 ms to execute. Queries are typically optimized by adding indexes to avoid sequential scans of the database. Use EXPLAIN to diagnose queries.
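As a sketch, EXPLAIN will show whether a query performs a sequential scan, and an index on the filtered column often eliminates it (the articles table and author_id column here are illustrative, not from your schema):

```sql
-- Inspect the query plan; "Seq Scan on articles" signals a full table scan
EXPLAIN SELECT * FROM articles WHERE author_id = 42;

-- An index on the filtered column lets Postgres use an index scan instead
CREATE INDEX index_articles_on_author_id ON articles (author_id);
```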
LOG: checkpoint starting…
2012-11-01T17:41:42+00:00 app[postgres]: [15521-1] [CHARCOAL] LOG: checkpoint starting: time
2012-11-01T17:41:43+00:00 app[postgres]: [15522-1] [CHARCOAL] LOG: checkpoint complete: wrote 6 buffers (0.0%); 0 transaction log file(s) added, 0 rem...
LOG: checkpoint starting and the corresponding LOG: checkpoint complete statements are part of Postgres's Write-Ahead Logging (WAL) functionality. Postgres automatically puts a checkpoint in the transaction log every so often; see the PostgreSQL documentation on WAL for more information.
These statements are part of normal operation and no action is required.
LOG: could not receive data from client: Connection reset by peer
LOG: unexpected EOF on client connection
app[postgres]: LOG: could not receive data from client: Connection reset by peer
app[postgres]: LOG: unexpected EOF on client connection
heroku[router]: at=error code=H13 desc="Connection closed without response" method=GET path=/crash host=pgeof.herokuapp.com dyno=web.1 connect=1ms service=10ms status=503 bytes=0
heroku[web.1]: Process exited with status 1
heroku[web.1]: State changed from up to crashed
Although this log is emitted from Postgres, the cause for the error has nothing to do with the database itself. Your application happened to crash while connected to Postgres, and did not clean up its connection to the database. Postgres noticed that the client (your application) disappeared without ending the connection properly, and logged a message saying so.
If you are not seeing your application’s backtrace, you may need to ensure that you are, in fact, logging to stdout (instead of a file) and that you have stdout sync’d.
FATAL: too many connections for role
FATAL: too many connections for role "[role name]"
This occurs on hobby-tier (hobby-dev and hobby-basic) plans, which have a limit of 20 connections per role. To resolve this error, close some connections to your database by stopping background workers, reducing the number of dynos, or restarting your application in case it has leaked connections over time. See the Dev Center article on concurrency and database connections in Rails for further discussion.
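You can see how many connections each role currently holds from a psql session (for example via heroku pg:psql); this is a sketch, and on hobby-tier plans you will only see your own role's connections:

```sql
-- Count open connections grouped by role
SELECT usename, count(*)
FROM pg_stat_activity
GROUP BY usename;
```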
FATAL: could not receive data …
FATAL: could not receive data from WAL stream: SSL error: sslv3 alert unexpected message
Replication from a primary database to a follower was interrupted either because of a transient network error or because SSL failed to renegotiate. This is a transient problem and Postgres should automatically recover.
You can find out the current number of commits a follower is behind by using
heroku pg:info. Each follower has a “Behind By” entry that indicates how many commits the follower is behind its master.
$ heroku pg:info --app sushi
=== HEROKU_POSTGRESQL_WHITE
...
Following    HEROKU_POSTGRESQL_LAVENDER (DATABASE_URL)
Behind By    125 commits
FATAL: role "role-name"…
FATAL: role "u8akd9ajka" is not permitted to log in (PG::Error)
This occurs when you have de-provisioned a hobby-tier database but are still trying to connect to it. To resolve:

- If required, provision a new database with heroku addons:create heroku-postgresql.
- Run heroku pg:promote HEROKU_POSTGRESQL_<new-database-color> to promote it, making it the primary database for your application.
FATAL: terminating connection due to administrator command
FATAL: terminating connection due to administrator command
This message indicates that a backend connection was terminated. This can happen when a user runs heroku pg:kill from the command line client, or runs SELECT pg_terminate_backend(pid); from a psql session.
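As a sketch, you can locate a backend's process id in pg_stat_activity and terminate it yourself; the terminated client then sees this same log line (the pid shown below is hypothetical):

```sql
-- Find candidate backends and their current queries
SELECT pid, state, query FROM pg_stat_activity;

-- Terminate one of them; its client sees
-- "terminating connection due to administrator command"
SELECT pg_terminate_backend(12345);
```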
FATAL: remaining connection slots are reserved for non-replication superuser connections
FATAL: remaining connection slots are reserved for non-replication superuser connections
Each database plan has a maximum number of allowed connections, which varies by plan. This message indicates that you have reached the maximum allowed for your plan; the remaining connection slots are reserved for superuser access (restricted to Heroku Postgres staff). See Heroku Postgres Production Tier Technical Characterization for details on connection limits for a given plan.
FATAL: no pg_hba.conf entry for host "…", user "u…", database "d…", SSL off
You are attempting to establish a database connection without SSL, but SSL is required for all connections. See the documentation for your Postgres driver on how to establish encrypted connections.
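With drivers built on libpq, SSL can usually be requested through the sslmode parameter of the connection URL; a sketch with placeholder credentials and hostname:

```
postgres://user:password@host:5432/dbname?sslmode=require
```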
temporary file: path "file path", size "file size"
temporary file: path "base/pgsql_tmp/pgsql_tmp23058.672", size 1073741824
We configure Postgres to log temporary file names and sizes when the size exceeds 10240 kilobytes. Temporary files can be created when performing sorts or hashes, or for temporary query results, and a log entry is made for each file when it is deleted. This log entry is informational only; however, a query that creates a large number of temporary files is spilling work to disk, which hurts performance, and is a good candidate for optimization.
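EXPLAIN ANALYZE can confirm that a sort is spilling to disk; an "external merge" sort method in the plan output corresponds to these temporary files (the events table here is illustrative):

```sql
-- "Sort Method: external merge  Disk: ..." indicates a temp-file sort;
-- "Sort Method: quicksort  Memory: ..." means it fit in work_mem
EXPLAIN ANALYZE SELECT * FROM events ORDER BY created_at;
```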
PGError: permission denied for relation
PGError: ERROR: permission denied for relation table-name
Heroku Postgres hobby tier databases have row limits enforced. When you are over your row limit and attempt to insert data you will see this error. Upgrade to a production tier database to remove this constraint or reduce the number of total rows.
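heroku pg:info reports your current row count; as a rough in-database check, you can also sum the planner's live-row estimates (an approximation, not an exact count):

```sql
-- Approximate live rows across all user tables
SELECT sum(n_live_tup) AS estimated_rows
FROM pg_stat_user_tables;
```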
PGError: operator does not exist
PGError: ERROR: operator does not exist: character varying = integer
Postgres is stricter about data types than MySQL or SQLite. Postgres checks and throws errors when an operator is applied to an unsupported data type. For instance, you can't compare strings with integers without casting.
Make sure the operator is adequate for the data type or that the necessary type casts have been applied.
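For example, comparing a varchar column to an integer fails, while matching the types on either side succeeds (the users table and zip_code column are hypothetical):

```sql
-- Fails: operator does not exist: character varying = integer
-- SELECT * FROM users WHERE zip_code = 90210;

-- Works: compare against a string literal...
SELECT * FROM users WHERE zip_code = '90210';

-- ...or cast the column explicitly
SELECT * FROM users WHERE zip_code::integer = 90210;
```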
PGError: relation "table-name" does not exist
PGError: ERROR: relation "documents" does not exist
This is the standard message displayed by Postgres when a table doesn't exist, meaning your query references a table that is not in the database.
Make sure your migrations ran normally, and that you’re referencing a table that exists.
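From a psql session (heroku pg:psql) you can list the tables that actually exist; a sketch:

```sql
-- List tables in the default schema
SELECT table_name
FROM information_schema.tables
WHERE table_schema = 'public';
```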
PGError: column "column-name" cannot…
PGError: ERROR: column "verified_at" cannot be cast to type "date"
This occurs when Postgres doesn’t know how to cast all the row values in that table to the specified type. Most likely it means you have an integer or a string in that column.
Inspect all affected column values and manually remove or translate values that can’t be converted to the required type.
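If the existing values can be converted, a USING expression in the ALTER TABLE statement tells Postgres how to cast each row; a sketch using the column from the error above (the table name is hypothetical):

```sql
-- Convert each existing value explicitly during the type change
ALTER TABLE users
  ALTER COLUMN verified_at TYPE date
  USING verified_at::date;
```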
PGError: SSL SYSCALL error: EOF detected
Errors with similar root causes include:
- no connection to the server
- SSL error: decryption failed or bad record mac
- could not receive data from server: Connection timed out
These errors indicate a client side violation of the wire protocol. This happens for one of two reasons:
- The Postgres connection is shared between more than one process or thread. Typical offenders are Resque workers or Unicorn. Be sure to correctly establish the PG connection after the fork or thread has initialized to resolve this issue.
- Abrupt client (application side) disconnections. This can happen for many reasons, from your app crashing, to transient network availability. When your app tries to issue a query again against Postgres, the connection is just gone, leading to a crash. When Heroku detects a crash, we kill that dyno and start a new one, which re-establishes the connection.
PGError: prepared statement "a30" already exists
This is similar to the above. There is no protocol violation, but the client is mistakenly trying to set up a prepared statement with the same name as an existing one without cleaning up the original. (The name of the prepared statement in the error will, of course, vary).
This is also typically caused by a Postgres connection shared improperly between more than one process or thread.
This database does not support forking and following
Some older Ronin and Fugu databases provisioned on a 32-bit processor architecture don't support forking and following to current plans, all of which are 64-bit. If you have one of these databases, you will see an error message such as this:
$ heroku addons:create heroku-postgresql:ika --follow HEROKU_POSTGRESQL_RED
----> Adding heroku-postgresql:ika to sushi... failed
 !    This database does not support forking and following to the ika plan.
 !    Please see http://devcenter.heroku.com/articles/unsupported-fork-follow
Your database is fine and is still supported. However, if you'd like to use the fork or follow feature, you will need to first create a fresh database with PG Backups, from which you can then fork or follow.
Out of memory errors
Out of memory (OOM) errors typically happen when the server that is running the database cannot allocate any more memory to the database connections or its cache. Any number of issues could manifest themselves as an OOM error. Before bumping up your Heroku Postgres plan to a higher level, explore all areas of diagnosing the problem.
Extremely complex querying
Joins, sorting, and hash-based operations put pressure on the working memory (work_mem) available to Postgres, and for any given database, a high number of concurrent connections can exacerbate the problem. Sort operations result from ORDER BY, DISTINCT, and merge joins. Hash-based operations are typically the result of processing IN subqueries, hash-based aggregations, and hash joins. The more complex the query, the more of these operations it is likely to use. Using the explain plan of your query or the Heroku Postgres expensive queries functionality is a good first step to understanding the bottlenecks of your queries.
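EXPLAIN makes these operations visible as Sort, HashAggregate, and Hash Join nodes in the plan (the orders and customers tables here are illustrative):

```sql
-- The plan for this query will typically include a Hash Join for the
-- JOIN, a HashAggregate for GROUP BY, and a Sort for ORDER BY
EXPLAIN
SELECT customers.name, count(*)
FROM orders
JOIN customers ON customers.id = orders.customer_id
GROUP BY customers.name
ORDER BY count(*) DESC;
```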
Ruby, ActiveRecord and prepared statements
Prepared statements are not, in and of themselves, a bad mechanism to use with Postgres; their benefit is that the database can execute the statement with very high efficiency. Unfortunately, many ORMs in popular web frameworks, notably ActiveRecord in the Ruby on Rails framework, do not construct prepared statements effectively. If a constructed query can contain a variable number of parameters, multiple prepared statements will be created for what is logically the same query. For example, assume that our application allows users to select products from a products table by id. If customer one selects two products, ActiveRecord defines a query and creates a prepared statement:
SELECT * FROM products WHERE id in ($1, $2);
If customer two, in the same application, for the same query, decides to select four products instead of two, a new prepared statement would be created:
SELECT * FROM products WHERE id in ($1, $2, $3, $4);
The problem with this code path is that the number of items in the IN clause varies with what each customer selects, and ActiveRecord parameterizes the query based on that count. This can result in too many prepared statements being cached, ultimately using too much memory on the system. If this is happening to your application, consider disabling prepared statements.
To disable prepared statements, update your config/database.yml file to include the prepared_statements: false setting:

production:
  adapter: postgresql
  prepared_statements: false
Disabling prepared statements in Rails 3 requires the use of initializers. Create a file called
config/initializers/disable_prepared_statements.rb. Within the file, disable prepared statements by updating the connection options:
ActiveRecord::Base.establish_connection(
  ActiveRecord::Base.remove_connection.merge(
    :prepared_statements => false
  )
)
Too many connections
While Heroku Postgres has connection limits based on plan type, they're meant as a guideline. Each connection within Postgres consumes some RAM, and opening too many at any given time can itself cause problems for the database. If you run into this situation, use a production-grade connection pooler so that repeatedly opening and closing connections doesn't add memory pressure.
Database plan is too small
In some cases, the database plan selected is simply too small for the workload placed on it. If all other avenues for clearing up OOM errors have been explored, the primary database might need to be upgraded to the next plan level. Well-engineered applications serve most query results from the cache that Postgres manages; generally, if the cache hit ratio starts to suffer for the queries being executed, a higher plan should be considered.