Writing to Application Logs as an Add-on Partner
Last updated December 03, 2024
Add-on Partners can improve the development experience of their add-on by inserting data into the app’s log stream. This article explains how to connect your service to an app’s log stream via Logplex.
To read application logs as an add-on, see this documentation instead.
Writing to app logs is unsupported for Fir-generation apps. See Add-ons Generation Compatibility Guide for Partners for details.
Setup
To gain access to the app’s Logplex endpoint, which lets you write to the app’s log stream, you must first store the URL submitted in the add-on provision request. After you add the log_input value to your manifest’s requires field, the URL is available in the initial provision request and in the App Info API. See the Add-on Partner API Spec for more details of the provision request. If you didn’t previously store the log_input_url from the provision request, use the App Info API to query for it.
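For illustration, here’s a minimal Python sketch of that lookup. It assumes the Add-on Partner App Info API endpoint GET /vendor/apps/:id with HTTP basic auth; the credentials and app ID below are placeholders.

import requests

ADDON_USERNAME = "my-addon-name"   # placeholder: your add-on manifest id
ADDON_PASSWORD = "supersecret"     # placeholder: your add-on API password

def fetch_log_input_url(app_id):
    # Query the App Info API for the app and read its log_input_url.
    resp = requests.get(
        "https://api.heroku.com/vendor/apps/%s" % app_id,
        auth=(ADDON_USERNAME, ADDON_PASSWORD),
    )
    resp.raise_for_status()
    return resp.json()["log_input_url"]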
Transport
You must deliver data to Logplex via HTTP. Using keep-alive connections and dense payloads, you can efficiently deliver all of your customers’ logs to Logplex. Here’s a cURL example demonstrating a simple HTTP request:
$ URL="https://token:t.01234567-89ab-cdef-0123-456789abcdef@1.us.logplex.io/logs"
$ curl $URL \
    -d "89 <190>1 2013-03-27T20:02:24+00:00 01234567-89ab-cdef-0123-456789abcdef app procid - - foo89 <190>1 2013-03-27T20:02:24+00:00 01234567-89ab-cdef-0123-456789abcdef app procid - - bar" \
    -X "POST" \
    -H "Content-Length: 130" \
    -H "Content-Type: application/logplex-1" \
    -A 'MyAddonName (https://elements.heroku.com/addons/my-addon-name; Ruby/2.1.2)'
Make sure that you set a User-Agent header that includes a string that uniquely identifies your add-on.
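Here’s the same request as a Python sketch using the requests library; a Session reuses the underlying connection (keep-alive), and requests computes Content-Length from the body automatically. The URL and frame mirror the illustrative cURL example above.

import requests

# Placeholder URL mirroring the cURL example above.
URL = "https://token:t.01234567-89ab-cdef-0123-456789abcdef@1.us.logplex.io/logs"

session = requests.Session()  # keep-alive: reuses the TCP connection

packet = ("<190>1 2013-03-27T20:02:24+00:00 "
          "01234567-89ab-cdef-0123-456789abcdef app procid - - foo")
body = "%d %s" % (len(packet.encode("utf-8")), packet)  # octet-count framing

resp = session.post(
    URL,
    data=body,  # requests sets Content-Length for us
    headers={
        "Content-Type": "application/logplex-1",
        "User-Agent": "MyAddonName (https://elements.heroku.com/addons/my-addon-name; Python)",
    },
)
resp.raise_for_status()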
The URL in the example is purely illustrative. The URL varies from app to app and always has different credentials. Store these URLs for each installed add-on that requires log writing, or fetch them from the App Info API as needed.
The log_input_url field for a given app is subject to change, for instance, if credentials must be rotated. If your HTTP request gets a 4xx response code, use the App Info API to check whether the log_input_url for the app changed. If so, update your record for the app and retry.
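A delivery wrapper along these lines can handle that rotation; fetch_log_input_url and stored_urls are hypothetical stand-ins for your App Info API lookup and your persistence layer.

def deliver(session, app_id, body, stored_urls, fetch_log_input_url):
    # Post the payload; on a 4xx, refresh the stored log_input_url
    # via the App Info API and retry once.
    headers = {
        "Content-Type": "application/logplex-1",
        "User-Agent": "MyAddonName (https://elements.heroku.com/addons/my-addon-name; Python)",
    }
    resp = session.post(stored_urls[app_id], data=body, headers=headers)
    if 400 <= resp.status_code < 500:
        stored_urls[app_id] = fetch_log_input_url(app_id)  # credentials may have rotated
        resp = session.post(stored_urls[app_id], data=body, headers=headers)
    resp.raise_for_status()
    return resp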
Format
The headers of the request contain Content-Length and Content-Type. The value of Content-Length is the byte length of the body, as an integer. The value of Content-Type is application/logplex-1.
The body of the HTTP request contains length-delimited syslog packets. Syslog packets are defined in RFC 5424. The following line summarizes the packet format defined by the RFC:
<prival>version time hostname appname procid msgid structured-data msg
You can use <190> (local7.info) for the prival and 1 for the version.

The time field is set to when the log line was created, in RFC 3339 format.

The hostname is set to the ID of the app that the add-on is associated with.

The appname field is set to app.

Logplex doesn’t use the procid, but it’s a great way to identify which process is emitting the logs. For example, heroku-postgres.

Similarly, Logplex doesn’t use the msgid or structured-data fields; use the value - for both.

Finally, msg is the section of the packet where the log message is stored.
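Putting the fields together, a frame builder might look like this Python sketch; the app ID, procid, and message are placeholders.

from datetime import datetime, timezone

def build_frame(app_id, procid, msg):
    # RFC 3339 timestamp for when the log line was created.
    time = datetime.now(timezone.utc).isoformat()
    # <190> = local7.info, version 1; msgid and structured-data are "-".
    packet = "<190>1 %s %s app %s - - %s" % (time, app_id, procid, msg)
    # Prefix the packet with its byte length for octet-counted framing.
    return "%d %s" % (len(packet.encode("utf-8")), packet)

frame = build_frame("01234567-89ab-cdef-0123-456789abcdef",
                    "my-addon-name", "status=delivered")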
Message Conventions
Format your log messages to optimize for both human readability and machine parsability. Write messages that:

- Consist of a single line
- Use key-value pairs of the format status=delivered
- Use a source key-value pair in log lines to distinguish machines or environments. For example, source=us-east measure#web.latency=4ms.
- Show hierarchy with dots, from least to most specific, as in measure#queue.backlog=
- Units must immediately follow the number and must only include a-zA-Z. For example, 10ms.
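As a sketch, a small helper can render messages in this style; the keys and values shown are illustrative.

def format_line(pairs):
    # Render one single-line message of space-separated key=value pairs.
    return " ".join("%s=%s" % (key, value) for key, value in pairs)

line = format_line([
    ("source", "us-east"),             # distinguishes machines/environments
    ("measure#web.latency", "4ms"),    # unit immediately follows the number
    ("status", "delivered"),
])
# => "source=us-east measure#web.latency=4ms status=delivered"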
High-Frequency Events
These log events benefit from statistical aggregation by a log consumer:
measure#elb.latency.sample_count=67448s source=elb012345.us-east-1d
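For example, your service might emit one such line per event and leave the aggregation to the consumer. In this sketch, the latency value and source name are illustrative.

def log_request_latency(emit, latency_ms, source):
    # One measure# line per event; the log consumer aggregates.
    emit("measure#elb.latency=%dms source=%s" % (latency_ms, source))

log_request_latency(print, 4, "elb012345.us-east-1d")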
Pre-Aggregated Statistics
Periodically, your service can inject pre-aggregated metrics into a user’s log stream. The event can include the total number of database tables, active connections, and cache usage:
sample#tables=30 sample#active-connections=3
sample#cache-usage=72.94 sample#cache-keys=1073002
Our experience logging for Postgres suggests that once a minute is a reasonable frequency for reporting aggregate metrics. Any higher frequency is potentially too noisy or expensive for storage/analysis. Be mindful of this when choosing to periodically log metrics from your add-on.
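A once-a-minute reporting loop could look like this sketch; collect_stats is a hypothetical hook into your service’s internals, and emit stands in for your log-delivery path.

import time

def report_samples(emit, collect_stats):
    while True:
        stats = collect_stats()  # e.g. {"tables": 30, "active-connections": 3}
        emit(" ".join("sample#%s=%s" % (k, v) for k, v in stats.items()))
        time.sleep(60)  # once a minute keeps noise and storage cost down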
Incremental Counters
count#db.queries=2 count#db.snapshots=10
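A minimal sketch of accumulating counters in-process and flushing them as count# pairs, with hypothetical names:

from collections import Counter

counters = Counter()
counters["db.queries"] += 2
counters["db.snapshots"] += 10

def flush_counters(emit):
    # Emit and reset, e.g. "count#db.queries=2 count#db.snapshots=10".
    emit(" ".join("count#%s=%d" % (k, v) for k, v in counters.items()))
    counters.clear()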