Multi-Tenant Apache Kafka on Heroku
Last updated January 25, 2021
Apache Kafka on Heroku is an add-on that provides Kafka as a service, with full integration into the Heroku platform. This article describes Heroku’s multi-tenant Kafka Basic plans, which offer a more accessible entry point relative to dedicated cluster plans.
The primary Kafka Dev Center article covers core Kafka concepts, including the setup of a Kafka development environment.
With Basic plans, your Heroku app shares a Kafka cluster with other Heroku apps, and each app has secure, exclusive access to its own set of topics distributed across the cluster.
Basic plans are optimized for the following use cases:
- Experimentation and prototyping. Many developers need an option that allows them to learn Kafka, and to experiment with its behavior to assess whether it’s a good option for their application architecture.
- Development and testing. Basic instances are provisioned quickly, which makes them well suited for development and testing environments.
- Lower-capacity production use. Basic plans can provide sufficient capacity for smaller production use cases that do not require a dedicated cluster.
If this is your first time using Apache Kafka on Heroku, see the section on preparing your development environment before continuing.
Heroku offers the following Basic plans:
| Plan Name | Cluster Produce Quota (KB/s) | Cluster Consume Quota (KB/s) | Capacity (GB) | Max Partitions Available |
| --- | --- | --- | --- | --- |
These plans vary primarily on the following dimensions:
- Produce and consume quotas (throughput rate)
- Capacity (the total amount of data that can be retained at a time)
- The maximum number of partitions available to the add-on
If these parameters don't fit your usage needs, let us know what those needs are, and also consider the dedicated cluster Kafka plans.
Common defaults for Basic plans:
- Max Number of Partitions per Topic: 256
- Max Number of Topics per Add-On: 40
- Min (and Default) Retention: 1 Day
- Max Message Size: 1MB
- Max Retention: 7 Days
- Default Number of Partitions: 8
The Basic plans are available in the Common Runtime (US and EU), and in those Private Space Regions that overlap with those Common Runtime locations (Virginia and Dublin). Please note that when provisioned from a Private Space, the Basic plans are still multi-tenant, and are not themselves within the Private Space.
When a Basic add-on is provisioned, it receives a set of produce and consume quotas (throughput rate limits), which are based on the plan level.
If the capacity for the add-on is exceeded, the produce and consume quotas are reduced to throttle throughput, in order to bring capacity back in line with plan limits. Once the add-on is back within its capacity limit, the baseline quotas for the plan are restored.
Quotas are enforced on a cluster-wide basis. Because of this, each Kafka broker is allocated one eighth of your add-on’s total quota. Therefore, to saturate the quota allocated to your add-on, a topic must have at least eight partitions, with roughly even throughput across those partitions.
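The arithmetic above can be sketched as follows. This is a simplified model, not an exact description of broker behavior, and the quota figures used later are illustrative rather than any specific plan's values. It also assumes each partition lands on a distinct broker with evenly distributed throughput.

```python
BROKERS = 8  # each broker enforces 1/8 of the add-on's cluster-wide quota


def max_topic_throughput_kbps(cluster_quota_kbps: float, partitions: int) -> float:
    """Best-case produce rate for a single topic with evenly loaded partitions.

    A topic spread over fewer than 8 brokers can only use that many
    per-broker quota shares, so it cannot saturate the full quota.
    """
    per_broker_quota = cluster_quota_kbps / BROKERS
    usable_brokers = min(partitions, BROKERS)
    return per_broker_quota * usable_brokers
```

For example, with an illustrative 4,096 KB/s produce quota, a 2-partition topic tops out at 1,024 KB/s, while a topic with 8 (or more) evenly loaded partitions can reach the full 4,096 KB/s.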
Advanced users might find it valuable to inspect the throttling metadata provided in the ProduceResponse to avoid timeouts.
Limitations on topic configuration
When using a Basic plan, all topics have time-based retention enabled. This applies even to topics that have compaction enabled (both retention rules apply).
Differences to dedicated Kafka plans
Connecting: Kafka prefix
The process for connecting to a multi-tenant Kafka cluster is virtually the same as the one for connecting to a dedicated cluster, with one exception: all Kafka Basic topics and consumer groups begin with a unique prefix that is associated with your add-on.
This prefix is accessible via the KAFKA_PREFIX config var.
When configuring your Kafka client, read the prefix from the KAFKA_PREFIX config var at runtime rather than hardcoding it. This ensures that no code changes are necessary when deploying your code to different environments with different add-ons. The value of the KAFKA_PREFIX config var can change, and should be treated accordingly.
Without the prefix, consumers will not receive messages, and errors like Broker: Topic authorization failed or Broker: Group authorization failed may appear in Kafka debug events.
Kafka contains primitives for both independent consumers and consumer groups. Although consumer groups are not required to use Kafka, it is worth noting that an additional step is required in order to use consumer groups with Basic plans. Consumer groups must be created before use, and they are prefixed (using KAFKA_PREFIX) in the same manner as topics:
$ heroku kafka:consumer-groups:create my-consumer-group
Your application code must use the prefixed version of the consumer group name. Non-prefixed references to the consumer group will not work.
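As with topics, client code can derive the group name from the environment rather than hardcoding it. A minimal sketch in Python; the settings dict and its key names are illustrative only, since the exact option names depend on which Kafka client library you use:

```python
import os

# KAFKA_PREFIX is provided by the add-on; fall back to "" for local runs.
prefix = os.environ.get("KAFKA_PREFIX", "")

# Illustrative settings only -- real key names vary by client library.
# Both the consumer group and the topic names must carry the prefix.
consumer_settings = {
    "group_id": prefix + "my-consumer-group",
    "topics": [prefix + "interesting-topic"],
}
```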
You can investigate which consumer groups are active with the following command:
$ heroku kafka:consumer-groups
You can delete a consumer group if you no longer need it:
$ heroku kafka:consumer-groups:destroy my-consumer-group
The following commands are not available in Basic plans:
Basic and the dashboard
Basic plans provide insights via the dashboard available at data.heroku.com. You can view metrics for your add-ons, create topics, and delete topics.
Heroku performs regular maintenance on the multi-tenant Kafka clusters to update and refresh underlying software. These maintenance windows should be transparent to your application, involving a few seconds of increased latency and errors when contacting individual brokers. However, some applications might experience more issues with this kind of maintenance. For more on building robust applications, see this article.
Each Basic add-on shares a Kafka cluster with a number of other tenants. Each tenant is provided its own CA and client certificates (by which data is encrypted in transit), and a set of ACLs that provide secure access to only the set of topics designated by the namespace (the KAFKA_PREFIX associated with an add-on). Data is encrypted at rest at the volume level.
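On the client side, these certificates can be turned into a TLS setup along the following lines. This is a sketch using Python's standard ssl module; it assumes the add-on's KAFKA_TRUSTED_CERT, KAFKA_CLIENT_CERT, and KAFKA_CLIENT_CERT_KEY config vars (which hold PEM data) are set, and that your Kafka client accepts an SSLContext.

```python
import os
import ssl
import tempfile


def write_pem(pem_data: str, suffix: str) -> str:
    """Write PEM data from a config var to a temp file and return its path.

    Most TLS APIs expect file paths, while Heroku supplies the PEM
    contents directly in config vars.
    """
    f = tempfile.NamedTemporaryFile(mode="w", suffix=suffix, delete=False)
    f.write(pem_data)
    f.close()
    return f.name


def ssl_context_from_env() -> ssl.SSLContext:
    """Build a client TLS context from the add-on's config vars."""
    ctx = ssl.create_default_context(
        cafile=write_pem(os.environ["KAFKA_TRUSTED_CERT"], ".crt")
    )
    ctx.load_cert_chain(
        certfile=write_pem(os.environ["KAFKA_CLIENT_CERT"], ".crt"),
        keyfile=write_pem(os.environ["KAFKA_CLIENT_CERT_KEY"], ".key"),
    )
    return ctx
```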