Multi-Tenant Apache Kafka on Heroku

Last updated November 29, 2022

Table of Contents

  • Basic plans
  • Differences to dedicated Kafka plans
  • Basic and the dashboard
  • Maintenance
  • Multi-Tenancy

Apache Kafka on Heroku is an add-on that provides Kafka as a service, with full integration into the Heroku platform. This article describes Heroku’s multi-tenant Kafka Basic plans, which offer a more accessible entry point relative to dedicated cluster plans.

The primary Kafka Dev Center article covers core Kafka concepts, including the setup of a Kafka development environment.

With Basic plans, your Heroku app shares a Kafka cluster with other Heroku apps, and each app has secure, exclusive access to its own set of topics distributed across the cluster.

Basic plans are optimized for the following use cases:

  • Experimentation and prototyping. Many developers need an option that allows them to learn Kafka, and to experiment with its behavior to assess whether it’s a good option for their application architecture.
  • Development and testing. Basic instances are provisioned quickly, which makes them well suited for development and testing environments.
  • Lower-capacity production use. Basic plans can provide sufficient capacity for smaller production use cases that don’t require a dedicated cluster.

See this section on preparing your development environment if it’s your first time using Apache Kafka on Heroku.

Basic plans

Heroku offers the following Basic plans:

Plan Name   Cluster Produce Quota (KB/s)   Cluster Consume Quota (KB/s)   Capacity (GB)   Max Partitions Available
basic-0     64                             256                            4               240
basic-1     512                            2048                           32              480
basic-2     4096                           16384                          64              960

These plans vary primarily on the following dimensions:

  • Produce and consume quotas (throughput rate)
  • Capacity (the total amount of data that can be retained at a time)
  • The maximum number of partitions available to the add-on

If these parameters don't fit your usage needs, let us know what those needs are, and also consider the dedicated cluster Kafka plans.

Common defaults for Basic plans:

  • Max Number of Partitions per Topic: 256
  • Max Number of Topics per Add-On: 40
  • Min (and Default) Retention: 1 Day
  • Max Message Size: 1 MB
  • Max Retention: 7 Days
  • Default Number of Partitions: 8
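
For example, you can create a topic that stays within these defaults from the Heroku CLI (the topic name here is a placeholder; run heroku kafka:topics:create --help to confirm the flags your CLI version supports):

$ heroku kafka:topics:create my-topic --partitions 8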

The Basic plans are available in the Common Runtime (US and EU), and in the Private Space regions that overlap with those Common Runtime locations (Virginia and Dublin). When provisioned from a Private Space, Basic plans are still multi-tenant and aren't themselves within the Private Space.

See the documentation on Migrating between Basic and dedicated plan types.

Throughput quotas

When a Basic add-on is provisioned, it receives a set of produce and consume quotas (throughput rate limits), which are based on the plan level.

If the add-on exceeds its capacity limit, its produce and consume quotas are reduced to throttle throughput and bring stored data back within plan limits. After the add-on is back within its capacity limit, the baseline quotas for the plan are restored.
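
You can keep an eye on the add-on's current state from the Heroku CLI; for example (the exact fields reported vary by plan and CLI version):

$ heroku kafka:info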

Quotas are enforced on a cluster-wide basis. Because of this, each Kafka broker is allocated one-eighth of your add-on’s total quota. Therefore, to saturate the quota allocated to your add-on, a topic must have at least eight partitions, with roughly even throughput across those partitions.
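
For example, on the basic-1 plan the 512 KB/s produce quota works out to roughly 64 KB/s per broker, so reaching the full 512 KB/s requires at least eight partitions spread across all eight brokers, each carrying about 64 KB/s.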

Advanced users may find it valuable to inspect the throttling metadata provided in the ProduceResponse to avoid timeouts.
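
One way to observe this from application code is the confluent-kafka Python client's throttle_cb configuration callback, which surfaces the throttle time reported by the broker. A minimal sketch, assuming that client and a placeholder broker address (Heroku connection settings are sketched in the Multi-Tenancy section below):

from confluent_kafka import Producer

def on_throttle(event):
    # event is a confluent_kafka.ThrottleEvent; throttle_time is how long
    # (in seconds) the broker delayed this client's requests to enforce a quota.
    print(f"throttled {event.throttle_time:.3f}s by broker {event.broker_id}")

producer = Producer({
    "bootstrap.servers": "localhost:9092",  # placeholder, not a Heroku endpoint
    "throttle_cb": on_throttle,
})

producer.produce("my-topic", b"payload")  # hypothetical topic
producer.poll(0)   # callbacks, including throttle_cb, fire from poll()/flush()
producer.flush()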

Limitations on topic configuration

When using a Basic plan, all topics have time-based retention enabled. This applies even to topics that have compaction enabled (both retention rules apply).

Differences to dedicated Kafka plans

Connecting: Kafka prefix

The process for connecting to a multi-tenant Kafka cluster is virtually the same as the one for connecting to a dedicated cluster, with one exception: all Kafka Basic topics and consumer groups begin with a unique prefix that is associated with your add-on. This prefix is accessible via the KAFKA_PREFIX config var.

When configuring your Kafka client, read the prefix dynamically in your code rather than hardcoding it. This ensures that no code changes are needed when deploying to different environments with different add-ons. The value of the KAFKA_PREFIX config var can change, and must be treated accordingly.

Without the prefix, consumers won't receive messages, and errors like Broker: Topic authorization failed or Broker: Group authorization failed appear in Kafka debug events.
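
A minimal sketch of that pattern in Python (the topic and group names are hypothetical; the prefix value includes its trailing separator, so plain concatenation suffices):

import os

# Read the prefix at startup; its value is managed by the add-on and can
# change, so never hardcode it or bake it into configuration files.
prefix = os.environ["KAFKA_PREFIX"]

topic = prefix + "app-events"              # hypothetical topic name
group_id = prefix + "app-events-workers"   # hypothetical consumer group name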

Consumer groups

Kafka contains primitives for both independent consumers and consumer groups. Consumer groups aren't required to use Kafka, but using them with Basic plans requires an additional step: consumer groups must be created before use, and they're prefixed (using KAFKA_PREFIX) in the same manner as topics:

$ heroku kafka:consumer-groups:create my-consumer-group

Your application code must use the prefixed version of the consumer group name. Non-prefixed references to the consumer group won’t work.

You can investigate which consumer groups are active with the following command:

$ heroku kafka:consumer-groups

You can delete a consumer group if you no longer need it:

$ heroku kafka:consumer-groups:destroy my-consumer-group
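
Putting the prefix rules together, here's a hedged sketch of a consumer using the group created above, assuming the confluent-kafka Python client (the broker address is a placeholder; see the Multi-Tenancy section below for a connection sketch):

import os
from confluent_kafka import Consumer

prefix = os.environ["KAFKA_PREFIX"]

consumer = Consumer({
    "bootstrap.servers": "localhost:9092",     # placeholder, not a Heroku endpoint
    "group.id": prefix + "my-consumer-group",  # the prefixed name, never the bare one
    "auto.offset.reset": "earliest",
})
consumer.subscribe([prefix + "app-events"])    # topics carry the same prefix

try:
    while True:
        msg = consumer.poll(1.0)
        if msg is None:
            continue
        if msg.error():
            print(f"consumer error: {msg.error()}")
            continue
        print(f"{msg.topic()}[{msg.partition()}]: {msg.value()}")
finally:
    consumer.close()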

Unavailable features

The following commands aren’t available in Basic plans:

  • heroku kafka:fail
  • heroku kafka:upgrade

Basic and the dashboard

Basic plans provide insights via the dashboard available at data.heroku.com. You can view metrics for your add-ons, create topics, and delete topics.

Maintenance

Heroku performs regular maintenance on the multi-tenant Kafka clusters to update and refresh the underlying software. These maintenance windows are largely transparent to your application, typically involving a few seconds of increased latency or errors when contacting individual brokers. However, some applications are more sensitive to this kind of maintenance. For more on building robust applications, see this article.
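
As one hedged example of that kind of robustness, a producer can be configured to retry through brief broker restarts. The settings below are illustrative starting points to tune, not recommendations, and assume the confluent-kafka Python client:

from confluent_kafka import Producer

producer = Producer({
    "bootstrap.servers": "localhost:9092",  # placeholder, not a Heroku endpoint
    # Ride out short broker restarts during maintenance windows:
    "retries": 10,                # retry transient produce failures
    "retry.backoff.ms": 500,      # pause between retry attempts
    "message.timeout.ms": 60000,  # overall bound on delivery attempts
})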

Multi-Tenancy

Each Basic add-on shares a Kafka cluster with a number of other tenants. Each tenant is provided its own CA and client certificates (through which data is encrypted in transit), along with a set of ACLs that grant access to only the set of topics in the namespace designated by the add-on's KAFKA_PREFIX. Data is encrypted at rest at the volume level.
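
For completeness, here's a hedged sketch of connecting over TLS from Python with the confluent-kafka client, using the add-on's standard config vars (KAFKA_URL, KAFKA_TRUSTED_CERT, KAFKA_CLIENT_CERT, and KAFKA_CLIENT_CERT_KEY). Writing the PEM material to files is one common approach, since the underlying librdkafka library loads certificates from disk:

import os
import tempfile
from confluent_kafka import Producer

def write_pem(contents: str) -> str:
    # Persist a PEM-formatted config var to a file librdkafka can read.
    f = tempfile.NamedTemporaryFile(suffix=".pem", delete=False)
    f.write(contents.encode("utf-8"))
    f.close()
    return f.name

# KAFKA_URL looks like "kafka+ssl://host1:9096,kafka+ssl://host2:9096";
# strip the scheme from each URL to get a bootstrap list the client understands.
brokers = ",".join(u.split("://", 1)[-1] for u in os.environ["KAFKA_URL"].split(","))

producer = Producer({
    "bootstrap.servers": brokers,
    "security.protocol": "ssl",
    "ssl.ca.location": write_pem(os.environ["KAFKA_TRUSTED_CERT"]),
    "ssl.certificate.location": write_pem(os.environ["KAFKA_CLIENT_CERT"]),
    "ssl.key.location": write_pem(os.environ["KAFKA_CLIENT_CERT_KEY"]),
})

producer.produce(os.environ["KAFKA_PREFIX"] + "app-events",  # hypothetical topic
                 b"hello from a multi-tenant cluster")
producer.flush()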

Keep reading

  • Apache Kafka on Heroku
