Apache Kafka on Heroku Add-on Migration
Last updated January 25, 2021
Table of Contents
Scaling up or down between plan levels of Apache Kafka on Heroku is normally seamless and performed in-place. However, there are a few circumstances when actual data migration is required. This document provides an overview of those conditions and the applicable processes.
When do I need to migrate between Kafka add-ons?
There are three cases when migrating between Kafka add-ons is necessary:
- You have a multi-tenant Kafka (Kafka Basic) add-on and you want to start using a dedicated Kafka add-on.
- You have a dedicated Kafka add-on and you want to start using a multi-tenant Kafka (Kafka Basic) add-on.
- You have a Beta multi-tenant Kafka (Kafka Basic) add-on and the cluster that hosts the add-on is reaching end-of-life.
How do I handle the migration?
In many scenarios, your application might be able to enter a maintenance window and migrate to a new add-on without modifying your application’s code. In general, we recommend entering a maintenance window if you can, because it drastically reduces the complexity of the migration and does not require significant changes to your app.
If your application cannot enter a maintenance window, you need to migrate to a new add-on by double-writing to both sets of topics, and cutting over from the old add-on to the new one after the new add-on has received writes for a time period longer than your retention time.
How do I migrate between add-ons while in a maintenance window?
The high-level steps for migrating during a maintenance window are:
- Provision the new add-on with all relevant topics and consumer groups.
- Enter your maintenance window.
- Stop your Kafka producers.
- Ensure your Kafka consumers are fully caught up.
- Switch over to the new add-on.
- Start your Kafka producers and consumers.
- Exit your maintenance window.
First, provision the new add-on and create all relevant topics and consumer groups on it:

$ heroku addons:create heroku-kafka:basic-0 --as NEW_KAFKA -a mackerel
$ heroku kafka:topics:create my-topic-name NEW_KAFKA -a mackerel
$ heroku kafka:consumer-groups:create my-group-name NEW_KAFKA -a mackerel

Enter your maintenance window and stop your Kafka producers:

$ heroku maintenance:on -a mackerel
$ heroku ps:scale producer=0 -a mackerel

Once your consumers are fully caught up on the old add-on, stop them as well:

$ heroku ps:scale consumer=0 -a mackerel

Now switch over to the new add-on. In this example, the old add-on is named kafka-symmetrical-26061 and the new add-on is named kafka-parallel-2019. Attach the old add-on under a temporary name, then attach the new add-on as KAFKA:

$ heroku addons:attach kafka-symmetrical-26061 --as OLD_KAFKA -a mackerel
$ heroku addons:attach kafka-parallel-2019 --as KAFKA -a mackerel

Start your producers and consumers again, and exit your maintenance window:

$ heroku ps:scale producer=1 consumer=1 -a mackerel
$ heroku maintenance:off -a mackerel

Once you have verified that everything works against the new add-on, destroy the old one:

$ heroku addons:destroy kafka-symmetrical-26061 -a mackerel
How do I migrate between add-ons without entering a maintenance window?
The high-level steps for migrating without entering a maintenance window are:
- Prepare your app for double-write.
- Provision the new Kafka add-on with all relevant topics and consumer groups.
- Double-write to both the old and the new add-ons.
- Wait for the new add-on to contain the same historical data as the old add-on.
- Stop producing to the old add-on.
- Destroy the old add-on.
These steps are described in greater detail below.
Step 1: Prepare your app for double-write
Your app needs to support two sets of Kafka config vars (one for each add-on).
This example uses KAFKA_URL, KAFKA_CLIENT_CERT, KAFKA_CLIENT_CERT_KEY, and KAFKA_TRUSTED_CERT for the old Kafka add-on before double-writing begins, and for the new Kafka add-on after double-writing begins.
It uses OLD_KAFKA_URL, OLD_KAFKA_CLIENT_CERT, OLD_KAFKA_CLIENT_CERT_KEY, and OLD_KAFKA_TRUSTED_CERT for the old Kafka add-on after double-writing begins. This set of config vars exists only while double-writing is taking place.
Two additional config vars are required, which tell producers and consumers where to write to and read from:
- PRODUCER_ADDON_NAMES is used by producers to discover which add-on(s) to write to.
- CONSUMER_ADDON_NAME is used by consumers to discover which add-on to read from.
You need to add support to your app for:
- Producing to all add-ons specified in PRODUCER_ADDON_NAMES.
- Consuming from the add-on specified in CONSUMER_ADDON_NAME.
Consumers should handle duplicate messages idempotently. For more information on this, please see the article on robust usage of Apache Kafka on Heroku.
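As a minimal sketch of this preparation step (not official Heroku code), your app could resolve the producer and consumer add-ons from the environment like this. The function names are hypothetical; only the config var names come from this article:

```python
import os

def producer_addon_names(env=os.environ):
    """Return the add-on attachment names to produce to,
    e.g. ["OLD_KAFKA", "KAFKA"] while double-writing."""
    return [n.strip() for n in env["PRODUCER_ADDON_NAMES"].split(",") if n.strip()]

def consumer_addon_name(env=os.environ):
    """Return the single add-on attachment name to consume from."""
    return env["CONSUMER_ADDON_NAME"]

def kafka_settings_for(addon, env=os.environ):
    """Collect the connection config vars for one attachment.

    For the attachment named KAFKA the vars are KAFKA_URL,
    KAFKA_CLIENT_CERT, and so on; for the attachment named OLD_KAFKA
    they are OLD_KAFKA_URL, OLD_KAFKA_CLIENT_CERT, and so on.
    """
    return {
        "url": env[f"{addon}_URL"],
        "client_cert": env[f"{addon}_CLIENT_CERT"],
        "client_cert_key": env[f"{addon}_CLIENT_CERT_KEY"],
        "trusted_cert": env[f"{addon}_TRUSTED_CERT"],
    }
```

Your producer code would build one Kafka client per name returned by `producer_addon_names()`, and your consumer code a single client for `consumer_addon_name()`, so that switching add-ons later is purely a config change.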
Step 2: Provision the new add-on
Before provisioning the new add-on, attach your existing Kafka add-on with a new name in preparation:
$ heroku addons:attach kafka-symmetrical-26061 --as OLD_KAFKA -a mackerel
$ heroku addons:create heroku-kafka:basic-0 --as KAFKA -a mackerel
Step 3: Create topics and consumer groups on the new add-on
Get a list of topics and consumer groups from your old add-on:
$ heroku kafka:topics OLD_KAFKA -a mackerel
$ heroku kafka:consumer-groups OLD_KAFKA -a mackerel
Now, you can create those topics and consumer groups on your new add-on:
$ heroku kafka:topics:create my-topic-name KAFKA -a mackerel
$ heroku kafka:consumer-groups:create my-group-name KAFKA -a mackerel
Step 4: Double-write to the old and new add-ons
Your app should produce to both sets of topics and consume from the old add-on’s topics while the new add-on’s topics fill with data:
$ heroku config:set PRODUCER_ADDON_NAMES=OLD_KAFKA,KAFKA -a mackerel
$ heroku config:set CONSUMER_ADDON_NAME=OLD_KAFKA -a mackerel
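The double-write itself might look like the following sketch. Here `producers` is an assumed mapping from attachment name to a Kafka client object with a `send(topic, message)` method; in a real app each client would be built from that add-on's config vars using a Kafka library of your choice:

```python
import os

def produce_to_all(producers, topic, message, env=os.environ):
    """Send one message to the same topic on every add-on listed in
    PRODUCER_ADDON_NAMES, so both clusters receive identical writes."""
    for name in env["PRODUCER_ADDON_NAMES"].split(","):
        producers[name.strip()].send(topic, message)
```

Because every message is written to both add-ons, consumers can keep reading from the old add-on while the new add-on's topics fill with the same data.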
Step 5: Wait for the new add-on to contain enough historical data
After the new add-on has been receiving writes for longer than your retention time, both add-ons should represent the same data. This means you can switch your consumers from the old add-on to the new add-on:
$ heroku config:set CONSUMER_ADDON_NAME=KAFKA -a mackerel
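The cutover condition above can be expressed as a simple check. This is an illustrative sketch only; the timestamp you record when double-writing begins and your topics' retention period are inputs you must track yourself:

```python
from datetime import datetime, timedelta, timezone

def safe_to_cut_over(double_write_started_at, retention, now=None):
    """True once the new add-on has been receiving writes for longer
    than the retention time, meaning every message still retained by
    the old add-on also exists on the new one."""
    now = now or datetime.now(timezone.utc)
    return now - double_write_started_at > retention
```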
Step 6: Stop producing to the old add-on
When you are comfortable consuming from the new add-on, you can stop producing to the old add-on:
$ heroku config:set PRODUCER_ADDON_NAMES=KAFKA -a mackerel
Step 7: Destroy the old add-on
Because your app is no longer consuming from the old add-on, it is safe to destroy it:
$ heroku addons:destroy OLD_KAFKA -a mackerel