Heroku Redis Technical Characterization
Last updated 09 September 2015
All the information in this document is subject to change as Heroku adapts the service to better handle customer workloads.
The Heroku Redis Hobby and Premium tier plans offer different performance characteristics based on their multitenancy, CPU, RAM, and I/O architectures. This article provides a technical description of how these plans are implemented, along with some of the performance characteristics of each.
The following table outlines our plans along with relevant specifications about the underlying hardware.
Premium plans are encrypted at rest using AES-256 block-level storage encryption.
Heroku Redis instances currently run on virtualized infrastructure provided by AWS EC2. Higher-level Heroku Redis plans benefit from greater resource isolation than lower-level plans.
There are two main variants of deployment architectures on Heroku Redis: multi-tenant and single-tenant.
For multi-tenant plans, several LXC containers are created within a single large instance. Each LXC container holds a single Redis service and provides security and resource isolation within the instance.
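Each tenant's Redis process reports its own resource usage through the standard Redis `INFO` command, which is one way to observe the memory limits of your container from a client. A minimal sketch of parsing an `INFO memory` reply follows; the sample values are made up for illustration, and in practice the reply would come from a client library rather than a hard-coded string:

```python
def parse_info(reply):
    """Parse the key:value lines of a Redis INFO reply into a dict."""
    info = {}
    for line in reply.splitlines():
        # Section headers in INFO output start with '#'; skip them.
        if line and not line.startswith("#"):
            key, _, value = line.partition(":")
            info[key] = value
    return info

# Illustrative INFO memory output (values are invented, not plan limits):
sample = "# Memory\r\nused_memory:1048576\r\nmaxmemory:26214400\r\n"
mem = parse_info(sample)
used_fraction = int(mem["used_memory"]) / int(mem["maxmemory"])
```

Watching `used_memory` against `maxmemory` this way is a simple check on how close an instance is to its plan's memory ceiling.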
Resource isolation and sharing on multi-tenant plans are imperfect, and perfectly fair resource distribution between tenants cannot be guaranteed under this architecture.
For single-tenant plans, a customer’s Redis instance and related management software are the sole residents of the instance, offering more predictable, less variable performance. However, virtualized infrastructure is still subject to some resource contention, and minor performance variations are expected.
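One way to observe this variability from a client is to time repeated round trips and compare percentiles rather than a single sample. The helper below is a generic, runnable sketch: in practice `op` would be something like a `PING` issued through a Redis client library (a hypothetical usage, not part of this service's tooling); a no-op stands in here so the sketch runs standalone.

```python
import time
import statistics

def measure_latency(op, samples=100):
    """Time repeated calls to op; return p50/p99 latency in milliseconds."""
    timings = []
    for _ in range(samples):
        start = time.perf_counter()
        op()
        timings.append((time.perf_counter() - start) * 1000)
    timings.sort()
    return {
        "p50": statistics.median(timings),
        "p99": timings[int(samples * 0.99) - 1],
    }

# No-op stand-in for a real round trip, e.g. lambda: client.ping()
result = measure_latency(lambda: None)
```

A large gap between p50 and p99 is the signature of the contention described above; it tends to be wider on multi-tenant plans than on single-tenant ones.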
Architecture, vCPU, RAM and I/O
All Heroku Redis plans run on 64-bit architectures, allowing Redis to address datasets larger than the 4 GB limit of 32-bit builds.
vCPUs are the number of virtual processors on the underlying instance. A larger number of vCPUs generally provides better performance on the virtual server or instance.
All instances are EBS-optimized, with attached EBS volumes using provisioned IOPS (PIOPS). IOPS measure how many I/O operations the underlying disks can perform per second; the number of IOPS provisioned for each plan determines its I/O throughput.
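As a back-of-the-envelope illustration, sustained disk throughput is roughly the provisioned IOPS multiplied by the size of each I/O operation. Both figures below are assumptions for illustration only: 16 KiB is a commonly cited I/O size for EBS provisioned-IOPS volumes, and the 1,000-IOPS value is a hypothetical provisioning level, not a plan specification.

```python
def max_throughput_mib_s(piops, io_size_kib=16):
    """Rough ceiling on throughput: IOPS times I/O size, in MiB/s.

    io_size_kib=16 is an assumed per-operation size; real workloads
    mix I/O sizes, so this is an estimate, not a guarantee.
    """
    return piops * io_size_kib / 1024

# Hypothetical volume provisioned with 1,000 IOPS:
estimate = max_throughput_mib_s(1000)  # about 15.6 MiB/s
```

The point of the arithmetic is that I/O-bound operations such as persistence writes scale with provisioned IOPS, which is why higher plans provision more of them.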