Bucketeer
This add-on is operated by C2 Industries.
Use Amazon S3 from your Heroku application.
Last updated February 23, 2024
Adding Amazon S3 to your Heroku application is painless and simple with Bucketeer.
S3 is file storage for the internet. Enjoy the centralized management and billing of Heroku while leveraging the power of one of Amazon’s most popular and dependable services.
Bucketeer manages your S3 credentials and account. You have immediate, full access to your bucket: the power of Amazon and the simplicity of Heroku.
Provisioning the add-on
Bucketeer can be attached to a Heroku application via the CLI:
A list of all plans available can be found here.
$ heroku addons:create bucketeer:hobbyist
-----> Adding bucketeer:hobbyist to sharp-mountain-4005... done, v18 ($5/mo)
Choosing a Bucket Name
Plans above hobbyist can configure the name of the bucket on provisioning by including a bucket-name parameter.
$ heroku addons:create bucketeer:standard --bucket-name mycompany.com
-----> Adding bucketeer:standard to sharp-mountain-4005... done, v18 ($125/mo)
This is useful if you’re planning on hosting files directly from your bucket and want more canonical URLs.
Choosing a Region
Plans above hobbyist can configure the region of the bucket on provisioning:
$ heroku addons:create bucketeer:standard --region us-west-2
-----> Adding bucketeer:standard to sharp-mountain-4005... done, v18 ($125/mo)
This is useful if you’re planning on hosting files in a different AWS region than your Heroku app.
App Config
Once Bucketeer has been added, the following settings will be available in the app configuration:
BUCKETEER_AWS_ACCESS_KEY_ID
BUCKETEER_AWS_SECRET_ACCESS_KEY
BUCKETEER_AWS_REGION
BUCKETEER_BUCKET_NAME
This can be confirmed using the heroku config command.
$ heroku config | grep BUCKETEER
After installing Bucketeer, your application can interact with S3 using the provided IAM credentials. This allows you to use any Amazon client, including the AWS CLI, to interact with your bucket.
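For example, if you have the AWS CLI installed locally, you can point it at your bucket by exporting the standard AWS environment variables from the add-on’s config vars. This is only a sketch; any S3 client that reads AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, and AWS_DEFAULT_REGION can be wired up the same way, and the Using with the AWS CLI section below shows the equivalent setup with aws configure.
$ export AWS_ACCESS_KEY_ID="$(heroku config:get BUCKETEER_AWS_ACCESS_KEY_ID)"
$ export AWS_SECRET_ACCESS_KEY="$(heroku config:get BUCKETEER_AWS_SECRET_ACCESS_KEY)"
$ export AWS_DEFAULT_REGION="$(heroku config:get BUCKETEER_AWS_REGION)"
$ aws s3 ls "s3://$(heroku config:get BUCKETEER_BUCKET_NAME)"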
CORS Settings
Bucketeer credentials give you full access to modifying your bucket. This means that you can upload a new CORS policy at any time via an S3 client and the PUT Bucket CORS endpoint.
See example below.
Local setup
Environment setup
After provisioning the add-on it’s necessary to locally replicate the config vars so your development environment can operate against the service.
Use the Heroku Local command-line tool to configure, run and manage process types specified in your app’s Procfile. Heroku Local reads configuration variables from a .env file. To view all of your app’s config vars, type heroku config. Use the following command for each value that you want to add to your .env file.
$ heroku config:get BUCKETEER_AWS_ACCESS_KEY_ID -s >> .env
$ heroku config:get BUCKETEER_AWS_SECRET_ACCESS_KEY -s >> .env
$ heroku config:get BUCKETEER_BUCKET_NAME -s >> .env
Or in one go:
$ heroku config -s | grep BUCKETEER > .env
Credentials and other sensitive configuration values should not be committed to source control. In Git, exclude the .env file with: echo .env >> .gitignore.
For more information, see the Heroku Local article.
Service setup
There is nothing to set up after you have provisioned Bucketeer!
Using with the AWS CLI
The AWS CLI is a great way to interact with your bucket.
Once you’ve installed it, you can configure it with the Bucketeer settings in your app config or on your SSO page.
$ aws configure
AWS Access Key ID [None]: <BUCKETEER_AWS_ACCESS_KEY_ID>
AWS Secret Access Key [None]: <BUCKETEER_AWS_SECRET_ACCESS_KEY>
Default region name [None]: <BUCKETEER_AWS_REGION>
Default output format [None]:
Don’t forget to use your bucket’s s3 URL when interacting with the aws s3 commands: s3://<your-bucket-name>
Uploading files
Use the aws s3 cp command with the bucket URL to upload files. Anything uploaded with the public/ prefix is automatically available. For other files, you need to set a public ACL manually (see the sketch after the example below).
$ echo "<h1>hello, world</h1>" > hello.html
$ aws s3 cp hello.html s3://<BUCKETEER_BUCKET_NAME>/public/hello.html
upload: ./hello.html to s3://<BUCKETEER_BUCKET_NAME>/public/hello.html
$ curl https://<BUCKETEER_BUCKET_NAME>.s3.amazonaws.com/public/hello.html
<h1>hello, world</h1>
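For files that don’t live under the public/ prefix, one option is to set a public-read ACL at upload time with the --acl flag on aws s3 cp. A minimal sketch, assuming your bucket’s public-access-block settings permit public ACLs (see Public Files below):
$ aws s3 cp hello.html s3://<BUCKETEER_BUCKET_NAME>/hello.html --acl public-read
$ curl https://<BUCKETEER_BUCKET_NAME>.s3.amazonaws.com/hello.html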
Listing files
$ aws s3 ls s3://<BUCKETEER_BUCKET_NAME>
PRE public/
$ aws s3 ls s3://<BUCKETEER_BUCKET_NAME>/public/
2016-06-17 22:17:28 22 hello.html
Deleting files
$ aws s3 rm s3://<BUCKETEER_BUCKET_NAME>/public/hello.html
Public Files
AWS has disabled public access to files by default for every bucket.
First, run the following command to enable public access for your bucket:
$ aws s3api put-public-access-block --bucket <BUCKETEER_BUCKET_NAME> --public-access-block-configuration BlockPublicAcls=TRUE,IgnorePublicAcls=TRUE,BlockPublicPolicy=FALSE,RestrictPublicBuckets=FALSE
This configuration keeps public ACLs disabled and allows public access via bucket policies, which we create next.
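If you want to double-check the block configuration, the matching s3api call prints it back:
$ aws s3api get-public-access-block --bucket <BUCKETEER_BUCKET_NAME>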
Then, create a bucket policy (public.json) that enables public access. Here’s an example that uses a folder named public/:
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": "*",
"Action": "s3:GetObject",
"Resource": "arn:aws:s3:::<BUCKETEER_BUCKET_NAME>/public/*"
}
]
}
Finally, use the AWS CLI to upload the policy. Assuming you saved the policy as public.json:
$ aws s3api put-bucket-policy --bucket <BUCKETEER_BUCKET_NAME> --policy file://public.json
Common Bucket Policy Changes
Below are some common policies you may want to apply to your bucket that are not related to serving public files.
Private Space VPC Configuration
The following policy JSON makes your bucket accessible only from within a Heroku Private Space. Use the AWS VPC ID from the heroku spaces:peering:info command:
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "VPCAllow",
"Effect": "Allow",
"Principal": "*",
"Action": "s3:*",
"Resource": "arn:aws:s3:::<BUCKETEER_BUCKET_NAME>/*",
"Condition": {
"StringEquals": {
"aws:sourceVpc": "<AWS_VPC_ID>"
}
}
}
]
}
Upload that file (policy.json) with:
$ aws s3api put-bucket-policy --bucket <BUCKETEER_BUCKET_NAME> --policy file://policy.json
Confirm the change via:
$ aws s3api get-bucket-policy --bucket <BUCKETEER_BUCKET_NAME> --query Policy --output text
This restricts access to your bucket to instances within that VPC.
CORS settings
The AWS CLI can provide a convenient way for you to update your CORS configuration.
Note this command is in the s3api subcommand (not s3) and the bucket name is specified via the --bucket flag without the s3:// prefix.
Salesforce Configuration
The following JSON enables your bucket to work with the SFDC dashboard:
{
"CORSRules": [
{
"AllowedHeaders": [ "*" ],
"ExposeHeaders": ["ETag", "x-amz-meta-custom-header"],
"AllowedMethods": [
"HEAD",
"GET",
"PUT",
"POST",
"DELETE"
],
"AllowedOrigins": [
"*"
]
}
]
}
Save that file as cors.json and upload it with:
$ aws s3api put-bucket-cors --bucket <BUCKETEER_BUCKET_NAME> --cors-configuration file://cors.json
Confirm the change via:
$ aws s3api get-bucket-cors --bucket <BUCKETEER_BUCKET_NAME> --output json
Note: you will eventually want to change the AllowedOrigins parameter to the URL you will be accessing the bucket from.
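For example, a tightened cors.json for a single origin could look like the sketch below (https://www.example.com is a placeholder for your own domain; keep whichever methods and headers your client actually needs) and is uploaded the same way:
$ cat > cors.json <<'EOF'
{
  "CORSRules": [
    {
      "AllowedHeaders": ["*"],
      "AllowedMethods": ["HEAD", "GET", "PUT", "POST", "DELETE"],
      "AllowedOrigins": ["https://www.example.com"]
    }
  ]
}
EOF
$ aws s3api put-bucket-cors --bucket <BUCKETEER_BUCKET_NAME> --cors-configuration file://cors.json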
Using with Ruby/Rails
Ruby on Rails applications will need to add the following entry into their Gemfile specifying the aws-sdk client library.
gem 'aws-sdk'
Update application dependencies with bundler.
$ bundle install
Configuring the S3 Client:
In config/initializers/aws.rb, you can configure the Aws object globally:
Aws.config[:credentials] = Aws::Credentials.new(
ENV['BUCKETEER_AWS_ACCESS_KEY_ID'],
ENV['BUCKETEER_AWS_SECRET_ACCESS_KEY']
)
Aws.config[:region] = ENV['BUCKETEER_AWS_REGION']
S3 = Aws::S3::Client.new
Every time you call Aws::S3::Client.new, it will be ready to talk to S3 via Bucketeer.
Alternatively, you can configure the S3 client dynamically:
class S3
def self.client
creds = Aws::Credentials.new(ENV['BUCKETEER_AWS_ACCESS_KEY_ID'], ENV['BUCKETEER_AWS_SECRET_ACCESS_KEY'])
Aws::S3::Client.new(credentials: creds, region: ENV['BUCKETEER_AWS_REGION'])
end
end
This may be useful if you have multiple credentials or need to construct a client per-request.
See the S3 SDK for full reference.
Uploading Files
Once you have the client configured, you can start making requests.
Here we are storing a private object with the key hello and a value of world to S3.
Aws::S3::Client.new.put_object(
bucket: ENV['BUCKETEER_BUCKET_NAME'],
key: 'hello',
body: 'world',
)
Public files
Using the Public folder
Here we are storing a public object with the key public/index.html so we can serve our website directly from S3. Bucketeer comes preconfigured to make objects whose keys begin with the public/ prefix publicly available.
key = 'public/index.html'
Aws::S3::Client.new.put_object(
bucket: ENV['BUCKETEER_BUCKET_NAME'],
key: key,
body: '<h1>Hello, world</h1>',
)
Using S3 ACLs
You can also just override the default visibility on a per-object basis:
Aws::S3::Client.new.put_object(
bucket: ENV['BUCKETEER_BUCKET_NAME'],
acl: 'public-read',
key: 'hello',
body: 'world',
)
Using with Node.js
Node.js applications will need to add the following entry into their package.json specifying the aws-sdk client library.
npm install aws-sdk --save
Update application dependencies with npm.
$ npm install
Configure the S3 Client:
Globally:
process.env.AWS_ACCESS_KEY_ID = process.env.BUCKETEER_AWS_ACCESS_KEY_ID;
process.env.AWS_SECRET_ACCESS_KEY = process.env.BUCKETEER_AWS_SECRET_ACCESS_KEY;
process.env.AWS_REGION = process.env.BUCKETEER_AWS_REGION;
var AWS = require('aws-sdk');
var s3 = new AWS.S3();
Or per-client:
var AWS = require('aws-sdk');
var s3 = new AWS.S3({
accessKeyId: process.env.BUCKETEER_AWS_ACCESS_KEY_ID,
secretAccessKey: process.env.BUCKETEER_AWS_SECRET_ACCESS_KEY,
region: process.env.BUCKETEER_AWS_REGION,
});
And start making requests via the S3 SDK.
var params = {
Key: 'hello',
Bucket: process.env.BUCKETEER_BUCKET_NAME,
Body: Buffer.from('Hello, node.js'),
};
s3.putObject(params, function (err, data) {
  if (err) {
    console.log(err, err.stack);
    return;
  }
  console.log(data);
  // Re-use the same Bucket and Key to read the object back.
  delete params.Body;
  s3.getObject(params, function (err, data) {
    if (err) {
      console.log(err, err.stack);
      return;
    }
    console.log(data);
    console.log(data.Body.toString());
  });
});
Using with other Languages
To use Bucketeer with other languages, use the AWS SDK for that language.
Dashboard
The Bucketeer dashboard allows you to view your current plan and bucket name.
The dashboard can be accessed via the CLI:
$ heroku addons:open bucketeer
Opening bucketeer for sharp-mountain-4005
or by visiting the Heroku Dashboard and selecting the application in question. Select Bucketeer from the Add-ons menu.
Troubleshooting
Use the AWS status page to confirm “Amazon Simple Storage Service” is working properly.
Migrating between plans
There is no downtime when migrating between plans. Your data remains accessible whenever S3 is available.
Use the heroku addons:upgrade command to migrate to a new plan.
$ heroku addons:upgrade bucketeer:newplan
-----> Upgrading bucketeer:newplan to sharp-mountain-4005... done, v18 ($49/mo)
Your plan has been updated to: bucketeer:newplan
Removing the add-on
Bucketeer can be removed via the CLI.
This will destroy all associated data and cannot be undone!
$ heroku addons:destroy bucketeer
-----> Removing bucketeer from sharp-mountain-4005... done, v20 (free)
Before removing Bucketeer, you can export your data by using your S3 client to download it.
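One way to do that export is with aws s3 sync, which mirrors the whole bucket into a local directory (./bucketeer-backup below is just an example destination):
$ aws s3 sync s3://<BUCKETEER_BUCKET_NAME> ./bucketeer-backup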
Support
All Bucketeer support and runtime issues should be submitted via one of the Heroku Support channels. Non-support-related issues and product feedback are welcome at bucketeer@c2industries.com.
Further Reading
- Bucketeer Examples: code examples of configuring and interacting with S3 from Ruby and Node.js.
- Uploads to S3 from Heroku: examples of uploading files to S3 in multiple languages.
- CORS Uploads to S3 from Rails: this technique lets you skip uploading files to your Rails servers (which would then have to upload them to S3) and instead upload to S3 directly from the browser. Although the server-side code is Rails, the JavaScript technique is portable.