Using AWS S3 to Store Static Assets and File Uploads
Last updated 15 November 2017
AWS Simple Storage Service (S3) is a “highly durable and available store” and can be used to reliably store application content such as media files, static assets, and user uploads. It allows you to offload your entire storage infrastructure and offers better scalability, reliability, and speed than storing files on the local filesystem.
AWS S3 and similar storage services are important when architecting applications for scale, and they are a perfect complement to Heroku’s ephemeral filesystem.
S3 is an object store rather than a traditional filesystem, and its semantics differ from those of other file-based services.
All files in S3 are stored in buckets, which act as top-level containers, much like directories. Every file sent to S3 belongs to a bucket, and bucket names must be unique across all of S3, not just within your account.
Access to the S3 API is governed by an Access Key ID and a Secret Access Key. The access key identifies your S3 user account while the secret key is akin to a password and should be kept secret.
Enabling an application to use S3 requires that the application have access to the AWS credentials as well as the name of the bucket to store files.
Because of the sensitive nature of the S3 credentials, they should never be stored in a file or committed to source control. On Heroku, such information is stored as application config vars. Use `heroku config:set` to set both keys:

```
$ heroku config:set AWS_ACCESS_KEY_ID=xxx AWS_SECRET_ACCESS_KEY=yyy
Adding config vars and restarting app... done, v21
AWS_ACCESS_KEY_ID => xxx
AWS_SECRET_ACCESS_KEY => yyy
```
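As a sketch of how an application might consume these config vars, here is a minimal Python helper (the function name is hypothetical, and the commented `boto3` usage assumes that library is installed; this article doesn’t prescribe a particular SDK):

```python
import os

def s3_settings():
    """Read S3 credentials from the environment (Heroku config vars)."""
    return {
        "aws_access_key_id": os.environ["AWS_ACCESS_KEY_ID"],
        "aws_secret_access_key": os.environ["AWS_SECRET_ACCESS_KEY"],
    }

# With boto3 installed, a client could then be built from these values:
# import boto3
# s3 = boto3.client("s3", **s3_settings())
```

Reading credentials from the environment at runtime keeps them out of source control, as described above.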
A single bucket typically stores the files, assets and uploads for an application. To create a bucket access the S3 section of the AWS Management Console and create a new bucket in the US Standard region.
Whilst you have a lot of freedom in choosing a bucket name, we suggest taking care in naming buckets for maximum interoperability. Store the bucket name in a config var to give your application access to its value.
```
$ heroku config:set S3_BUCKET_NAME=appname-assets
Adding config vars and restarting app... done, v22
S3_BUCKET_NAME => appname-assets
```
S3 bucket naming has a vast impact on which AWS features you can use, including basic features like encryption. For maximum compatibility, the ideal bucket name has:

- No capital letters
- No periods (`.`)
- No underscores (`_`)
- No dashes (`-`) at the beginning or end of the bucket name
- A length of 32 characters or fewer
The reasons for these suggestions on naming a bucket include:

- Periods (`.`) in a bucket name cause certificate failures when using SSL or TLS, because the bucket name becomes part of the hostname and the extra dot adds a subdomain level the wildcard certificate does not cover.
- Capital letters, underscores (`_`), adjacent dashes, starting or ending the bucket name with `-`, or using only `.` and numeral characters disable use of the preferred “Virtual Hosted” API format for S3 and are incompatible with some S3 regions.
- Bucket names longer than 32 characters cannot use AWS STS policies, which provide flexible, temporary credentials.
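The naming suggestions above can be sketched as a small validation helper. This is a hypothetical function that encodes only this article’s compatibility suggestions, not AWS’s full naming rule set:

```python
import re

def is_interoperable_bucket_name(name: str) -> bool:
    """Check a bucket name against the compatibility suggestions above."""
    if len(name) > 32:                 # STS policies require <= 32 characters
        return False
    if name != name.lower():           # no capital letters
        return False
    if "." in name or "_" in name:     # no periods or underscores
        return False
    if name.startswith("-") or name.endswith("-"):  # no leading/trailing dashes
        return False
    # only lowercase letters, digits, and dashes remain
    return re.fullmatch(r"[a-z0-9-]+", name) is not None

print(is_interoperable_bucket_name("appname-assets"))  # True
print(is_interoperable_bucket_name("My.Bucket_Name"))  # False
```

Validating names up front avoids discovering a region or TLS incompatibility after the bucket is already in use.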
Once uploaded, your application can reference its assets by copying their public URLs (such as `http://s3.amazonaws.com/bucketname/filename`) and pasting them directly into your app’s views or HTML files. These files are then served directly from S3, freeing up your application to serve only dynamic requests.
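For illustration, a path-style public URL like the one above could be assembled with a trivial helper (a hypothetical function, assuming the default `s3.amazonaws.com` endpoint and a publicly readable object):

```python
from urllib.parse import quote

def public_s3_url(bucket: str, key: str) -> str:
    """Build a path-style public URL for an object, as in the example above."""
    return f"https://s3.amazonaws.com/{bucket}/{quote(key)}"

print(public_s3_url("appname-assets", "img/logo.png"))
# https://s3.amazonaws.com/appname-assets/img/logo.png
```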
There are two approaches to processing and storing file uploads from a Heroku app to S3: direct and pass-through.
Direct upload

This is the preferred approach if you’re working with file uploads bigger than 4 MB. The idea is to skip the hop to your dyno by making a direct connection from the end user’s browser to S3. While this reduces the processing required by your application, it is a more complex implementation and limits your ability to modify the file (transform, filter, resize, etc.) before storing it in S3.
Most of the articles listed below demonstrate how to perform direct uploads.
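One common way to implement direct uploads is to generate presigned POST data server-side, which the browser then uses to send the file straight to S3. The sketch below assumes `boto3`; the bucket and key layout are placeholders, and credentials are expected at runtime as described earlier:

```python
import uuid

def upload_key(filename: str) -> str:
    """Generate a unique object key so concurrent user uploads never collide."""
    return f"uploads/{uuid.uuid4().hex}/{filename}"

def presigned_upload(bucket: str, filename: str) -> dict:
    """Return the URL and form fields a browser can POST the file to directly."""
    import boto3  # requires boto3 and AWS credentials at runtime
    s3 = boto3.client("s3")
    return s3.generate_presigned_post(
        Bucket=bucket,
        Key=upload_key(filename),
        ExpiresIn=3600,  # the signed form is valid for one hour
    )
```

The returned dict would be handed to the browser, which submits the file in a form POST directly to S3, so the upload bytes never pass through your dyno.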
Pass-through upload

Pass-through uploading sends the file from the user to your application on Heroku, which then uploads it to S3. Benefits of this approach include the ability to pre-process user uploads before storing them in S3. However, be careful that large files don’t tie up your application dynos during the upload process.
Large file uploads in single-threaded, non-evented environments (such as Rails) block your application’s web dynos and can cause request timeouts and H11/H12 errors. EventMachine, Node.js, and JVM-based languages are less susceptible to such issues. Be aware of this constraint and choose the right approach for your language or framework.
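A pass-through upload can be sketched as follows (assuming `boto3`; `MAX_BYTES` is a made-up guard illustrating the dyno-blocking caveat above, not an S3 limit):

```python
MAX_BYTES = 4 * 1024 * 1024  # hypothetical cap to keep large uploads off the dyno

def within_size_limit(content_length: int) -> bool:
    """Reject oversized uploads before streaming them through the dyno."""
    return 0 < content_length <= MAX_BYTES

def pass_through_upload(fileobj, bucket: str, key: str) -> None:
    """Stream a user-uploaded file object from the app dyno up to S3."""
    import boto3  # requires boto3 and AWS credentials at runtime
    s3 = boto3.client("s3")
    s3.upload_fileobj(fileobj, bucket, key)
```

Checking `Content-Length` before accepting the body lets the app pre-process small uploads while pushing anything larger toward the direct-upload approach.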
Here are language-specific guides to handling uploads to S3: