Troubleshooting Memory Issues in Java Applications

Last Updated: 28 October 2013

Tuning the memory use of your application requires understanding both how Java uses memory and how you can gain visibility into your application’s memory use.

If you have questions about Java on Heroku, consider discussing them in the Java on Heroku forums.

JVM memory usage

The JVM uses memory in a number of different ways. The primary, but not only, use of memory is the heap. Outside of the heap, memory is also consumed by the Permanent Generation (Perm Gen) and the stack.

Java Heap - The heap is where your Class instantiations or “Objects” are stored. Instance variables are stored in Objects. When discussing Java memory and optimization we most often discuss the heap because we have the most control over it and it is where Garbage Collection (and GC optimizations) take place. Heap size is controlled by the -Xms and -Xmx JVM flags. Read more about GC and The Heap

Java Stack - Each thread has its own call stack. The stack stores primitive local variables and object references along with the call stack (method invocations) itself. The stack is cleaned up as stack frames move out of context so there is no GC performed here. The -Xss JVM option controls how much memory gets allocated for each thread’s stack.
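
To make the per-thread nature of the stack concrete, here is a minimal sketch (the class name, thread name, and 256KB figure are only illustrative); threads created without an explicit size fall back to the -Xss default, and the JVM may treat the requested size only as a hint:

public class StackSizeDemo {
    public static void main(String[] args) throws InterruptedException {
        Runnable work = new Runnable() {
            public void run() {
                // This local primitive lives in the thread's private stack, not the heap;
                // it goes away as soon as the frame returns, with no GC involved.
                int counter = 42;
                System.out.println("worker finished, counter=" + counter);
            }
        };

        // The last constructor argument requests a 256KB stack for this one thread.
        // Threads created without it use the default controlled by -Xss.
        Thread worker = new Thread(null, work, "small-stack-thread", 256 * 1024);
        worker.start();
        worker.join();
    }
}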

Perm Gen - Permanent Generation space stores the Class definitions of your Objects. The JVM will also store some internal objects and compiler optimization information in Perm Gen. The size of Perm Gen is controlled by setting -XX:MaxPermSize. Read more about The Permanent Generation

Additional JVM overhead - In addition to the above values, some memory is consumed by the JVM itself. This includes the JVM’s native libraries and the C memory-allocation overhead needed to manage the memory pools above. Visibility tools that run on the JVM won’t show this overhead, so while they can give an idea of how an application uses memory, they can’t show the total memory use of the JVM process.
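
For a quick in-process view of these pools, the standard java.lang.management API exposes them. The sketch below is only illustrative (the class name is made up), and, as noted above, it reports the JVM-managed pools rather than the total footprint of the process:

import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;
import java.lang.management.MemoryUsage;

public class MemoryReport {
    public static void main(String[] args) {
        MemoryMXBean memory = ManagementFactory.getMemoryMXBean();

        // Heap: object instances, bounded by -Xms/-Xmx.
        MemoryUsage heap = memory.getHeapMemoryUsage();
        // Non-heap: Perm Gen and other JVM-internal pools (max may be reported as -1).
        MemoryUsage nonHeap = memory.getNonHeapMemoryUsage();

        long mb = 1024 * 1024;
        System.out.printf("heap used=%dM committed=%dM max=%dM%n",
            heap.getUsed() / mb, heap.getCommitted() / mb, heap.getMax() / mb);
        System.out.printf("nonheap used=%dM committed=%dM max=%dM%n",
            nonHeap.getUsed() / mb, nonHeap.getCommitted() / mb, nonHeap.getMax() / mb);
    }
}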

Profiling memory use of a Java application

It is important to understand how an application will use memory in both development and production environments. The majority of memory issues can be reproduced in any environment without significant effort. It is often easier to troubleshoot memory issues on your local machine because you’ll have access to more tools and won’t have to be as concerned with the side effects that monitoring tools may cause.

There are a number of tools available for gaining insight into Java application memory use. Some are packaged with the Java runtime itself and should already be on your development machine; others are available from third parties. This is not meant to be an exhaustive list, but rather a starting point for your exploration of these tools.

Tools that come with the Java runtime include jmap for taking heap dumps and gathering memory statistics, jstack for inspecting the threads running at any given time, jstat for gathering general JVM statistics, and jhat for analyzing heap dumps. Read more about these tools in the Oracle docs or at IBM developerWorks.
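
For example, with <pid> standing in for the process id of a locally running JVM, typical invocations look like this (exact flags can vary slightly between JDK versions):

$ jmap -dump:format=b,file=heap.hprof <pid>
$ jstack <pid>
$ jstat -gcutil <pid> 5000
$ jhat heap.hprof

The first writes a binary heap dump, the second prints every thread’s stack, the third samples GC utilization every five seconds, and the last serves a heap dump for browsing.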

VisualVM combines all of the tools above into a GUI based package that is more friendly for some users.

YourKit is a good commercially available tool.

Heroku memory limits

The amount of physical memory available to your application is 512MB on a 1X dyno and 1024MB on a 2X dyno. Your application is allowed to consume more memory than this, but the dyno will begin to page it to disk. Paging can seriously hurt performance and is best avoided. You’ll see R14 errors in your application logs when this paging starts to happen.

The default support for most JVM-based languages sets -Xmx384m and -Xss512k. These defaults leave room alongside the heap for Perm Gen, thread stacks, and JVM overhead, and will enable most applications to avoid R14 errors. See the Language Support Docs for your chosen language and framework for a full set of defaults.

Profiling memory use of a Java application on Heroku

The process isolation on Heroku’s cloud makes it impossible to use the local profiling tools mentioned above against a running dyno. If your memory issues cannot be reproduced locally, there are some other options for getting diagnostic information from a running Heroku application.

Memory logging agent

The memory logging agent is an extremely lightweight Java agent that sends memory usage information to your logs.

To use it, download the JAR and check it into your Heroku project. Then update your Java command line options to turn the agent on for your application:

$ heroku config:set JAVA_OPTS='-Xmx384m -Xss512k -XX:+UseCompressedOops -javaagent:heroku-javaagent-1.4.jar=stdout=true,lxmem=true'

Note: this command will replace all of your JAVA_OPTS, so the example prepends the default JVM options. If you’ve changed your JAVA_OPTS, you may have different values before -javaagent:heroku-javaagent-1.4.jar=stdout=true,lxmem=true.

The output in your logs will look something like:

source=web.1 measure.mem.jvm.heap.used=33M measure.mem.jvm.heap.committed=376M measure.mem.jvm.heap.max=376M
source=web.1 measure.mem.jvm.nonheap.used=19M measure.mem.jvm.nonheap.committed=23M measure.mem.jvm.nonheap.max=219M
source=web.1 measure.threads.jvm.total=21 measure.threads.jvm.daemon=11 measure.threads.jvm.nondaemon=1 measure.threads.jvm.internal=9

The Java memory logging agent is open source.

Verbose GC flags

If the above information is not detailed enough, there are also some JVM options that you can use to get verbose output in your logs at GC time. Add the following flags to your Java opts: -XX:+PrintGCDetails -XX:+PrintHeapAtGC -XX:+PrintGCDateStamps

$ heroku config:set JAVA_OPTS='-Xmx384m -Xss512k -XX:+UseCompressedOops -XX:+PrintGCDetails -XX:+PrintHeapAtGC -XX:+PrintGCDateStamps'
2012-07-07T04:27:59+00:00 app[web.2]: {Heap before GC invocations=43 (full 0):
2012-07-07T04:27:59+00:00 app[web.2]:  PSYoungGen      total 192768K, used 190896K [0x00000000f4000000, 0x0000000100000000, 0x0000000100000000)
2012-07-07T04:27:59+00:00 app[web.2]:   eden space 188800K, 100% used [0x00000000f4000000,0x00000000ff860000,0x00000000ff860000)
2012-07-07T04:27:59+00:00 app[web.2]:   from space 3968K, 52% used [0x00000000ffc20000,0x00000000ffe2c1e0,0x0000000100000000)
2012-07-07T04:27:59+00:00 app[web.2]:   to   space 3840K, 0% used [0x00000000ff860000,0x00000000ff860000,0x00000000ffc20000)
2012-07-07T04:27:59+00:00 app[web.2]:  ParOldGen       total 196608K, used 13900K [0x00000000e8000000, 0x00000000f4000000, 0x00000000f4000000)
2012-07-07T04:27:59+00:00 app[web.2]:   object space 196608K, 7% used [0x00000000e8000000,0x00000000e8d93070,0x00000000f4000000)
2012-07-07T04:27:59+00:00 app[web.2]:  PSPermGen       total 50816K, used 50735K [0x00000000dda00000, 0x00000000e0ba0000, 0x00000000e8000000)
2012-07-07T04:27:59+00:00 app[web.2]:   object space 50816K, 99% used [0x00000000dda00000,0x00000000e0b8bee0,0x00000000e0ba0000)
2012-07-07T04:27:59+00:00 app[web.2]: 2012-07-07T04:27:59.361+0000: [GC
2012-07-07T04:27:59+00:00 app[web.2]: Desired survivor size 3866624 bytes, new threshold 1 (max 15)
2012-07-07T04:27:59+00:00 app[web.2]:  [PSYoungGen: 190896K->2336K(192640K)] 204796K->16417K(389248K), 0.0058230 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
2012-07-07T04:27:59+00:00 app[web.2]: Heap after GC invocations=43 (full 0):
2012-07-07T04:27:59+00:00 app[web.2]:  PSYoungGen      total 192640K, used 2336K [0x00000000f4000000, 0x0000000100000000, 0x0000000100000000)
2012-07-07T04:27:59+00:00 app[web.2]:   eden space 188800K, 0% used [0x00000000f4000000,0x00000000f4000000,0x00000000ff860000)
2012-07-07T04:27:59+00:00 app[web.2]:   from space 3840K, 60% used [0x00000000ff860000,0x00000000ffaa82d0,0x00000000ffc20000)
2012-07-07T04:27:59+00:00 app[web.2]:   to   space 3776K, 0% used [0x00000000ffc50000,0x00000000ffc50000,0x0000000100000000)
2012-07-07T04:27:59+00:00 app[web.2]:  ParOldGen       total 196608K, used 14080K [0x00000000e8000000, 0x00000000f4000000, 0x00000000f4000000)
2012-07-07T04:27:59+00:00 app[web.2]:   object space 196608K, 7% used [0x00000000e8000000,0x00000000e8dc0330,0x00000000f4000000)
2012-07-07T04:27:59+00:00 app[web.2]:  PSPermGen       total 50816K, used 50735K [0x00000000dda00000, 0x00000000e0ba0000, 0x00000000e8000000)
2012-07-07T04:27:59+00:00 app[web.2]:   object space 50816K, 99% used [0x00000000dda00000,0x00000000e0b8bee0,0x00000000e0ba0000)
2012-07-07T04:27:59+00:00 app[web.2]: }

Heroku Labs: log-runtime-metrics

There is a Heroku Labs feature called log-runtime-metrics that prints diagnostic information, including total memory use, to your application logs.
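
You can enable it from the Heroku CLI and restart your dynos for it to take effect:

$ heroku labs:enable log-runtime-metrics
$ heroku restart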

New Relic

For some JVM languages and Java frameworks you can use the New Relic Java agent.

Memory tips for running on Heroku

  • Be mindful of thread use and stack size. The default option -Xss512k means that each thread will use up to 512KB of memory for its stack; the JVM default without this option is 1MB. A rough way to estimate this footprint is shown in the sketch after this list.
  • Be mindful of heavyweight monitoring agents. Some Java agents can use a significant amount of memory on their own and make memory problems worse while you’re troubleshooting. If you’re having memory issues, removing any agents is a good first step. The memory logging agent mentioned above has a very small memory footprint, so it won’t cause these issues.
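
For a rough sense of how thread count translates into stack memory, a sketch like the following can help (the class name is made up, and the 512KB figure simply matches the -Xss512k default above); it multiplies the live thread count by the configured stack size:

import java.lang.management.ManagementFactory;
import java.lang.management.ThreadMXBean;

public class StackBudget {
    public static void main(String[] args) {
        ThreadMXBean threads = ManagementFactory.getThreadMXBean();

        int liveThreads = threads.getThreadCount();
        long stackSizeKb = 512; // assumes the -Xss512k default described above

        // Worst case: every live thread grows its stack to the full -Xss limit,
        // all of it outside the heap bounded by -Xmx.
        System.out.printf("%d live threads x %dKB = up to %dKB of stack memory%n",
            liveThreads, stackSizeKb, liveThreads * stackSizeKb);
    }
}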

If you’re still having memory issues you can always contact Heroku Support.