What is garbage collection?
Garbage collection (GC) is a form of automatic memory management. In essence, the garbage collector attempts to reclaim garbage, that is, memory occupied by objects that are no longer relevant to the running program, while allowing the developer to focus on the application without having to free memory manually. My particular interest is how the Java Virtual Machine (JVM) affects Apache Cassandra: Cassandra is written in Java, so GC behavior can have a big impact on its performance.
The method used by the Java Virtual Machine (JVM) to track down all the live objects and to make sure that memory from non-reachable objects can be reclaimed is called the Mark and Sweep algorithm. It consists of two steps:
- The marking phase scans through all reachable objects and keeps a ledger of them in native memory.
- The sweeping phase makes sure the memory addresses allocated to non-reachable objects are reclaimed so that they can be used for new objects.
Different GC algorithms within the JVM, such as CMS or G1GC, implement these phases differently, but the concept described above remains the same for all of them.
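The two phases can be sketched in a few lines of Java. This is a hypothetical toy model (the real JVM works on raw heap memory, not on Java objects like these): marking is a graph traversal from the GC roots, and sweeping reclaims whatever was never marked.

```java
import java.util.*;

// Toy mark-and-sweep sketch. Obj, mark, and sweep are illustrative names,
// not JVM internals.
class MarkAndSweep {
    static class Obj {
        final String name;
        final List<Obj> refs = new ArrayList<>();
        boolean marked = false;
        Obj(String name) { this.name = name; }
    }

    // Marking phase: visit every object reachable from the GC roots.
    static void mark(Collection<Obj> roots) {
        Deque<Obj> stack = new ArrayDeque<>(roots);
        while (!stack.isEmpty()) {
            Obj o = stack.pop();
            if (!o.marked) {
                o.marked = true;
                stack.addAll(o.refs);
            }
        }
    }

    // Sweeping phase: anything left unmarked is garbage and gets reclaimed.
    static List<String> sweep(Collection<Obj> heap) {
        List<String> reclaimed = new ArrayList<>();
        for (Obj o : heap) {
            if (!o.marked) reclaimed.add(o.name);
            else o.marked = false; // reset the mark for the next GC cycle
        }
        return reclaimed;
    }

    public static void main(String[] args) {
        Obj a = new Obj("a"), b = new Obj("b"), c = new Obj("c");
        a.refs.add(b);                   // b is reachable through a
        List<Obj> heap = List.of(a, b, c);
        mark(List.of(a));                // a is the only GC root
        System.out.println(sweep(heap)); // prints [c] — only c is unreachable
    }
}
```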
A crucial thing to consider is that, for garbage collection to happen, the application threads need to be stopped: you cannot trace object references reliably if they keep changing during the process. The temporary pause during which the JVM performs such “housekeeping” activities is called a Stop The World pause. These pauses can happen for multiple reasons, with garbage collection being the principal one.
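To observe these pauses in practice, the JVM can log every GC event together with its pause time. A sketch of the relevant flags (the flag names changed between JDK 8 and the unified logging introduced in JDK 9; `com.mycompany.MyApplication` is a placeholder):

```shell
# JDK 8 and earlier
java -XX:+PrintGCDetails -XX:+PrintGCDateStamps -Xloggc:gc.log com.mycompany.MyApplication
# JDK 9 and later (unified logging)
java -Xlog:gc*:file=gc.log com.mycompany.MyApplication
```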
Garbage collection in Java
Whenever sweeping occurs, and blocks of memory are reclaimed, fragmentation ensues. Memory fragmentation behaves much like disk fragmentation and can lead to multiple problems:
- Writing operations become more inefficient since finding the next suitable block of sufficient size is no longer a trivial operation.
- When creating new objects, the JVM allocates memory in contiguous blocks. When fragmentation increases to the point where no single available block is big enough to accommodate the newly created object, an allocation error occurs.
To avoid these problems, the JVM performs memory defragmentation during garbage collection. This process moves all live objects close to each other, thereby reducing fragmentation.
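The idea of compaction can be sketched with a toy model (hypothetical: the heap is represented as a list of blocks, some live, some dead). Live blocks are slid toward the start of the heap so that the free space becomes one contiguous region:

```java
import java.util.*;

// Toy compaction sketch; Compactor is an illustrative name, not a JVM class.
class Compactor {
    // Slide the live blocks to the front; the dead blocks' space becomes
    // one contiguous free region (modeled here as trailing nulls).
    static List<String> compact(List<String> heap, Set<Integer> liveIndexes) {
        List<String> compacted = new ArrayList<>();
        for (int i = 0; i < heap.size(); i++) {
            if (liveIndexes.contains(i)) compacted.add(heap.get(i));
        }
        while (compacted.size() < heap.size()) compacted.add(null);
        return compacted;
    }

    public static void main(String[] args) {
        // Blocks 0 and 2 are live, 1 and 3 are garbage.
        List<String> heap = List.of("a", "b", "c", "d");
        System.out.println(compact(heap, Set.of(0, 2))); // prints [a, c, null, null]
    }
}
```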
Block compaction during a garbage collection
Weak Generational Hypothesis
Garbage collectors make assumptions about the way applications use objects. The most important of these assumptions is the weak generational hypothesis, which states that most objects survive for only a short period of time.
While naïve garbage collection examines every live object in the heap, generational collection exploits several empirically observed properties of most applications to minimize the work required to reclaim unused (garbage) objects:
- Most objects become unused soon after allocation.
- References from old objects to young objects exist only in small numbers.
Some objects live longer than others, and object lifetimes follow a distribution close to the one below. An efficient collection is made possible by exploiting the fact that a majority of objects “die young.”
Weak Generational Hypothesis – Object life cycle
Based on this hypothesis, the JVM memory is divided into generations: memory pools holding objects of different ages.
Most objects exist and die in the pool dedicated to young objects (the Young Generation pool). When the Young Generation pool fills up, a minor collection is triggered, in which only the young generation is collected. The cost of such a collection is proportional to the number of live objects being collected, and since the weak generational hypothesis states that most objects die young, the result is very efficient garbage collection.
A fraction of the objects that survive a minor collection get promoted to the Old Generation (or Tenured Generation), which is significantly larger than the Young Generation and deals with objects that are less likely to be garbage. Eventually, the Tenured Generation fills up and a major collection ensues, in which the entire heap is collected. Major collections usually last much longer than minor collections because a significantly larger number of objects are involved.
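The generational pools and their collectors can be inspected from within a running application through the standard `java.lang.management` API. A minimal sketch; the pool and collector names printed depend on which collector the JVM is using (e.g. "G1 Eden Space" under G1GC, "PS Eden Space" under the Parallel collector):

```java
import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryPoolMXBean;

// Lists the JVM's memory pools (Eden, Survivor, Old Gen, Metaspace, ...)
// and its garbage collectors, with how many collections each has run.
public class ShowGenerations {
    public static void main(String[] args) {
        for (MemoryPoolMXBean pool : ManagementFactory.getMemoryPoolMXBeans()) {
            System.out.printf("pool: %-25s used: %d bytes%n",
                    pool.getName(), pool.getUsage().getUsed());
        }
        for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
            System.out.printf("collector: %-25s collections: %d%n",
                    gc.getName(), gc.getCollectionCount());
        }
    }
}
```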
This approach also has some problems though:
- Objects from different generations may contain references to each other.
- Since GC algorithms are optimized for objects that either ‘die young’ or ‘live a long time’, the JVM behaves poorly with objects of ‘medium’ life expectancy.
Memory Pools
Heap memory pools
Young Generation
The young generation consists of three different spaces:
- one Eden space
- two Survivor spaces
Eden
Eden is the memory region where objects are allocated when they are created. Since we are usually talking about multi-threaded environments, where multiple threads are creating a lot of objects simultaneously, Eden is further divided into one or more Thread Local Allocation Buffers (TLABs). These buffers allow each thread to allocate objects in its own buffer, avoiding expensive lock-contention issues.
When allocation is not possible inside a TLAB (not enough room), the allocation is done in the shared Eden space. If there is no room in either the TLAB or the shared Eden space, a Young Generation garbage collection is triggered to free up space. If the garbage collection does not free up enough memory inside the Eden pool, the object is allocated directly in the Old Generation.
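TLAB behavior can be inspected and tuned through a few HotSpot flags. A sketch (TLABs are enabled by default, so these are rarely needed; `com.mycompany.MyApplication` is a placeholder):

```shell
# TLABs are on by default; -XX:-UseTLAB would disable them
java -XX:+UseTLAB com.mycompany.MyApplication
# Suggest an initial TLAB size instead of letting the JVM size it adaptively
java -XX:TLABSize=512k com.mycompany.MyApplication
# JDK 9+: log TLAB allocation activity
java -Xlog:gc+tlab=trace com.mycompany.MyApplication
```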
After the marking phase of the garbage collection identifies all live objects within Eden, all of them are copied to one of the Survivor spaces, and the entire Eden is cleared so that it can be used for new allocations. This approach is called “Mark and Copy”: the live objects are marked, and then copied (not moved) to a survivor space.
Eden spaces
Survivor Spaces
Adjacent to the Eden space are two survivor spaces. An important thing to notice is that one of the two survivor spaces is always empty. On every young generation garbage collection, the live objects from the Eden space and the live objects from the occupied survivor space (that is, all live Young Generation objects) are copied to the empty survivor space, leaving the former survivor space empty in turn.
Survivor Spaces cycle
This cycle of copying live objects between the two survivor spaces is repeated multiple times (up to 15 by default) until objects are deemed old enough to be promoted to the Old Generation, as they are expected to remain in use for a long time.
To determine whether an object is “old enough” to be promoted to the Old Generation, each object has its age incremented whenever it survives a GC cycle. When the age exceeds a certain tenuring threshold, the object is promoted.
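The whole minor-collection cycle — copying survivors between the two spaces, aging them, and promoting the old ones — can be sketched as a toy model. Everything here is illustrative (real HotSpot copies raw memory and tracks ages in object headers); the names and the threshold handling are assumptions of the sketch:

```java
import java.util.*;

// Toy model of a young-generation (minor) collection with age-based promotion.
class YoungGenCollector {
    static final int TENURING_THRESHOLD = 15; // HotSpot's default maximum

    static class Obj { String name; int age = 0; Obj(String n) { name = n; } }

    List<Obj> eden = new ArrayList<>();
    List<Obj> from = new ArrayList<>();  // occupied survivor space
    List<Obj> to   = new ArrayList<>();  // empty survivor space
    List<Obj> old  = new ArrayList<>();  // old generation

    void minorCollection(Set<Obj> live) {
        for (List<Obj> space : List.of(eden, from)) {
            for (Obj o : space) {
                if (!live.contains(o)) continue;       // dead: simply not copied
                o.age++;                               // survived one more cycle
                if (o.age > TENURING_THRESHOLD) old.add(o); // promotion
                else to.add(o);                        // copy to the empty space
            }
        }
        eden.clear();
        from.clear();
        // Swap roles: "to" becomes the new "from"; the old "from" is now empty.
        List<Obj> tmp = from; from = to; to = tmp;
    }

    public static void main(String[] args) {
        YoungGenCollector gc = new YoungGenCollector();
        Obj survivor = new Obj("survivor");
        gc.eden.add(survivor);
        gc.eden.add(new Obj("garbage"));
        gc.minorCollection(Set.of(survivor));
        System.out.println("live objects in survivor space: " + gc.from.size()); // prints 1
    }
}
```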
The actual tenuring threshold is dynamically adjusted by the JVM, but one can set its upper limit with -XX:MaxTenuringThreshold=<n>.
- Setting -XX:MaxTenuringThreshold=0 results in immediate promotion, without copying objects between Survivor spaces.
- On modern JVMs, -XX:MaxTenuringThreshold is set to 15 GC cycles by default. This is also the maximum value in HotSpot.
Promotion may also happen prematurely if the Survivor space is not big enough to hold all of the live objects in the Young Generation.
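To see how long objects actually survive before promotion, HotSpot can print the age distribution of the survivor spaces at every young collection. A sketch of the flags (JDK 8 vs. the JDK 9+ unified logging equivalent; `com.mycompany.MyApplication` is a placeholder):

```shell
# JDK 8 and earlier
java -XX:+PrintTenuringDistribution com.mycompany.MyApplication
# JDK 9 and later
java -Xlog:gc+age=trace com.mycompany.MyApplication
```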
Tenured Generation / Old Generation
The Old Generation is usually much larger than the Young Generation, and it holds the objects that are less likely to become garbage.
Garbage collections in the Old Generation pool happen less frequently than in the Young Generation, but they also take more time to complete, resulting in longer Stop The World pauses.
Since objects are expected to live a long time in the Old Generation pool, its cleaning algorithms are slightly different: instead of the Mark and Copy used in the Young Generation, live objects are marked and then moved (compacted) in place, to minimize fragmentation.
PermGen & Metaspace
PermGen
As of Java 8, the Permanent Generation (PermGen) space, which was created at startup and used to store metadata such as class definitions and interned strings, is gone. That metadata has moved to native memory, to an area known as the Metaspace.
The move was made because PermGen was really hard to tune and difficult to size. This created a lot of issues for Java developers, since it is very difficult to predict how much space all that metadata will require, resulting in lots of java.lang.OutOfMemoryError: PermGen space exceptions. The usual fix was to increase the maximum PermGen size, for example to 256MB:
java -XX:MaxPermSize=256m com.mycompany.MyApplication
Metaspace
With the Metaspace, metadata (such as class definitions) is located in native memory and does not interfere with regular heap objects. By default, the Metaspace size is limited only by the amount of native memory available to the Java process, which saves developers from the memory errors described earlier. The downside is that you now need to worry about the Metaspace footprint instead.
- By default, class metadata allocation is limited by the amount of available native memory (the capacity will of course depend on whether you use a 32-bit or 64-bit JVM, along with OS virtual memory availability).
- The MaxMetaspaceSize flag allows you to limit the amount of native memory used for class metadata. If you don’t set it, the Metaspace will dynamically resize depending on current demand.
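A sketch of how the Metaspace can be sized explicitly (`com.mycompany.MyApplication` is a placeholder; -XX:MetaspaceSize sets the initial threshold that triggers a Metaspace GC, -XX:MaxMetaspaceSize caps its growth):

```shell
java -XX:MetaspaceSize=128m -XX:MaxMetaspaceSize=256m com.mycompany.MyApplication
```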