Why Large Object Heap and why do we care?

.Net | Garbage Collection | Clr | Large Object Heap

.Net Problem Overview


I have read about generations and the Large Object Heap, but I still fail to understand the significance (or benefit) of having a Large Object Heap.

What would have gone wrong (in terms of performance or memory) if the CLR had just relied on Generation 2 for storing large objects (considering that the Gen0 and Gen1 segments are too small to handle them)?

.Net Solutions


Solution 1 - .Net

A garbage collection doesn't just get rid of unreferenced objects, it also compacts the heap. That's a very important optimization. It doesn't just make memory usage more efficient (no unused holes), it makes the CPU cache much more efficient. The cache is a really big deal on modern processors; they are easily an order of magnitude faster than the memory bus.

Compacting is done simply by copying bytes. That however takes time. The larger the object, the more likely that the cost of copying it outweighs the possible CPU cache usage improvements.

So they ran a bunch of benchmarks to determine the break-even point, and arrived at 85,000 bytes as the cutoff point where copying no longer improves performance. With a special exception for arrays of double: they are considered 'large' when the array has more than 1000 elements. That's another optimization for 32-bit code: the large object heap allocator has the special property that it allocates memory at addresses aligned to 8, unlike the regular generational allocator, which only aligns to 4. That alignment is a big deal for double; reading or writing a mis-aligned double is very expensive. Oddly, the sparse Microsoft info never mentions arrays of long, not sure what's up with that.
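The cutoff can be observed directly: a freshly allocated object that landed on the LOH reports as generation 2, while an ordinary allocation starts in generation 0. A minimal sketch (the exact boundary shifts slightly with array header overhead and runtime version, so the sizes here are illustrative):

```csharp
using System;

class LohThresholdDemo
{
    static void Main()
    {
        // Arrays comfortably below the ~85,000-byte cutoff go to the
        // small object heap and start life in generation 0.
        byte[] small = new byte[80_000];

        // Arrays above the cutoff go straight to the LOH, which
        // GC.GetGeneration reports as generation 2.
        byte[] large = new byte[90_000];

        Console.WriteLine(GC.GetGeneration(small)); // typically 0
        Console.WriteLine(GC.GetGeneration(large)); // 2

        // The double[] exception: on 32-bit, more than 1000 elements
        // (8,000+ bytes of payload) already counts as 'large'.
        // On 64-bit this array is far below the cutoff and stays in gen 0.
        double[] doubles = new double[1001];
        Console.WriteLine(GC.GetGeneration(doubles));
    }
}
```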

Fwiw, there's lots of programmer angst about the large object heap not getting compacted. This invariably gets triggered when they write programs that consume more than half of the entire available address space, followed by using a tool like a memory profiler to find out why the program bombed even though there was still lots of unused virtual memory available. Such a tool shows the holes in the LOH: unused chunks of memory where a large object previously lived but got garbage collected. Such is the inevitable price of the LOH; a hole can only be re-used by an allocation for an object that's equal or smaller in size. The real problem is assuming that a program should be allowed to consume all virtual memory at any time.

A problem that otherwise disappears completely by just running the code on a 64-bit operating system. A 64-bit process has 8 terabytes of virtual memory address space available, 3 orders of magnitude more than a 32-bit process. You just can't run out of holes.

Long story short, the LOH makes code run more efficiently, at the cost of using the available virtual memory address space less efficiently.


UPDATE: .NET 4.5.1 now supports compacting the LOH via the GCSettings.LargeObjectHeapCompactionMode property. Beware the consequences, please.
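The opt-in compaction works by setting the mode before forcing a blocking full collection; the setting automatically reverts to Default once the compacting collection has run. A minimal sketch:

```csharp
using System;
using System.Runtime;

class LohCompactionDemo
{
    static void Main()
    {
        // Request that the next blocking gen-2 collection also compact the LOH.
        GCSettings.LargeObjectHeapCompactionMode =
            GCLargeObjectHeapCompactionMode.CompactOnce;

        // A blocking full collection performs the compaction; afterwards
        // the runtime resets the mode to Default on its own.
        GC.Collect();

        Console.WriteLine(GCSettings.LargeObjectHeapCompactionMode); // Default
    }
}
```

Note this is a one-shot request, which is why the "beware the consequences" warning matters: the compaction pause is proportional to how much live LOH data has to be moved.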

Solution 2 - .Net

The essential difference between the Small Object Heap (SOH) and the Large Object Heap (LOH) is that memory in the SOH gets compacted when collected, while the LOH does not, as this article illustrates. Compacting large objects costs a lot. Following the examples in the article: say moving a byte in memory takes 2 cycles; then compacting an 8 MB object on a 2 GHz machine takes about 8 ms, which is a large cost. Considering that large objects (arrays in most cases) are quite common in practice, I suppose that's the reason why Microsoft pins large objects in memory and proposed the LOH.
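The ~8 ms figure follows from a quick back-of-envelope calculation (using the article's assumed 2 cycles per copied byte, which is itself only a rough model of memcpy cost):

```csharp
using System;

class CompactionCostEstimate
{
    static void Main()
    {
        long objectBytes = 8L * 1024 * 1024;  // an 8 MB object
        double cyclesPerByte = 2.0;           // assumed copy cost per byte
        double clockHz = 2_000_000_000.0;     // 2 GHz clock

        // 16,777,216 cycles / 2e9 Hz ≈ 0.0084 s
        double seconds = objectBytes * cyclesPerByte / clockHz;
        Console.WriteLine($"{seconds * 1000:F1} ms"); // ≈ 8.4 ms
    }
}
```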

BTW, according to this post, the LOH usually doesn't cause memory fragmentation problems.

Solution 3 - .Net

If an object's size is greater than a certain threshold (85,000 bytes in .NET 1.x), the CLR puts it in the Large Object Heap. This optimises:

  1. Object allocation (small objects are not mixed with large objects)
  2. Garbage collection (LOH collected only on full GC)
  3. Memory defragmentation (the LOH is rarely, if ever, compacted)
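Point 2 can be demonstrated with a weak reference: a dead LOH object survives gen-0/gen-1 collections and is only reclaimed by a full (gen-2) collection. A minimal sketch (behavior assumes a release build, where the JIT doesn't artificially extend the dead array's lifetime):

```csharp
using System;

class LohFullGcDemo
{
    // Allocate in a helper so no live root to the array remains afterwards.
    static WeakReference Allocate()
    {
        return new WeakReference(new byte[100_000]); // lands on the LOH
    }

    static void Main()
    {
        WeakReference wr = Allocate();

        GC.Collect(1); // gen-0/gen-1 only: the LOH is not examined
        Console.WriteLine(wr.IsAlive); // typically True

        GC.Collect();  // full collection: LOH garbage is reclaimed
        Console.WriteLine(wr.IsAlive); // False
    }
}
```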

Solution 4 - .Net

The principle is that it is unlikely (and quite possibly bad design) for a process to create lots of short-lived large objects, so the CLR allocates large objects to a separate heap, on which it runs GC on a different schedule from the regular heap. http://msdn.microsoft.com/en-us/magazine/cc534993.aspx

Solution 5 - .Net

I am not an expert on the CLR, but I would imagine that having a dedicated heap for large objects can prevent unnecessary GC sweeps of the existing generational heaps. Allocating a large object requires a significant amount of contiguous free memory. In order to provide that from the scattered "holes" in the generational heaps, you'd need frequent compactions (which are only done with GC cycles).

Attributions

All content for this solution is sourced from the original question on Stackoverflow.

The content on this page is licensed under the Attribution-ShareAlike 4.0 International (CC BY-SA 4.0) license.

| Content Type | Original Author | Original Content on Stackoverflow |
|---|---|---|
| Question | Manish Basantani | View Question on Stackoverflow |
| Solution 1 - .Net | Hans Passant | View Answer on Stackoverflow |
| Solution 2 - .Net | grapeot | View Answer on Stackoverflow |
| Solution 3 - .Net | oleksii | View Answer on Stackoverflow |
| Solution 4 - .Net | Myles McDonnell | View Answer on Stackoverflow |
| Solution 5 - .Net | Chris Shain | View Answer on Stackoverflow |