Question

I was reading a blog entry by Josh Smith where he used a cache mechanism in order to "reduce managed heap fragmentation". His caching reduces the number of short-lived objects being created at the cost of slightly slower execution speed.

How much of a problem is managed heap fragmentation in a managed language like C#? How can you diagnose if it's an issue? In what situations would you typically need to address it?


Solution

In what situations would you typically need to address it?

Not too quickly. Short-lived objects are generally very cheap. For a cache to be profitable there would have to be (very) many candidates, and they would have to live long enough to be promoted to the next generation.
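You can watch that promotion happen yourself with `GC.GetGeneration`. A minimal sketch (exact promotion behavior can vary by GC mode, so treat this as illustrative):

```csharp
using System;

class GenerationDemo
{
    static void Main()
    {
        var obj = new object();

        // Freshly allocated objects start in generation 0.
        Console.WriteLine(GC.GetGeneration(obj) == 0);

        // An object that survives a collection is promoted to an older generation.
        GC.Collect();
        Console.WriteLine(GC.GetGeneration(obj) >= 1);

        GC.KeepAlive(obj); // ensure obj stays live through the collection above
    }
}
```

Objects that die before the next collection cost almost nothing to reclaim; it is only the survivors that make the GC do real work.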

How can you diagnose if it's an issue?

With a profiler. I'm not so sure the author of the article did that.

How much of a problem is managed heap fragmentation in a managed language like C#?

As far as I know it is rare. .NET has a compacting garbage collector, which prevents most forms of fragmentation. There are occasional issues with the Large Object Heap (LOH).


Edit:

When you go through the comments below the article, you will find that someone measured it and found the cache to be a lot slower than creating a new EventArgs instance each time.

Conclusion: Measure before you start optimizing. This was not a good idea/example.
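A minimal `Stopwatch` sketch of that kind of measurement (not the article's actual benchmark; `Consume` is a hypothetical stand-in for raising an event, and exact timings will vary by machine):

```csharp
using System;
using System.Diagnostics;

class MeasureFirst
{
    const int Iterations = 10_000_000;

    // The framework itself already caches one reusable instance: EventArgs.Empty.
    static readonly EventArgs Cached = EventArgs.Empty;

    static void Main()
    {
        var sw = Stopwatch.StartNew();
        for (int i = 0; i < Iterations; i++) Consume(new EventArgs());
        sw.Stop();
        Console.WriteLine($"new:    {sw.ElapsedMilliseconds} ms");

        sw.Restart();
        for (int i = 0; i < Iterations; i++) Consume(Cached);
        sw.Stop();
        Console.WriteLine($"cached: {sw.ElapsedMilliseconds} ms");
    }

    // Stand-in for the event-raising code; prevents the loop being optimized away.
    static void Consume(EventArgs e) => GC.KeepAlive(e);
}
```

Whatever the numbers turn out to be on your machine, the point stands: get numbers first, then decide whether a cache is worth its complexity.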

Other tips

Unless you are dealing with 10K+ small short-lived objects per second, it should not be an issue at all on a modern computer with a reasonable amount of RAM.

So first run the code in all reasonable scenarios, and if it's fast enough, don't worry about it.

If you are not happy with the speed, you see that the code sometimes 'chokes', or you are just curious, you can monitor various .NET memory stats (http://msdn.microsoft.com/en-us/library/x2tyfybc.aspx) in the Performance Monitor app (which comes as part of Windows). Specifically, you are interested in % Time in GC.
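The same counter can be read programmatically via `System.Diagnostics.PerformanceCounter`. A sketch, assuming Windows and the classic .NET CLR counters (the counter instance name is usually the process name; on modern cross-platform .NET you would use EventCounters or `dotnet-counters` instead):

```csharp
using System;
using System.Diagnostics;

class GcTimeMonitor
{
    static void Main()
    {
        // Counter instances for ".NET CLR Memory" are keyed by process name.
        string instance = Process.GetCurrentProcess().ProcessName;

        using var counter = new PerformanceCounter(
            ".NET CLR Memory", "% Time in GC", instance, readOnly: true);

        // The first sample of some counters is a baseline; read twice.
        counter.NextValue();
        Console.WriteLine($"% Time in GC: {counter.NextValue():F1}");
    }
}
```

As a very rough rule of thumb, a sustained % Time in GC in the double digits is worth investigating.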

Redgate's ANTS profiler also monitors these stats.

Managed heap fragmentation is usually caused by pinning objects. An object gets pinned when managed code passes a pointer to it to native code; the object then cannot be moved while native code holds that reference. This is very common when there is a lot of I/O activity. As mentioned above, it usually only matters on the LOH.
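A sketch of what such pinning looks like, using `GCHandle` (the `ptr` here is just illustrative; in real code it would be handed to a P/Invoke or overlapped I/O call):

```csharp
using System;
using System.Runtime.InteropServices;

class PinningDemo
{
    static void Main()
    {
        byte[] buffer = new byte[4096];

        // While the handle is allocated as Pinned, the GC may not move the array,
        // so it becomes an immovable island the compactor must work around.
        GCHandle handle = GCHandle.Alloc(buffer, GCHandleType.Pinned);
        try
        {
            IntPtr ptr = handle.AddrOfPinnedObject();
            // ptr could now be passed to native code for the duration of an I/O call.
            Console.WriteLine(ptr != IntPtr.Zero);
        }
        finally
        {
            // Unpin as soon as possible: long-lived pins are what cause fragmentation.
            handle.Free();
        }
    }
}
```

The shorter the pin lives, the less chance the GC needs to compact around it; long-lived pinned buffers are the typical culprit.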


Unlike the other answers given here, I say: yes, you should take care of fragmentation! It applies not only to managed heaps but to any application that handles (at least)

  • many "large" resources in
  • a heavy allocation pattern.

Since the LOH does not get compacted, it will most probably become fragmented over time once the size and number of objects exceed a certain value (which relates to the overall maximum heap size available). If it does, the only safe way is to limit the number of references to those objects held at any one time. A cache (pool) would only help if the pooled objects can be reused. Sometimes, if these resources consist of arrays of varying length, for example, they may not be easily reusable, so pooling may not help much here.
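One mitigation for the varying-length problem is `System.Buffers.ArrayPool<T>`, which hands out arrays that are *at least* the requested size, so variable-length requests can still be served from a small set of reusable buckets. A sketch (the pool was added in later .NET versions and is an alternative approach, not something the original answer proposed):

```csharp
using System;
using System.Buffers;

class PoolingDemo
{
    static void Main()
    {
        // 100,000 bytes exceeds the ~85,000-byte LOH threshold, so a plain
        // `new byte[100_000]` would land on the LOH. Renting reuses buffers instead.
        byte[] buffer = ArrayPool<byte>.Shared.Rent(100_000);
        try
        {
            // The rented array may be larger than requested; use only the slice you need.
            Console.WriteLine(buffer.Length >= 100_000);
        }
        finally
        {
            ArrayPool<byte>.Shared.Return(buffer);
        }
    }
}
```

Because the pool rounds sizes up to bucket boundaries, many different requested lengths map onto the same reusable arrays, which is exactly what defuses the "arrays of varying length" objection.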

How do you detect it? Whenever there is heavy pressure on the LOH. How do you find that out? Use the .NET performance counters "Collection Count Gen 0...2" side by side. If too many large objects are allocated on the LOH, all three counters will evolve identically, meaning basically every collection is an expensive generation-2 collection. In that case, something should be done.
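The same counts are available in-process through `GC.CollectionCount`, which makes a quick sanity check easy. A sketch showing that collecting LOH allocations requires full generation-2 collections:

```csharp
using System;

class CollectionCounts
{
    static void Main()
    {
        int g0 = GC.CollectionCount(0);
        int g1 = GC.CollectionCount(1);
        int g2 = GC.CollectionCount(2);
        Console.WriteLine($"Gen0={g0} Gen1={g1} Gen2={g2}");

        // Allocate many large (>85,000-byte) arrays: each goes straight to the LOH,
        // and the LOH is only reclaimed during generation-2 collections.
        for (int i = 0; i < 100; i++)
        {
            var large = new byte[100_000];
            GC.KeepAlive(large);
        }
        GC.Collect(); // forces a full (gen-2) collection

        Console.WriteLine($"Gen2 delta: {GC.CollectionCount(2) - g2}");
    }
}
```

If the three counters climb in lockstep in a real run (rather than Gen0 far outpacing Gen2, as is healthy), you are paying full-collection prices for every reclaim.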

Regarding smaller objects, I would let the GC do all the work in Gen 0 collections and not worry.

License: CC-BY-SA with attribution
Not affiliated with StackOverflow