Question

Consider this quote from Mark Russinovich's Windows Internals books. It is about the large-page allocation mechanism, which is intended for allocating large non-paged memory blocks in physical memory:

http://books.google.com/books?id=CdxMRjJksScC&pg=PA194&lpg=PA194#v=onepage

Attempts to allocate large pages may fail after the operating system has been running for an extended period, because the physical memory for each large page must occupy a significant number (see Table 10-1) of physically contiguous small pages, and this extent of physical pages must furthermore begin on a large page boundary. (For example, physical pages 0 through 511 could be used as a large page on an x64 system, as could physical pages 512 through 1,023, but pages 10 through 521 could not.) Free physical memory does become fragmented as the system runs. This is not a problem for allocations using small pages but can cause large page allocations to fail.
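To restate the rule concretely, here is a tiny C sketch (my own illustration, not from the book): on x64 a 2 MB large page is 512 contiguous 4 KB frames, and, contiguity aside, the run must begin on a 512-frame boundary, which is exactly why the quote's pages 10 through 521 cannot work:

```c
/* Illustration only: the large-page boundary rule on x64
 * (2 MB large page = 512 contiguous 4 KB physical frames). */
#include <stdbool.h>
#include <stdio.h>

#define FRAMES_PER_LARGE_PAGE 512   /* 2 MB / 4 KB */

static bool starts_on_large_page_boundary(unsigned long long first_pfn)
{
    /* Contiguity aside, the run of frames must begin on a boundary. */
    return first_pfn % FRAMES_PER_LARGE_PAGE == 0;
}

int main(void)
{
    printf("frames 0..511:    %s\n", starts_on_large_page_boundary(0)   ? "usable" : "not usable");
    printf("frames 512..1023: %s\n", starts_on_large_page_boundary(512) ? "usable" : "not usable");
    printf("frames 10..521:   %s\n", starts_on_large_page_boundary(10)  ? "usable" : "not usable");
    return 0;
}
```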

If I understand this correctly, he's saying that fragmentation produced by scattered 4K pages can prevent successful allocation of large 2M pages in physical memory. But why? Ordinary 4K physical pages are easily relocatable and can also be easily swapped out. In other words, if we have a physical memory region not occupied by other 2M pages, we can always "clean it up": make it available by relocating any interfering 4K pages from that physical memory region to some other location. I.e. from the "naive" point of view, 2M allocations should "always succeed", as long as we have enough free physical RAM.

What is wrong with my logic? What exactly is Mark talking about when he says that physical memory fragmentation caused by 4K pages can prevent successful allocation of large pages?


Solution

It actually worked this way in Windows XP, but the cost was prohibitive and a design change in Vista disabled this approach. It's explained well in this blog post; I'll quote the essential part:

In Windows Vista, the memory manager folks recognized that these long delays made very large pages less attractive for applications, so they changed the behavior so requests for very large pages from applications went through the "easy parts" of looking for contiguous physical memory, but gave up before the memory manager went into desperation mode, preferring instead just to fail.
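For reference, this is roughly what the application side looks like today. It's a minimal sketch, not production code; it assumes the account has already been granted the "Lock pages in memory" right (SeLockMemoryPrivilege), and it simply reports the error when the memory manager gives up instead of compacting:

```c
/* Minimal sketch: ask Windows for one large page and report failure.
 * Build with a Windows SDK, e.g.: cl largepage.c advapi32.lib */
#include <windows.h>
#include <stdio.h>

/* Try to enable SeLockMemoryPrivilege for this process; the account
 * must already have been granted "Lock pages in memory". */
static BOOL enable_lock_memory_privilege(void)
{
    HANDLE token;
    TOKEN_PRIVILEGES tp;
    BOOL ok;

    if (!OpenProcessToken(GetCurrentProcess(),
                          TOKEN_ADJUST_PRIVILEGES | TOKEN_QUERY, &token))
        return FALSE;

    tp.PrivilegeCount = 1;
    tp.Privileges[0].Attributes = SE_PRIVILEGE_ENABLED;
    if (!LookupPrivilegeValue(NULL, SE_LOCK_MEMORY_NAME,
                              &tp.Privileges[0].Luid)) {
        CloseHandle(token);
        return FALSE;
    }

    AdjustTokenPrivileges(token, FALSE, &tp, 0, NULL, NULL);
    /* AdjustTokenPrivileges can "succeed" without granting anything. */
    ok = (GetLastError() == ERROR_SUCCESS);
    CloseHandle(token);
    return ok;
}

int main(void)
{
    if (!enable_lock_memory_privilege())
        fprintf(stderr, "warning: SeLockMemoryPrivilege not enabled\n");

    SIZE_T large_page = GetLargePageMinimum();   /* 2 MB on x64 */
    if (large_page == 0) {
        fprintf(stderr, "large pages not supported\n");
        return 1;
    }

    void *p = VirtualAlloc(NULL, large_page,
                           MEM_RESERVE | MEM_COMMIT | MEM_LARGE_PAGES,
                           PAGE_READWRITE);
    if (p == NULL) {
        /* Since Vista this path simply fails when no suitable aligned,
         * contiguous physical memory is found, rather than forcing the
         * memory manager into its old "desperation mode". */
        fprintf(stderr, "VirtualAlloc(MEM_LARGE_PAGES) failed: %lu\n",
                GetLastError());
        return 1;
    }

    printf("got a %zu-byte large page at %p\n", large_page, p);
    VirtualFree(p, 0, MEM_RELEASE);
    return 0;
}
```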

OTHER TIPS

He's talking about a specific problem that arises from allocating and freeing contiguous memory blocks over time, and you're describing a solution. Nothing is wrong with your logic; compaction is roughly what the .NET Garbage Collector does to reduce memory fragmentation. You're spot on.

If you have 10 seats per row at a baseball game and seats 2, 4, 6, and 8 are taken (fragmented), you will never be able to get 3 seats in that row for you and your friends unless you ask someone to move (compacted).

There's nothing special about the 4k blocks he's describing.
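To see the effect without touching Windows at all, here's a rough stand-alone simulation (my own sketch, not anyone's real allocator): scatter single 4 KB "allocations" across a pretend physical memory and count how many aligned 2 MB runs are left:

```c
/* Simulation only: fragment a fake physical memory with scattered
 * single-frame allocations and count surviving 2 MB large-page slots. */
#include <stdio.h>
#include <stdlib.h>

#define TOTAL_FRAMES  16384   /* 64 MB of 4 KB frames (small so rand() covers it) */
#define RUN             512   /* 4 KB frames per 2 MB large page */

static unsigned char used[TOTAL_FRAMES];

int main(void)
{
    int free_frames = 0, free_runs = 0;

    srand(1);

    /* Touch only ~5% of the frames, one frame at a time, at random spots. */
    for (int i = 0; i < TOTAL_FRAMES / 20; i++)
        used[rand() % TOTAL_FRAMES] = 1;

    for (int f = 0; f < TOTAL_FRAMES; f++)
        free_frames += !used[f];

    /* A large page needs a fully free run starting on a 512-frame boundary. */
    for (int start = 0; start + RUN <= TOTAL_FRAMES; start += RUN) {
        int ok = 1;
        for (int f = start; f < start + RUN; f++)
            if (used[f]) { ok = 0; break; }
        free_runs += ok;
    }

    printf("free 4 KB frames:       %d of %d\n", free_frames, TOTAL_FRAMES);
    printf("free 2 MB aligned runs: %d of %d\n", free_runs, TOTAL_FRAMES / RUN);
    return 0;
}
```

With only about 5% of the frames in use, roughly 95% of memory is still free, yet typically none of the 32 possible large-page slots survives — the baseball-seat problem at scale.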

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow