Question

I am currently learning about operating systems. In paging, if we increase the page size, how does internal fragmentation increase?


Solution

Quoting Wikipedia:

Rarely do processes require the use of an exact number of pages. As a result, the last page will likely only be partially full, wasting some amount of memory. Larger page sizes increase the potential for wasted memory this way, as more potentially unused portions of memory are loaded into main memory. Smaller page sizes ensure a closer match to the actual amount of memory required in an allocation.

As an example, assume the page size is 1024KB. If a process allocates 1025KB, two pages must be used, resulting in 1023KB of unused space (one page is completely filled with 1024KB, while the other holds only 1KB).
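To make the arithmetic concrete, here is a minimal sketch (the function name and values are mine, chosen to match the example above): a process always receives whole pages, so the pages needed is the allocation size rounded up to a page boundary, and the leftover in the last page is the internal fragmentation.

```python
import math

def internal_fragmentation(alloc_kb: int, page_kb: int) -> tuple[int, int]:
    """Return (pages_needed, wasted_kb) for an allocation of alloc_kb
    using page_kb-sized pages. Pages can only be allocated whole."""
    pages = math.ceil(alloc_kb / page_kb)      # round up to whole pages
    wasted = pages * page_kb - alloc_kb        # unused space in the last page
    return pages, wasted

# The example above: a 1025KB allocation with 1024KB pages.
print(internal_fragmentation(1025, 1024))      # -> (2, 1023)
```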

So let's say you have a process with a total memory footprint of 9*1024KB + 100KB (text, data, stack, heap). If you use a 1024KB page size, the process needs 10 pages, so there will be at most 10 page faults over its lifetime, and the internal fragmentation is ~924KB (the last page holds only 100KB).

If instead of 1024KB you use a 102400KB page size (100 times larger), the entire process fits in a single page, so there will be only 1 page fault over its lifetime, but the internal fragmentation grows to ~93084KB. That is how a larger page size increases internal fragmentation. And although you save the time spent handling all those page faults, you spend more time swapping this very large page between swap space and main memory, since other processes are contending for space in main memory.
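A quick sketch of this trade-off, reusing the same rounding logic (the footprint and page sizes are the hypothetical values from this example):

```python
import math

footprint_kb = 9 * 1024 + 100   # 9316KB total process footprint

for page_kb in (1024, 102400):
    pages = math.ceil(footprint_kb / page_kb)     # whole pages only
    wasted = pages * page_kb - footprint_kb       # internal fragmentation
    print(f"page size {page_kb}KB: {pages} page(s), ~{wasted}KB wasted")

# page size 1024KB: 10 page(s), ~924KB wasted
# page size 102400KB: 1 page(s), ~93084KB wasted
```

The larger page size trades fewer page faults for roughly 100 times more wasted memory.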

OTHER TIPS

Pages cannot be allocated as fractions; a process always receives whole pages. The last page is therefore usually only partly filled, and the larger the page size, the more space that last page can waste, so internal fragmentation increases.

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow