Question

If we let the physical memory size remain constant,

  • What effect does the size of the page have on the number of frames?
  • What effect does the number of frames have on the number of page faults?

Also, please provide reference strings as an example.

Solution 2

After careful reading, I have come to understand that this is complex behavior: in this example the number of page faults is reduced both when the page size is doubled and when it is halved. The replacement algorithm considered is FIFO.

Note: a page fault is denoted by p and no page fault by n.
number of frames = physical memory size / page size

page size = frame size

Reference string sequence: 1 2 3 4 5 1 2 3 4 5
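
To make the fault counts in the conditions below easy to check, here is a minimal FIFO simulation sketch in Python (the function name count_fifo_faults is only an illustrative choice, not something from the original answer):

    from collections import deque

    def count_fifo_faults(reference_string, num_frames):
        """Count page faults under FIFO replacement."""
        frames = deque()   # resident pages, oldest on the left
        resident = set()   # same pages, for quick membership tests
        faults = 0
        for page in reference_string:
            if page in resident:
                continue                    # hit: no fault, FIFO order unchanged
            faults += 1                     # miss: page fault
            if len(frames) == num_frames:
                victim = frames.popleft()   # evict the page loaded earliest
                resident.remove(victim)
            frames.append(page)
            resident.add(page)
        return faults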

Condition 1: Initial

physical memory = 400 bytes
page size = 100 bytes
number of frames = 4

1 2 3 4 5 1 2 3 4 5
p p p p p p p p p p
1 1 1 1 5 5 5 5 4 4
  2 2 2 2 1 1 1 1 5
    3 3 3 3 2 2 2 2
      4 4 4 4 3 3 3

total page faults = 10
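
Using the count_fifo_faults sketch above, the same count can be reproduced:

    ref = [1, 2, 3, 4, 5, 1, 2, 3, 4, 5]
    print(count_fifo_faults(ref, 4))   # -> 10, matching the table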

Condition 2: When page size is halved

physical memory = 400 bytes
page size = 50 bytes
number of frames = 8

1 2 3 4 5 1 2 3 4 5
p p p p p n n n n n 
1 1 1 1 1 1 1 1 1 1
  2 2 2 2 2 2 2 2 2
    3 3 3 3 3 3 3 3
      4 4 4 4 4 4 4
        5 5 5 5 5 5

total page faults = 5, with 3 frames remaining unused.
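
The halved-page case can be checked the same way; with 8 frames only the compulsory (first-reference) faults remain:

    print(count_fifo_faults(ref, 8))   # -> 5 compulsory faults
    print(8 - len(set(ref)))           # -> 3 frames left unused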

Condition 3: When the page size is doubled

physical memory = 400 bytes
page size = 200 bytes
number of frames = 2

Each frame can now accommodate twice as much data, so each doubled page covers two of the original pages. Assuming original pages 1 and 2 share one doubled page, 3 and 4 share another, and 5 shares a third (with the unreferenced page 6), the reference string maps to doubled-page numbers as follows:

1 2 3 4 5 1 2 3 4 5 => 1 1 2 2 3 1 1 2 2 3

1 1 2 2 3 1 1 2 2 3
p n p n p p n p n p
1 1 1 1 3 3 3 2 2 2
    2 2 2 1 1 1 1 3

Total page faults = 6, still fewer than the 10 faults of Condition 1.
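
Under the same pairing assumption (pages 1 and 2 in one doubled page, 3 and 4 in another, 5 and 6 in a third), the mapping and the fault count can be reproduced with the sketch above:

    doubled_ref = [(p + 1) // 2 for p in ref]   # -> [1, 1, 2, 2, 3, 1, 1, 2, 2, 3]
    print(count_fifo_faults(doubled_ref, 2))    # -> 6, matching the table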

OTHER TIPS

Since the number of frames is equal to the size of the memory divided by the page size, increasing the page size will proportionately decrease the number of frames.

Having fewer frames will tend to increase the number of page faults because of the lower freedom in replacement choice. Imagine a system with four frames and a reference history of 0, 5, 4, 1. On a page fault, LRU would victimize page 0. With a doubling of page size the reference history becomes 0a, 2b, 2a, 0b; so the LRU victim would be doubled page 2 (corresponding to small pages 4 and 5) when one would prefer to victimize 0a (the first half of doubled page 0).
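
The victim choice described above can be sketched in the same style (lru_victim is a hypothetical helper; it picks the resident page whose most recent reference is oldest):

    def lru_victim(history):
        """Return the resident page whose last reference is oldest."""
        last_use = {page: i for i, page in enumerate(history)}
        return min(last_use, key=last_use.get)

    small_refs = [0, 5, 4, 1]                     # small-page reference history
    print(lru_victim(small_refs))                 # -> 0, the preferred victim

    doubled_refs = [p // 2 for p in small_refs]   # 0a, 2b, 2a, 0b -> [0, 2, 2, 0]
    print(lru_victim(doubled_refs))               # -> 2 (small pages 4 and 5)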

Large pages will also waste more space with internal fragmentation. If a typical process has three sections (text, heap, stack), on average about half a page per section is unused, so 1.5 pages' worth of memory is unused per process. If page size is doubled, this waste is doubled.

On the other hand, using larger pages will draw in more memory per fault, so the number of faults may decrease if there is limited contention and/or reasonably high spatial locality at the scale of page size (e.g., references to the high half of a double-size page occurring near in time to references to the low half would make replacement of a double-size page a close approximation of the replacement for the two smaller pages of which it is composed). (If memory is abundant, first reference [a.k.a. compulsory] page faults will tend to dominate, so larger pages will reduce the number of page faults.) Of course, the OS can use prefetching to accomplish the same reduction in faults with the benefit of being able to throttle prefetching under heavy memory pressure or poor prefetch behavior while avoiding the above mentioned disadvantages of large pages.

Of course, larger pages do reduce the number of TLB misses (sometimes called minor page faults), and an OS can support multiple page sizes and aggregate smaller pages to form larger pages (for reducing TLB misses) and deaggregate larger pages to form smaller pages (to reduce the volume of memory swapped and to reduce the above negative effects of larger pages).

Licensed under: CC-BY-SA with attribution
Not affiliated with cs.stackexchange