Question

I was recently asked a question: in a computer system where primary memory (RAM) is comparable in size to secondary memory (HDD), is there still a need to implement virtual memory? Since paging and segmentation add processing overhead (address translation and extra work on every context switch), would the benefits of virtual memory outweigh the processing overhead it requires? Can someone help me with this question? Thank you.


Solution

I'm going to dump my understanding of this matter, with absolutely no background credentials to back it up. Gonna get downvoted? :)

First up, by saying primary memory is comparable to secondary memory, I assume you mean in terms of capacity. (After all, accessing RAM is much faster than accessing storage.)

Now, as I understand it,

Random Access Memory is limited by the address space, i.e. the range of addresses the operating system can use to refer to memory. A 32-bit operating system is limited to roughly 4 GB of RAM, while 64-bit operating systems are theoretically limited to 16 exabytes of RAM, although Windows 7 caps it at 192 GB for the Ultimate edition, and 2 TB for Server 2008.
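If you're curious, here's a tiny C sketch (my own illustration, nothing authoritative) that prints the pointer width on your machine, which is what determines the size of the address space:

```c
#include <stdio.h>

int main(void) {
    // The width of a pointer is the width of the virtual address space.
    printf("Pointer size: %zu bits\n", sizeof(void *) * 8);

    // A 32-bit pointer can address 2^32 bytes = 4 GiB.
    // A 64-bit pointer could in theory address 2^64 bytes = 16 EiB,
    // though current x86-64 CPUs only decode 48 (or 57) of those bits.
    if (sizeof(void *) == 4)
        printf("Addressable: %llu bytes (4 GiB)\n", 1ULL << 32);
    else
        printf("Addressable (theoretical): 2^64 bytes (16 EiB)\n");
    return 0;
}
```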

Of course, there are still multiple factors, such as

  • the cost of manufacturing RAM (a single 8 GB DIMM still costs hundreds of dollars)

  • the number of DIMM slots on motherboards (I've seen boards with only 4 slots)

But for the purpose of this discussion, let us ignore these limitations and talk just about capacity.


Let us talk about how applications deal with memory nowadays. An application does not know how much memory exists - for the most part, it simply requests it from the operating system. The operating system is responsible for tracking which addresses have been allocated to each running application. If there is not enough to go around, well, bad things happen.
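To make that concrete, here's a minimal C sketch of an application requesting memory and handling the "bad things" case (the 64 MiB figure is arbitrary, just for illustration):

```c
#include <stdio.h>
#include <stdlib.h>

int main(void) {
    // The application does not know how much memory the machine has;
    // it just asks the OS (via the C runtime) for a block.
    size_t request = 64UL * 1024 * 1024;          // ask for 64 MiB
    void *block = malloc(request);

    if (block == NULL) {
        // "Bad things happen": the OS could not satisfy the request.
        fprintf(stderr, "allocation of %zu bytes failed\n", request);
        return 1;
    }
    printf("OS handed us %zu bytes at %p\n", request, block);
    free(block);
    return 0;
}
```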

But surely, with a theoretical 16 exabytes of RAM, you'd never run out?

Well, a famous person long ago supposedly said we'd never need more than 640 KB of RAM.

Because most applications nowadays are greedy (they take as much as the operating system is willing to give), if you ran enough applications on a powerful enough computer, you could theoretically exceed the capacity of physical memory. In that case, virtual memory would be required to make up the difference, backing the excess with disk.
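As a rough illustration (behavior depends on the OS's overcommit policy - Linux, for example, will happily hand out more address space than there is RAM as long as you never touch the pages), here's a C sketch that commits more address space than most machines have physical memory:

```c
#include <stdio.h>
#include <stdlib.h>

int main(void) {
    // Keep asking for 1 GiB blocks without touching them. With virtual
    // memory (and overcommit) the total *committed* size can exceed
    // physical RAM; without virtual memory the requests would fail as
    // soon as physical memory ran out.
    size_t gib = 1024UL * 1024 * 1024;
    size_t total = 0;

    for (int i = 0; i < 64; i++) {     // cap the experiment at 64 GiB
        if (malloc(gib) == NULL)       // leaked on purpose; process exit reclaims it
            break;
        total += gib;
    }
    printf("Committed %zu GiB of address space\n", total / gib);
    return 0;
}
```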

So to answer your question: (in my humble opinion, formed from limited knowledge of the matter) yes, you'd still need to implement virtual memory.


Obviously take all this and do your own research. I'm turning this into a community wiki so others can edit it or just delete it if it is plain wrong :)

OTHER TIPS

It is true that with virtual memory, your programs can commit (i.e. allocate) a total of more memory than is physically available. However, this is only one of many benefits of having virtual memory, and it's not even the most important one. Personally, when I use a PC, I periodically check Task Manager to see how close I come to using my actual RAM. If I constantly go over, I buy more RAM.

The key attribute of all OSes that use virtual memory is that every process gets its own isolated address space. That means you can have a machine with 1 GB of RAM and 50 running processes, yet each one still has 4 GB of addressable memory space (assuming a 32-bit OS). Why is this important? It's not that you can "fake things out" and use RAM that isn't there - as soon as you exceed physical RAM and swapping starts, the virtual memory manager begins thrashing and performance grinds to a halt. The much more important implication is that if each program has its own address space, there's no way it can write to some random memory location and affect another program.
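A small POSIX C sketch (my own illustration, not portable to Windows) makes the isolation visible: after fork(), parent and child see the same virtual address, but a write in one does not affect the other, because the same virtual address maps to different physical pages:

```c
#include <stdio.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int value = 0;   // lives at some virtual address in this process

int main(void) {
    pid_t pid = fork();

    if (pid == 0) {                  // child process
        value = 42;                  // modifies the child's copy only
        printf("child:  &value=%p value=%d\n", (void *)&value, value);
        return 0;
    }
    wait(NULL);                      // let the child finish first
    // Same virtual address, but the parent's copy is untouched:
    printf("parent: &value=%p value=%d\n", (void *)&value, value);
    return 0;
}
```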

That's the main advantage: stability/reliability. In Windows 95, you could write an application that would crash the entire operating system. In W2K+, it is simply impossible for a program that paves all over its own address space to crash anything other than itself.

There are a few other advantages as well. When executables and DLLs are loaded into RAM, the virtual memory manager can detect when the same binary is loaded more than once and make multiple processes share the same physical RAM. At the virtual memory level, it appears as if each process has its own copy, but at a lower level it all gets mapped to one spot. This speeds up program startup and also optimizes memory usage, since each DLL is only loaded once.
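On Linux you can actually see these mappings for yourself - a sketch along these lines (Linux-specific; /proc is not portable) dumps the process's own address space, where shared libraries like libc appear as file-backed mappings shared with every other process that loaded them:

```c
#include <stdio.h>

int main(void) {
    // /proc/self/maps lists every mapping in this process's virtual
    // address space. Shared libraries show up as file-backed ranges;
    // the same physical pages back those mappings in every process
    // that has the library loaded.
    FILE *maps = fopen("/proc/self/maps", "r");
    if (!maps) { perror("fopen"); return 1; }

    char line[512];
    while (fgets(line, sizeof line, maps))
        fputs(line, stdout);

    fclose(maps);
    return 0;
}
```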

Virtual memory managers also allow you to perform file I/O by simply mapping files to pages in the virtual address space. Besides being an interesting alternative way of working with files, this also allows for shared memory segments, where physical RAM with read/write pages is intentionally shared between processes for extremely efficient inter-process communication (IPC).
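Here's a hedged POSIX sketch of the file-mapping side (example.txt is a hypothetical input file; on Windows the equivalents are CreateFileMapping/MapViewOfFile):

```c
#include <stdio.h>
#include <fcntl.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void) {
    // Map a file straight into the virtual address space, then read it
    // through an ordinary pointer - no read() calls needed. The same
    // mechanism (MAP_SHARED on a common object) is what gives you
    // shared-memory IPC between processes.
    int fd = open("example.txt", O_RDONLY);    // hypothetical input file
    if (fd < 0) { perror("open"); return 1; }

    struct stat st;
    if (fstat(fd, &st) < 0 || st.st_size == 0) return 1;  // need a non-empty file

    char *data = mmap(NULL, st.st_size, PROT_READ, MAP_SHARED, fd, 0);
    if (data == MAP_FAILED) { perror("mmap"); return 1; }

    // The file's contents are now just bytes in memory:
    fwrite(data, 1, st.st_size, stdout);

    munmap(data, st.st_size);
    close(fd);
    return 0;
}
```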

With all these benefits, if we consider that most of the time you still want to aim for having more physical RAM than your total commit size, and that modern CPUs support virtual address mapping directly in hardware, the overhead of having a virtual memory manager is actually very small. On the other hand, in environments where many applications from many different vendors run concurrently, per-process address space is priceless.

Virtual memory working

It may not answer your whole question, but it seems like the answer to me.
