Question

I basically wanted to know what exactly a virtual processor is. At IBM's site they define it as:

"A virtual processor is a representation of a physical processor core to the operating system of a logical partition that uses shared processors. "

I understand that if there are x processors, each of which can simultaneously perform two operations, then the system can perform 2x operations simultaneously. But where does a virtual processor fit into this? I also tried looking up the difference between a logical partition and other partition types, such as a primary partition, but wasn't really sure.
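
For example, Python's os.cpu_count() counts these doubled-up "logical processors" rather than physical cores, so on such a machine it reports the 2x figure:

    import os

    # Reports *logical* processors: on a machine with x cores and
    # two-way SMT, the OS typically presents 2x of these.
    print("logical processors:", os.cpu_count())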


Solution

I'd like to draw an analogy between virtual memory and virtual processors.

Start with expectations:

  • A user program is written against a set of expectations about what memory looks like (a nice flat, large, contiguous memory model is best...); see the sketch just after this list.
  • An OS is written against a set of expectations about how the hardware behaves (what CPU protection modes are available, how interrupts arrive and are blocked and handled, how to talk to I/O devices, etc...)
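
Here is a minimal sketch of the first expectation, assuming a Unix system (it uses os.fork(), and CPython's id() as a stand-in for a virtual address): after a fork, parent and child hold different values at the very same virtual address, because the OS maps that address to different physical memory.

    import os

    # Each process believes it owns a flat address space. After a fork,
    # parent and child can store different values at the *same* virtual
    # address, because the OS maps it to different physical pages.
    value = [0]
    addr = id(value)  # CPython detail: id() is the object's address

    pid = os.fork()
    if pid == 0:                      # child
        value[0] = 111
        print(f"child:  addr={addr:#x} value={value[0]}")
        os._exit(0)
    else:                             # parent
        os.waitpid(pid, 0)
        value[0] = 222
        print(f"parent: addr={addr:#x} value={value[0]}")

Both lines print the same address with different contents: each process gets its own private copy of what it believes is one flat memory space.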

Realize that these expectations can be met directly by the hardware, or by an abstraction layer:

  • Virtual memory is a set of (specialized, not found in simple chips) hardware tools and OS services that fool a user program into thinking it has that nice, flat, large, contiguous memory space, even while the OS is busily dividing the real memory into little pieces, storing some of them on disk, bringing others back, and otherwise making a real hash of it. But your code doesn't care. Everything just works.
  • A virtual processor system is a set of (specialized, not found in consumer CPUs) hardware tools and hypervisor services that allow your OS to believe it has direct access to one or more processors with the expected protection modes, interrupts, etc., even though the hypervisor is busily swapping whole OS contexts onto and off of one or more real processors, starting and stopping access to I/O buses, and so on and so forth (as sketched below). But the OS doesn't care. Everything just works.
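
To make the second bullet concrete, here is a toy model (not any real hypervisor's interface, just an illustrative sketch) of a hypervisor round-robin scheduling four virtual processors onto two physical cores:

    from collections import deque

    # Toy model: a "hypervisor" time-slices 4 virtual processors onto
    # 2 physical cores. Each vCPU's "context" is just a counter here;
    # a real hypervisor saves and restores the full CPU state.
    PHYSICAL_CORES = 2
    vcpus = deque({"name": f"vCPU{i}", "pc": 0} for i in range(4))

    for tick in range(3):                     # three scheduling rounds
        running = [vcpus.popleft() for _ in range(PHYSICAL_CORES)]
        for core, vcpu in enumerate(running):
            vcpu["pc"] += 1                   # "execute" one time slice
            print(f"tick {tick}: core {core} runs {vcpu['name']}")
        vcpus.extend(running)                 # context-switch back out

Each virtual processor makes steady progress even though there are only two real cores; from inside a guest OS, its vCPUs simply look like processors.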

The hardware support to do this has only recently become available in "desktop" CPUs, but Big Iron has had it for ages (a quick way to check for it on a Linux desktop is sketched at the end of this answer). It is useful for a couple of reasons:

  1. Protection. In a properly protected OS, it is tough for one process or user to spy on another. But since they can be resident in the same context, it may still be possible. Virtualized OSs are separated by yet another barrier, which makes it that much harder for data to leak and for malicious things to be done.
  2. Robustness. If you can swap OS contexts in and out, you can migrate them from one machine to another, and checkpoint and restart them (as sketched below). This allows for computers that detect failures on their own processors and recover gracefully.
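
A toy illustration of point 2, assuming the whole "OS context" can be reduced to serializable data (real systems snapshot CPU registers, memory pages, and device state; here it is a plain dict):

    import pickle

    # "Checkpoint" a running guest by serializing its context...
    context = {"guest": "lpar1", "pc": 4096, "registers": [0] * 8}
    snapshot = pickle.dumps(context)

    # ...the snapshot could now be copied to another machine...

    # ...where the guest is "restarted" and keeps running.
    restored = pickle.loads(snapshot)
    restored["pc"] += 1
    print(restored)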

These are the things (aside from millions of LOC of heavily debugged, mission-critical code) that have kept people paying for Big Iron.
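
As an aside on the "desktop" CPUs point above: on Linux/x86 you can check whether that hardware support is present by looking for the virtualization flags in /proc/cpuinfo (vmx is Intel VT-x, svm is AMD-V):

    # Linux/x86 only: scan /proc/cpuinfo for hardware-virtualization flags.
    flags = set()
    with open("/proc/cpuinfo") as f:
        for line in f:
            if line.startswith("flags"):
                flags.update(line.split(":", 1)[1].split())

    if "vmx" in flags or "svm" in flags:
        print("hardware virtualization support present")
    else:
        print("no vmx/svm flag found")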
