Question

I mean, if a thread's time quantum is 20 ms, is some of that time (say, 15 ms) spent on the context switch and the rest (5 ms) on actual execution?


The solution

I would dump any OS that spends 75% of its time context-switching on an average basis. I would only expect such a load on a transient basis, when a lot of prioritised threads get made ready in an unfortunate, rapid sequence by I/O interrupts/signals and so cause 'sequential' changes in the set of ready threads.

It would be better if developers/posters stopped using terms like 'quantum' and 'time-slice' so often when referring to preemptive kernels. Except on grossly-overloaded boxes, the tick interrupt is only useful for timing out other blocking calls and sleeping for intervals.

Who came up with 'quantum' as a term for this anyway? A 'quantum' is indivisible, whereas 99.9% of ordinary, household threads are waiting on I/O or on each other for most of the time, run for less than the tick period, are immediately allocated a core and made running when they become ready, and hardly ever get preempted just because their 'time-slice' is up.

'Time-slice' sounds like something from the 60s, not from 2012, with preemptive kernels responding rapidly to driver interrupts/signals and immediately making the thread(s) that were waiting ready/running.
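One rough way to check the claim that most threads block before their slice is up is to ask the kernel how often a process was switched out voluntarily (because it blocked) versus involuntarily (because it was preempted). This is only a minimal sketch, assuming a Linux/POSIX system, using getrusage(); the sleep loop just stands in for I/O-bound work.

    /* Minimal sketch (Linux/POSIX assumed): compare voluntary context switches
     * (the thread blocked) with involuntary ones (the thread was preempted).
     * A mostly sleeping/I/O-bound program typically reports far more voluntary
     * switches than preemptions. */
    #include <stdio.h>
    #include <unistd.h>
    #include <sys/resource.h>

    int main(void)
    {
        /* Trivial "work" that mostly blocks: sleep ~10 ms, ten times. */
        for (int i = 0; i < 10; i++)
            usleep(10 * 1000);

        struct rusage ru;
        if (getrusage(RUSAGE_SELF, &ru) != 0) {
            perror("getrusage");
            return 1;
        }
        printf("voluntary context switches:   %ld\n", ru.ru_nvcsw);
        printf("involuntary context switches: %ld\n", ru.ru_nivcsw);
        return 0;
    }

Run it as-is and the involuntary count will usually stay at or near zero, because the process gives up the CPU on its own long before any tick-based preemption would kick in.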

Other tips

It is an implementation detail.

What happens on Linux is that when the process/thread scheduler assigns a CPU to a thread, that thread is considered running. The code it executes while switching from kernel mode back to user mode is considered kernel code executed on behalf of that process/thread, and hence the context-switch time is accounted as part of the thread/process run time.
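If you want a rough number for that cost on Linux, one common approach is the pipe ping-pong: two processes bounce one byte back and forth, so every round trip forces two context switches. The sketch below assumes POSIX pipes and fork(); the figure it prints includes pipe read/write overhead as well as the switch itself, so treat it as an upper bound, not a precise measurement.

    /* Rough sketch (Linux/POSIX assumed): estimate context-switch cost with a
     * pipe ping-pong. Each round trip forces two switches, so
     * total_time / (2 * iterations) bounds the per-switch cost from above
     * (it also includes pipe read/write overhead). */
    #include <stdio.h>
    #include <unistd.h>
    #include <time.h>
    #include <sys/wait.h>

    int main(void)
    {
        int p2c[2], c2p[2];        /* parent->child and child->parent pipes */
        if (pipe(p2c) != 0 || pipe(c2p) != 0) { perror("pipe"); return 1; }

        const int iterations = 100000;
        char buf = 'x';

        pid_t pid = fork();
        if (pid < 0) { perror("fork"); return 1; }
        if (pid == 0) {            /* child: echo every byte straight back */
            for (int i = 0; i < iterations; i++) {
                if (read(p2c[0], &buf, 1) != 1) break;
                if (write(c2p[1], &buf, 1) != 1) break;
            }
            _exit(0);
        }

        struct timespec t0, t1;
        clock_gettime(CLOCK_MONOTONIC, &t0);
        for (int i = 0; i < iterations; i++) {
            if (write(p2c[1], &buf, 1) != 1 || read(c2p[0], &buf, 1) != 1) {
                perror("ping-pong");
                return 1;
            }
        }
        clock_gettime(CLOCK_MONOTONIC, &t1);
        wait(NULL);

        double ns = (t1.tv_sec - t0.tv_sec) * 1e9 + (t1.tv_nsec - t0.tv_nsec);
        printf("~%.0f ns per switch (including pipe overhead)\n",
               ns / (2.0 * iterations));
        return 0;
    }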

There is no such thing as a 'context switch in .NET', since .NET is not an operating system and all those thread-related CLR libraries are just wrappers around the OS's APIs.

So the question should be something like: "How long does a context switch take on Windows?"

I guess it depends on implementation details.

The tenth edition of "Operating System Concepts" (Silberschatz, Galvin, Gagne), when explaining Round-Robin Scheduling (page 210), says:

"The CPU scheduler picks the first process from the ready queue, sets a timer to interrupt after 1 time quantum, and dispatches the process."

Where the dispatch process, defined on page 203, includes:

  • context switch
  • switch to user mode
  • jumping to the PC

There are also statements that indicate the context switch is included in the quantum or time slice. For example, again for RR, it says:

"Each process must wait no longer than (n-1) x q time units until its next quantum."

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow