Question

Does the RTOS or the processor play the major role in determining context-switch time? What is the relative share of each in determining the context-switch time?

Can anyone answer with respect to the uC/OS-II RTOS?


Solution

I would say both are significant, but it is not really as simple as that:

The actual context-switch time is simply a matter of the number of instruction cycles required to perform the switch; like anything in software, it may be coded efficiently or it may not. On the other hand, all other things being equal, a processor with a large register set will require more instruction cycles to save the context, although having a large register set may make other code far more efficient.
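
As a rough illustration (the numbers are assumptions for the sake of arithmetic, not measurements): saving and restoring 16 registers at one cycle per register is 32 cycles; add perhaps 100 cycles of scheduler bookkeeping and you have about 132 cycles, or roughly 1.3 µs at 100 MHz. A 32-register machine would add a further 32 cycles to every switch.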

A processor may also have an architecture that directly supports fast context switching. For example the lowly 8bit 8051 has four duplicate register banks; so a context switch is little more that a register bank switch (so long as you have not more that four threads), and given that Silicon Labs produce 8051 based devices at 100MIPS, that could be very fast indeed!
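
To make the register-bank idea concrete, here is a minimal sketch in SDCC-style 8051 C; mapping bank numbers 0-3 one-to-one onto threads is an assumption of the sketch, and a real port would do this inside its context-switch code (usually in assembly):

    #include <8051.h>          /* SDCC header declaring PSW and other SFRs */

    /* Select one of the four 8051 register banks by writing RS1:RS0
     * (bits 4:3 of the PSW). With at most four threads, saving the
     * register context can be little more than this single write. */
    static void select_register_bank(unsigned char bank)  /* bank: 0..3 */
    {
        PSW = (PSW & ~0x18) | ((bank & 0x03) << 3);
    }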

More sophisticated processors and operating systems may use an MMU to provide per-thread memory protection. This adds context-switch overhead, but with benefits that may outweigh it; and of course such processors generally also have high clock rates, which helps.

So all in all, the processor speed, the processor architecture, the quality of the RTOS implementation, and the functionality provided by the RTOS may all affect context-switch time. But in the end, the easiest way to improve switch time is almost certainly to increase the clock rate.

Although it is nice to have more headroom, if context-switch time is a make-or-break issue for your project on any reputable RTOS, you should consider the suitability of either your hardware or your design. You should aim for a design that minimises context switches. For example, if an ADC conversion takes 6 µs and a context switch takes 20 µs, then you would do better to busy-wait for the result than to use a conversion-complete interrupt; better yet, use DMA transfers to avoid context switches on single data items where possible.
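
A hedged sketch of that trade-off in C (the ADC register names and addresses below are invented for illustration and do not belong to any real part):

    #include <stdint.h>

    /* Hypothetical memory-mapped ADC registers, for illustration only. */
    #define ADC_CR    (*(volatile uint32_t *)0x40010000u)  /* control register */
    #define ADC_SR    (*(volatile uint32_t *)0x40010004u)  /* status register  */
    #define ADC_DR    (*(volatile uint32_t *)0x40010008u)  /* data register    */
    #define ADC_START 0x01u
    #define ADC_DONE  0x01u

    /* Busy-wait for a ~6 us conversion: cheaper than blocking on a
     * conversion-complete interrupt when a context switch costs ~20 us. */
    uint32_t adc_read_polled(void)
    {
        ADC_CR = ADC_START;                 /* start the conversion */
        while ((ADC_SR & ADC_DONE) == 0u) {
            /* ~6 us spin; sleeping here would cost two ~20 us switches */
        }
        return ADC_DR;
    }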

OTHER TIPS

The uC/OS-II RTOS is written in C, with some very specific sections (typically in assembly) for the processor-specific handling. The context switch is part of those processor-specific sections.

So the context-switch time will be very dependent on the processor selected and on the specific port code used to adapt uC/OS-II to that processor. I believe all of the source code is available, so you should be able to see how much code a context switch needs. I also think uC/OS-II has callbacks (hook functions) that may allow you to add some performance-measuring code.
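
For instance, uC/OS-II calls OSTaskSwHook() on every task switch (exactly how the application supplies the hook depends on the port and on the OS_CPU_HOOKS_EN setting). A minimal instrumentation sketch, where read_cycle_counter() is a hypothetical read of a free-running hardware timer:

    #include "ucos_ii.h"
    #include <stdint.h>

    extern uint32_t read_cycle_counter(void);  /* hypothetical HW timer read */

    volatile uint32_t switch_count;            /* number of context switches */
    volatile uint32_t last_switch_stamp;       /* timestamp of the last one  */

    /* Called by the kernel on every task switch; keep it short, since
     * this code itself adds to the switch time being measured. */
    void OSTaskSwHook(void)
    {
        switch_count++;
        last_switch_stamp = read_cycle_counter();
    }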

To add to what Clifford was saying: context-switch time also depends on the conditions that trigger the switch, so it mainly depends on the benchmark.

Depending on the RTOS implementation, in some cases it is possible to switch directly to the first waiting process, bypassing the scheduler altogether.

This of course gives a huge boost in some benchmarks.
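
A hedged pseudo-C sketch of the idea (task_t, current_task, switch_context() and schedule() are all illustrative names, not taken from any particular kernel):

    typedef struct task task_t;               /* illustrative task control block */
    struct task { int priority; };

    extern task_t *current_task;                          /* assumed kernel global    */
    extern void switch_context(task_t *from, task_t *to); /* assumed low-level switch */
    extern void schedule(void);                           /* assumed full scheduler   */

    /* Deliver a signal: if the waiter outranks the caller, hand the CPU
     * to it directly and skip the scheduler pass; otherwise reschedule. */
    void signal_send(task_t *waiter)
    {
        if (waiter && waiter->priority > current_task->priority) {
            task_t *from = current_task;
            current_task = waiter;
            switch_context(from, waiter);   /* direct hand-off */
        } else {
            schedule();
        }
    }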

For example, we made a benchmark that measures the overhead (in µs) required to deliver a signal and switch to the high-priority process, varying the kernel configuration and the target architecture: http://www.bertos.org/discover/context-switch-overhead
