Very long time for context switch on SH2

Ralf Corsepius ralf_corsepius at
Mon Mar 22 12:38:26 UTC 2004

On Mon, 2004-03-22 at 11:25, James Yates wrote:
> I am working on a custom BSP which has an SH2 7145f. I have performed
> a basic port for this SH7145 
> cpu based on the SH7045 already in the tree. The 2 chips are very
> similar. However, after some 
> analysis, the shortest time I seem to be able to measure for a context
> switch is 192us, which I think is pretty slow.

The code for the context switch can be found in __CPU_Context_switch.
Counting the asm instructions in there gives you a rough estimate of
the CPU cycles required for a context switch (ca. 50 cycles).

>  Am I wrong?
Yes, 192e-6 s would be pretty slow, but commenting further on this
requires more knowledge about your actual setup and how you measured
this figure.

> I have been mucking around with the system clock to see if by increasing the ticks per second, the 
> context switch time could be reduced but I can't get any better than 192us.
How did you measure these timings? Normally, the context switching time
should be inversely proportional to the CPU clock rate; the clock-tick
rate should not affect it.

The time between task switches is application and setup dependent.

> Does anyone have any ideas?
One crucial point on SH1s and SH2s is their memory. 

Unlike many other CPUs, they can be equipped with non-consecutive
memory regions (the AMOS boards the gensh1 and gensh2 BSPs are derived
from had such non-consecutive memory), to which different wait
states/memory timings etc. can apply. This can have unexpected
influences on task switching.

Another factor to consider is IRQs, IO and debugging stubs.
Especially the latter two can distort measurements, because they often
apply "busy waiting"/"polling on IO registers".

Also, debugging stubs are often implemented as ISRs attached to a very
high-priority interrupt, which can disturb RTEMS.

>  Is there an Idle task that always runs?
IIRC, yes. 

>  Could this be causing any problems?
Theoretically yes, but not to the extent you seem to be experiencing.
