Time spent in ticks...

Joel Sherrill joel at rtems.org
Thu Oct 13 15:38:25 UTC 2016

On Thu, Oct 13, 2016 at 3:51 AM, Jakob Viketoft <
jakob.viketoft at aacmicrotec.com> wrote:

> Hello everyone,
> We're running on an or1k-based BSP off of 4.11 (with the patches I've
> forwarded in February last year) and have seen some strange sluggishness in
> the system. When measuring using a standalone peripheral clock, I can see
> that we spend between 0.8 - 1.4 ms just handling the tick. This sounds a
> bit absurd to me and I just wanted to send out a couple of questions to see
> if anyone has an inkling of what is going on. I haven't been able to test
> with the or1k-simulator (and the generic_or1k BSP) as it won't easily
> compile with a newer gcc, but I'm running on real hardware. The patches I
> made don't sound like big hold-ups to me either, but a second pair of eyes
> is of course always welcome.
> To the questions:
> 1. On the or1k-cpu RTEMS bsp, timer ticks are using the cpu-internal
> timer, which when timing out results in a timer exception. Clock_isr is
> installed as the exception handler for this and thus has complete control
> of the CPU for its duration. Is this how the Clock_isr is intended to run,
> i.e. no other tasks or interrupts are allowed during tick handling? Just
> want to make sure there is no mismatch between the or1k setup in RTEMS and
> how Clock_isr is intended to run.
> 2. Running a very simple test application with three tasks, I delved into
> the _Timecounter_Tick part of the Clock_isr and I have seen that tc_windup()
> is using ~340 us quite consistently and _Watchdog_Tick() is using ~630 us when
> all tasks are started. What numbers can be seen at other systems, i.e. what
> should I expect as normal here? Any ideas on what can be wrong? I'll keep
> digging and try to discern any individual culprits as well.
I don't have an or1k handy, so I ran on a sparc/erc32 simulator.
It is a SPARC V7 at 15 MHz.

These times are in microseconds and based on the tmtests.
Specifically tm08 and tm27.

(1) rtems_clock_tick: only case - 52
(2) rtems interrupt: entry overhead returns to interrupted task - 12
(3) rtems interrupt: exit overhead returns to interrupted task - 4
(4) rtems interrupt: entry overhead returns to nested interrupt - 11
(5) rtems interrupt: exit overhead returns to nested interrupt - 3

The clock tick test has 100 tasks, but it looks like they are blocked
without a timeout.

Your times look WAY too high. Maybe the interrupt is stuck on and
not being cleared.

On the erc32, a nominal "nothing to do" clock tick would be (1)+(2)+(3) from
above, or 52 + 12 + 4 = 68 microseconds. At 15 MHz that is 68 * 15 = 1020
machine cycles. So at a higher clock rate, it should take even less time.

My gut feeling is that something is wrong with the ISR handler
and it is stuck. But the times are definitely way too high.


> Oh, and we use 10000 as base for the tick quantum.
> (If anyone is interested in looking at our code, bsps and toolchains can
> be downloaded at repo.aacmicrotec.com.)
> Best regards,
>       /Jakob
> Jakob Viketoft
> Senior Engineer in RTL and embedded software
> ÅAC Microtec AB
> Dag Hammarskjölds väg 48
> SE-751 83 Uppsala, Sweden
> T: +46 702 80 95 97
> http://www.aacmicrotec.com
> _______________________________________________
> devel mailing list
> devel at rtems.org
> http://lists.rtems.org/mailman/listinfo/devel