Patch for PR bsps/1495 was Re: clock_get_uptime

Gedare Bloom gedare at
Sat Feb 20 23:46:53 UTC 2010

Bug description:

Clock_isr calls Clock_driver_support_at_tick() on every hardware ISR.
In the pc386 BSP, this updates pc586_tsc_at_tick on every clock
interrupt, whether or not the BSP is configured with multiple ISRs per
logical clock tick. As a result, the value returned by
bsp_clock_nanoseconds_since_last_tick_tsc() is non-monotonic within
the span of a single logical clock tick, which is what causes the bug:
the uptime clock relies on the nanoseconds since the last tick to get
precise timing between logical clock tick events.
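To make the failure mode concrete, here is a minimal arithmetic model
(the names and the cycles_per_ns conversion are illustrative, not the
real pc386 driver code): within one logical tick, uptime is the tick's
base time plus the TSC cycles elapsed since the TSC value recorded at
that tick. If an extra hardware ISR re-records the TSC while the
tick-based base time has not yet advanced, a later read comes out
smaller than an earlier one.

```c
#include <stdint.h>

/* Illustrative model only -- not the real driver code.
 * uptime = base_ns + (tsc_now - tsc_at_tick) / cycles_per_ns */
uint64_t
uptime_ns(uint64_t base_ns, uint64_t tsc_now,
          uint64_t tsc_at_tick, uint64_t cycles_per_ns)
{
  return base_ns + (tsc_now - tsc_at_tick) / cycles_per_ns;
}
```

With base_ns held at 1000000 (the logical tick not yet processed),
uptime_ns(1000000, 3000, 1000, 1) gives 1002000; after a mid-tick ISR
records tsc_at_tick = 3000, the later read
uptime_ns(1000000, 3500, 3000, 1) gives only 1000500 -- time moved
backwards.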

The attached patch adds a check in Clock_driver_support_at_tick() for
whether multiple ISRs occur per logical clock tick. If so,
pc586_tsc_at_tick is updated only when Clock_driver_isrs is 0 (the
same mechanism used in the shared clock driver shell code).
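The guarded hook can be sketched roughly as follows. This is a model,
not the literal patch: the countdown is passed in as a parameter for
clarity, whereas the real hook reads the driver's Clock_driver_isrs
variable directly.

```c
#include <stdint.h>

/* Model of the patched hook, after the CLOCK_DRIVER_ISRS_PER_TICK
 * mechanism in the shared clock driver shell. */
uint64_t tsc_at_tick_model;   /* stands in for pc586_tsc_at_tick */

void
at_tick_model(uint32_t clock_driver_isrs, uint64_t tsc_now)
{
  /* With several hardware ISRs per logical tick, Clock_driver_isrs
   * counts down; only the ISR that reaches 0 advances the logical
   * tick, so only that one records the TSC. */
  if (clock_driver_isrs == 0)
    tsc_at_tick_model = tsc_now;
}
```

Intermediate ISRs (countdown nonzero) leave the recorded TSC alone, so
the nanoseconds-since-last-tick value stays monotonic across the whole
logical tick.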

Test cases: see the bug report for an init.c that reproduces the bug.

The patch was tested on QEMU, with the pc386 BSP, both with and
without nanosecond timing.

I uploaded this patch to the Bugzilla as well.
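As a sanity check on the gdb output quoted below: the diff of
{-1, 999624700} is just the normal borrow in timespec subtraction when
stop < start. A plain-C model of _Timespec_Subtract (which computes
end - start) reproduces it:

```c
#include <time.h>

/* Plain-C model of _Timespec_Subtract(start, end, result): computes
 * end - start, borrowing a second when end's nanoseconds are smaller
 * than start's. */
void
timespec_subtract_model(const struct timespec *start,
                        const struct timespec *end,
                        struct timespec *result)
{
  if (end->tv_nsec < start->tv_nsec) {
    result->tv_sec  = end->tv_sec - start->tv_sec - 1;
    result->tv_nsec = 1000000000L + end->tv_nsec - start->tv_nsec;
  } else {
    result->tv_sec  = end->tv_sec - start->tv_sec;
    result->tv_nsec = end->tv_nsec - start->tv_nsec;
  }
}
```

Feeding in the quoted start (0 s, 200701252 ns) and stop
(0 s, 200325952 ns) yields exactly tv_sec = -1, tv_nsec = 999624700.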


On Sat, Feb 20, 2010 at 10:08 AM, Joel Sherrill
<joel.sherrill at> wrote:
> On 02/19/2010 04:35 PM, Gedare Bloom wrote:
>> Hi,
>> I've written some code to try generating a synthetic load with a
>> (somewhat) predictable execution time. Based on a brief chat on IRC
>> with Joel, I decided to try using the clock_get_uptime routine.  I see
>> that uptime is updated during the rtems_clock_tick routine, which is
>> typically called in the clock ISR provided by each individual BSP.
>> I implemented a load generating loop as part of an RTEMS app, using
>> the CVS head and pc386 BSP in QEMU.  I observed some interesting
>> behavior when calling uptime in a tight loop.  In particular, I get
>> some backwards time-travel.
>> Here is a snippet of the code:
>>     rtems_clock_get_uptime(&start_ts);
>>     while (FOREVER) {
>>       rtems_clock_get_uptime(&stop_ts);
>>       _Timespec_Subtract(&start_ts, &stop_ts, &diff_ts);
>>       if (_Timespec_To_ticks(&diff_ts) >= Tick_Count[argument]) break;
>>     }
>> here, start_ts, stop_ts, and diff_ts are all struct timespec, and
>> Tick_Count[argument] is a parametrized integer value.  What is
>> happening is that the loop appears to be ending prematurely.
>> An example of the values of start_ts, stop_ts, and diff_ts are:
>> (gdb) p start_ts
>> $4 = {
>>   tv_sec = 0,
>>   tv_nsec = 200701252
>> }
>> (gdb) p stop_ts
>> $5 = {
>>   tv_sec = 0,
>>   tv_nsec = 200325952
>> }
>> (gdb) p diff_ts
>> $6 = {
>>   tv_sec = -1,
>>   tv_nsec = 999624700
>> }
>> I can see that stop_ts < start_ts, and that diff_ts reflects this
>> negative amount.  I'm trying to figure out why this is happening.  I
>> can trace stop_ts, and I am able to observe that it does in fact
>> increase, up to:
>> (gdb) p stop_ts
>> $27 = {
>>   tv_sec = 0,
>>   tv_nsec = 208797738
>> }
>> But then the next time I read stop_ts from clock_get_uptime, I get the
>> value 200325952.  Does anyone know why this might be happening?
> I have three guesses.  The first is the unlikely case where
> the tsc register from qemu isn't accurate.  But I doubt
> that.
> The second is that the code in score/src/coretodgetuptime.c
> or something it calls has a math mistake.
> The third is that even though we haven't processed a
> clock tick interrupt, the counter has "run to 0" or
> whatever that means to the pc386 clock driver.  A
> clock tick interrupt is pending and the driver math
> for nanoseconds since last tick isn't right in that case.
> If it is right most of the time and fails sometimes, then
> the 3rd case seems likely.
>> Thanks,
>> Gedare
>> _______________________________________________
>> rtems-users mailing list
>> rtems-users at
-------------- next part --------------
A non-text attachment was scrubbed...
Name: rtems-pc386-ckinit.patch
Type: text/x-patch
Size: 1253 bytes
Desc: not available
URL: <>
-------------- next part --------------
A non-text attachment was scrubbed...
Name: ChangeLog
Type: application/octet-stream
Size: 66 bytes
Desc: not available
URL: <>
