[PATCH] Correct a race condition between the e500 core decrementer wrapping and the tick interrupt being handled

Nick Withers nick.withers at anu.edu.au
Fri Dec 12 02:18:12 UTC 2014


On Wed, 2014-12-10 at 08:42 +0100, Sebastian Huber wrote:
> On 10/12/14 02:53, Nick Withers wrote:
> > On Wed, 2014-12-03 at 08:09 +0100, Sebastian Huber wrote:
> >> Hello Nick,
> >>
> >> to disable TCR[ARE] is not the right fix.  You have to check the
> >> TSR[DIS] register in the nanoseconds extension.  See for example
> >> lpc-clock-config.c for an example.
> > Hi Sebastian,
> >
> > Thanks for your help, 'fraid I need to lean on you a little more...
> >
> > I don't understand how the proposed solution (or that in
> > lpc-clock-config.c) can work properly in the case where the nanoseconds
> > extension could be queried multiple times with an interrupt lock held
> > (which I believe disables interrupts), preventing the tick from being
> > incremented, as I believe happens in spnsext01.
> >
> > lpc-clock-config.c's lpc_clock_nanoseconds_since_last_tick(), if I'm
> > understanding it correctly, says "Ah, there's an interrupt pending, so
> > we need to add another tick period to our current timer value". Cool
> > bananas.
> 
> Yes, this is exactly how it works.
> 
> > But what if it's called again with the same interrupt still
> > pending? The timer counter may have wrapped again and we could return an
> > earlier time.
> >
> > ...Couldn't we? What am I missing?
> >
> > Cheers :-)
> 
> If you disable interrupts for more than one clock tick interval, then 
> you have a bigger problem than a non-monotonic uptime.

Yeah, that's probably fair...

> If you disable the auto-reload, then the decrementer stops, so you have 
> a time freeze.

I suppose that didn't worry me as much as non-monotonic time. Probably
because I'm a selfish so-and-so and not personally writing a real-time
application with strict periodic tasking requirements (and don't
particularly care about the uptime drifting) :-P

A new patch is attached, thanks for your help and guidance!

It's a little more convoluted than might be necessary because I wasn't
confident that every relevant BSP's Clock_Decrementer_value (some of
which are set at run-time) was small enough to avoid overflowing the
uint32_t tmp variable in intermediate calculations if I simply doubled
the range of values it might need to hold (to cover the extra clock
tick period that might now be returned). It potentially adds another
run-time division this way, though. I could just use a uint64_t...?


P.S.: For my education (because no one has anything better to do,
right?)... Is there a more general solution? Maybe servicing the tick
increment in the nanoseconds call if an interrupt's been generated (is
servicing an interrupt out-of-band "usually" legitimate across
architectures / PICs?), but deferring e.g. scheduling until interrupts
are re-enabled?

If not, I guess it should be safe to guarantee / assume that interrupts
are never blocked for longer than x cycles (when users can't disable
them themselves)?

Would the FreeBSD clock code you mentioned earlier handle it any better?


P.P.S.: Clock_driver_nanoseconds_since_last_tick() also seems broken in
the non-Book-E case if the decrementer value is (one's complement)
negative.
-- 
Nick Withers

Embedded Systems Programmer
Department of Nuclear Physics, Research School of Physics and Engineering
The Australian National University (CRICOS: 00120C)
-------------- next part --------------
A non-text attachment was scrubbed...
Name: e500_decrementer.patch
Type: text/x-patch
Size: 2340 bytes
Desc: not available
URL: <http://lists.rtems.org/pipermail/devel/attachments/20141212/65a73335/attachment-0002.bin>
