The eCos, QNX, ChorusOs irq handling API
Till Straumann
strauman at SLAC.Stanford.EDU
Thu Feb 20 22:14:24 UTC 2003
Valette Eric wrote:
> Till Straumann wrote:
>
>>
>>> But the section is usually
>>
>
>
>>
>>
>> LOCK(pic)
>
> -1) Get the current PIC masks. Store them on the stack
>
>> 0) Mask this vector's bit
>>
>>> 1) Mask other lower priority interrupts by manipulating PIC
>>> masks
>>
>>
>> UNLOCK(pic)
>>
>>> 2) execute the handler,
>>
>>
>> LOCK(pic)
>>
>>> 3) Restore original pic masks
>>
>
> That is why I have added the -1)
>
>>
>> UNLOCK(pic)
>>
>>>
>>> This cannot be guaranteed to be *ALWAYS* short...
>>>
>>
>> No, but you'd only mutex steps 1 and 3 individually (I've added
>> the LOCK/UNLOCKs above). If the only
>> permitted action for the handler is re-enabling its own vector,
>> or the priorities below itself (the only thing that probably makes
>> sense), everything is fine (otherwise, the PIC driver has to manage
>> a stack of masks tracking the ISR nesting level).
>> Of course, the handler has to use an API
>> call to do that, so it's not manipulating the PIC directly.
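
On a single CPU, the locked sequence above might be sketched roughly
like this (a hypothetical sketch only - pic_lock, handle_irq, user_isr
and the "lower bits = lower priority" convention are all made-up
illustrations, not a real API):

```c
#include <assert.h>
#include <pthread.h>
#include <stdint.h>

static pthread_mutex_t pic_lock = PTHREAD_MUTEX_INITIALIZER;
static uint32_t pic_mask = 0;          /* bit set = vector masked      */

static void user_isr(int vector) { (void)vector; /* handler body */ }

void handle_irq(int vector)
{
    uint32_t saved;

    pthread_mutex_lock(&pic_lock);     /* LOCK(pic)                    */
    saved = pic_mask;                  /* -1) save current PIC masks   */
    pic_mask |= 1u << vector;          /*  0) mask this vector's bit   */
    pic_mask |= (1u << vector) - 1u;   /*  1) mask lower priorities
                                              (lower bits assumed to be
                                              lower priority here)     */
    pthread_mutex_unlock(&pic_lock);   /* UNLOCK(pic)                  */

    user_isr(vector);                  /*  2) execute the handler      */

    pthread_mutex_lock(&pic_lock);     /* LOCK(pic)                    */
    pic_mask = saved;                  /*  3) restore original masks   */
    pthread_mutex_unlock(&pic_lock);   /* UNLOCK(pic)                  */
}
```

Note that only steps -1/0/1 and step 3 hold the lock; the handler
itself (step 2) runs with the lock released, so the lock is only ever
held for a few register accesses.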
>
>
> But my point is that if a second proc sharing the PIC executes the
> same sequence while the first proc is at stage 2), then, because the
> interrupt is not currently masked, even with the locks you will end
> up restoring a PIC value that is wrong: some interrupt will remain
> masked. Or did I miss something?
Well - when you have multiple CPUs sharing one PIC, your approach
of handling the masks doesn't work anyway, even if the ISR is forbidden
to touch the mask. Take this scenario:
CPU 1 is interrupted, saves the current mask (M0) on its ISR stack
CPU 1 sets a new mask (M1) to disable the interrupter + lower priorities
CPU 1 does EOI
CPU 1 re-enables interrupts at the processor
CPU 1 starts work on the user ISR
CPU 2 gets a hi priority irq, saves the current mask (M1) on ITS ISR stack
CPU 2 sets a new mask (M2) to disable the hi PRI interrupter + lower prios
CPU 2 does EOI
CPU 2 re-enables interrupts at the processor
CPU 2 starts working on the user handler for the hi priority IRQ
HERE comes the race condition: assume CPU1 terminates work before CPU2
CPU 1 terminates user handler
CPU 1 disables interrupts
CPU 1 restores mask (M0) from its stack
CPU 1 re-enables interrupts
at this point, the ORIGINAL mask (M0) is in place, masking NOTHING.
Then, eventually,
CPU 2 terminates user handler
CPU 2 disables interrupts
CPU 2 restores mask (M1) from its stack
CPU 2 re-enables interrupts
Finally: the system ends up with mask M1 still in place, i.e.
interrupts at the priority level of the first interrupter remain
masked forever.
Well - this happens because you (implicitly) use the ISR stack
for keeping track of nested PIC masks. On an SMP, each CPU has its
own ISR stack, and that is not the proper place for managing the
save/restore of masks. Instead of saving the PIC masks in automatic
variables, you'd have to save them in something like
static uint32_t masks[MAX_NESTING_LEVEL];
and let the ISR save to
masks[irqNestingLevel];
(assuming that multiple CPUs share one nesting level and one PIC).
And still, proper management of that data structure is tricky,
because an ISR at a lower nesting level may terminate on one CPU
_before_ a more deeply nested ISR terminates on a second CPU
;-)
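A minimal sketch of that shared structure (all names here - irq_enter,
irq_exit, MAX_NESTING_LEVEL - are illustrative; on real SMP both the
array and the nesting level would have to be updated under the PIC
lock):

```c
#include <assert.h>
#include <stdint.h>

#define MAX_NESTING_LEVEL 8

/* One mask stack shared by ALL CPUs, indexed by a single global
 * nesting level - not per-CPU automatic variables. */
static uint32_t masks[MAX_NESTING_LEVEL];
static int      irqNestingLevel = 0;
static uint32_t pic_mask = 0;

void irq_enter(uint32_t new_mask)
{
    masks[irqNestingLevel++] = pic_mask;  /* save previous mask       */
    pic_mask = new_mask;                  /* install new mask         */
}

void irq_exit(void)
{
    pic_mask = masks[--irqNestingLevel];  /* restore saved mask       */
}

/* NOTE: this still assumes strictly LIFO termination; as noted above,
 * a less deeply nested ISR finishing on one CPU before a more deeply
 * nested ISR on another CPU breaks this simple stack discipline. */
```

Exercising the LIFO case: two nested irq_enter() calls followed by two
irq_exit() calls bring pic_mask back to its original value.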
But all of these are not really API issues but implementation
details...
>
>> It _is_ done immediately: by the Universe hardware - beyond software
>> control :-)
>
>
> Great. BTW you did not answer about the possibility to mask the current
> openpic irq via a mask and issue the openpic acknowledge ASAP. I don't
> have the OPENPIC doc anymore, but from memory I did not find a means to
> do it.
I didn't answer because I didn't have the doc ready, either - but
I got your point.
> Maybe worth investigating because, as you pointed out, the code is
> suboptimal. You cannot fight against a broken hardware API :-)
>
sure
-- Till