FW: Interrupt handlers and RTEMS Queues (Breaking the Rules)

John Bebbington Bebbington.John at litef.de
Wed May 9 09:09:25 UTC 2001



-----Original Message-----
From: John Bebbington [mailto:Bebbington.John at litef.de]
Sent: Wednesday, 9 May 2001 11:07
To: Nick.SIMON at syntegra.com
Cc: rtems-user at oarcorp.com
Subject: RE: Interrupt handlers and RTEMS Queues (Breaking the Rules)


Hi Nick, thanks for your comments.

The queue service task:
1) is the highest priority,
2) is not blocked at any other synchronisation points, and
3) is not busy.

BUT:

As the days go by and the tests and experiments mount up, I am realising that
the RTEMS queue is not the problem.
The problem is simply performance!

Yesterday we removed the RTEMS queue and handled all processing in the ISR,
and now the ISRs are showing large jitter. We have to re-organise the data
processing priorities of several multi-rate messages so that the situation
cannot arise in which all messages must be handled at the same time. I hope I
can get this to work through re-organisation and processing optimisation.


Again I would like to thank Joel Sherrill and the RTEMS users for their help
and suggestions. Maybe some day I will find a "real" problem in RTEMS, but
until then the score is 2:0 to OARcorp ;-)

kind regards
John Bebbington.


> -----Original Message-----
> From: Nick.SIMON at syntegra.com [mailto:Nick.SIMON at syntegra.com]
> Sent: Tuesday, 8 May 2001 18:16
> To: Bebbington.John at litef.de
> Subject: RE: Interrupt handlers and RTEMS Queues (Breaking the Rules)
>
>
> John,
>
> I would have thought your queue would get serviced PDQ.  Is it possible
> that:
> (1) your queue service task is not the highest priority,
> (2) it is blocked waiting for something else, or
> (3) it is busy in some other respect?
>
> 3.5 ms seems a long time to me, even for a 'mere' 68360 (Eh, when I were a
> lad we got Z80As and were grateful!)
>
> Events are faster, but they aren't counting constructs.  You
> could use one,
> but you'd have to implement your own queue (this is how the network Rx
> daemon works) - I suspect your problem isn't that RTEMS queues take a long
> time.
>
> -- Nick Simon
>
> > -----Original Message-----
> > From: John Bebbington [mailto:Bebbington.John at litef.de]
> > Sent: 07 May 2001 16:21
> > To: joel.sherrill
> > Cc: rtems-users
> > Subject: RE: Interrupt handlers and RTEMS Queues (Breaking the Rules)
> >
> >
> > Thanks for your comments Joel,
> >
> > Problem Background information:
> > ===============================
> > o RTEMS Target is a MC68360 running at 25 MHz.
> > o RTEMS System tick of 3.125 ms.
> >
> > We have a 400 Hz (2.5 ms) interrupt to handle, and the data (max 40 bytes)
> > from this 400 Hz ISR must be sent to a managed output driver via an RTEMS
> > Queue (the driver must be managed because it has multiple clients who want
> > to write data to the same output driver).
> >
> > The current SW version handles the ISR frequency okay, but we are seeing
> > that the RTEMS Queue sometimes delays up to 3.5 ms before the data is
> > processed. This causes data latency which is not acceptable.
> >
> > There are other (background) tasks in the system, but the "blocked" task
> > which is waiting for data from the RTEMS queue has the highest priority.
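> >
> > In outline the current set-up looks roughly like this (a simplified sketch
> > only; the names are invented and error checking is omitted):
> >
> >   #include <rtems.h>
> >
> >   #define MSG_BYTES 40
> >
> >   static rtems_id output_queue_id;  /* created elsewhere with
> >                                        rtems_message_queue_create() */
> >
> >   /* 400 Hz interrupt handler: copy the data and post it to the queue */
> >   rtems_isr data_400hz_isr( rtems_vector_number vector )
> >   {
> >     unsigned char buffer[ MSG_BYTES ];
> >
> >     /* ... read up to MSG_BYTES from the hardware into buffer ... */
> >     (void) rtems_message_queue_send( output_queue_id, buffer, MSG_BYTES );
> >   }
> >
> >   /* highest priority task: blocks on the queue, feeds the output driver */
> >   rtems_task output_manager( rtems_task_argument argument )
> >   {
> >     unsigned char    buffer[ MSG_BYTES ];
> >     rtems_unsigned32 size;
> >
> >     for ( ;; ) {
> >       (void) rtems_message_queue_receive( output_queue_id, buffer, &size,
> >                                           RTEMS_WAIT, RTEMS_NO_TIMEOUT );
> >       /* ... write "size" bytes to the managed output driver ... */
> >     }
> >   }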
> >
> >
> > Comments:
> > =========
> >
> > From what you have said, and after looking further at the source code, I
> > can see that it is true to say that a call to the directive
> > "rtems_message_queue_send" from a non-RTEMS-registered ISR would
> > potentially cause an IMMEDIATE context switch, because of the call to
> > _Thread_Enable_dispatch() after _CORE_message_queue_Send has performed
> > its write.
> >
> > I can also see from the implementation that the directive
> > "rtems_message_queue_send" should not be called from an ISR (as the user
> > manual explicitly says, in BOLD font) unless that ISR has first been
> > registered with RTEMS.
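> >
> > For completeness, registering the handler the supported way would look
> > something like this (just a sketch; the vector number and names are
> > invented):
> >
> >   #include <rtems.h>
> >
> >   #define DATA_IRQ_VECTOR 64          /* invented vector number */
> >
> >   rtems_isr data_400hz_isr( rtems_vector_number vector );
> >
> >   void install_data_isr( void )
> >   {
> >     rtems_isr_entry old_handler;
> >
> >     /* letting RTEMS install the handler is what makes it defer the
> >        dispatch until the end of the interrupt */
> >     (void) rtems_interrupt_catch( data_400hz_isr, DATA_IRQ_VECTOR,
> >                                   &old_handler );
> >   }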
> >
> > Thanks for your suggestion to use (at my own risk) the internal routines
> > _Thread_Disable_dispatch and _Thread_Unnest_dispatch, which would
> > essentially prevent the excessive "house-keeping" generated by the
> > interrupt frequency.
> >
> > I will try using this to see if I can improve the situation.
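> >
> > If I have understood the suggestion correctly, the send in the ISR would
> > then be wrapped roughly like this (untested, at our own risk; the extern
> > declarations are shown only for illustration, in a real build they come
> > from the RTEMS internal headers):
> >
> >   #include <rtems.h>
> >
> >   /* internal score routines -- NOT part of the public API */
> >   extern void _Thread_Disable_dispatch( void );
> >   extern void _Thread_Unnest_dispatch( void );
> >
> >   extern rtems_id output_queue_id;
> >
> >   void send_from_raw_isr( unsigned char *data, rtems_unsigned32 length )
> >   {
> >     _Thread_Disable_dispatch();     /* "don't schedule until I say so"  */
> >     (void) rtems_message_queue_send( output_queue_id, data, length );
> >     _Thread_Unnest_dispatch();      /* willing to schedule, but not
> >                                        calling the scheduler here       */
> >   }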
> >
> > QUESTION:
> > =========
> > Any ideas as to why the queue should delay so long? My understanding is
> > that each ISR, after writing its data to the queue, will cause a
> > reschedule, and thus the highest priority task will wake and process the
> > data without delay.
> >
> > Does anyone have any suggestions (apart from "get a faster processor!")
> > as to how I could cleanly handle this kind of performance problem?
> >
> > Would using other RTEMS patterns be the solution? Events perhaps (are
> > they really that much quicker?), or semaphores?
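> >
> > By "events" I mean something along these lines: the ISR drops the data
> > into a small private ring buffer and just signals the task with an event.
> > (Untested sketch, all names invented; a full ring simply drops the
> > sample.)
> >
> >   #include <rtems.h>
> >   #include <string.h>
> >
> >   #define RING_SLOTS 16                 /* > worst-case backlog          */
> >   #define MSG_BYTES  40
> >
> >   static volatile unsigned char ring[ RING_SLOTS ][ MSG_BYTES ];
> >   static volatile unsigned int  ring_head;   /* written only by the ISR  */
> >   static volatile unsigned int  ring_tail;   /* written only by the task */
> >   static rtems_id               daemon_id;   /* set at task creation     */
> >
> >   /* producer side, called from the ISR */
> >   void post_sample( const unsigned char *data )
> >   {
> >     unsigned int next = ( ring_head + 1 ) % RING_SLOTS;
> >
> >     if ( next != ring_tail ) {               /* room left in the ring    */
> >       memcpy( (void *) ring[ ring_head ], data, MSG_BYTES );
> >       ring_head = next;
> >     }
> >     (void) rtems_event_send( daemon_id, RTEMS_EVENT_1 );
> >   }
> >
> >   /* consumer side: block on the event, then drain everything pending */
> >   rtems_task output_daemon( rtems_task_argument argument )
> >   {
> >     rtems_event_set out;
> >     unsigned char   local[ MSG_BYTES ];
> >
> >     for ( ;; ) {
> >       (void) rtems_event_receive( RTEMS_EVENT_1,
> >                                   RTEMS_WAIT | RTEMS_EVENT_ANY,
> >                                   RTEMS_NO_TIMEOUT, &out );
> >       while ( ring_tail != ring_head ) {
> >         memcpy( local, (void *) ring[ ring_tail ], MSG_BYTES );
> >         ring_tail = ( ring_tail + 1 ) % RING_SLOTS;
> >         /* ... hand "local" to the managed output driver ... */
> >       }
> >     }
> >   }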
> >
> >
> >
> >
> >
> > Many thanks
> >
> > John Bebbington
> >
> >
> >
> > > -----Original Message-----
> > > From: Joel Sherrill [mailto:joel.sherrill at OARcorp.com]
> > > Sent: Monday, 7 May 2001 14:14
> > > To: Bebbington.John at litef.de
> > > Cc: rtems-users at OARcorp.com
> > > Subject: Re: Interrupt handlers and RTEMS Queues (Breaking the Rules)
> > >
> > >
> > >
> > >
> > > John Bebbington wrote:
> > > >
> > > > Hello,
> > > >
> > > > A quick question about the effect of not following the restrictions
> > > > given in the RTEMS users manual regarding Interrupt Service Routines
> > > > (ISRs) using RTEMS queues and NOT returning through the RTEMS
> > > > scheduler on RTE.
> > > >
> > > > Given the following scenario:
> > > > =============================
> > > > There are 2 ISRs (i.e. level 5 and level 6 hardware priority) that
> > > > write data asynchronously to the same RTEMS Queue, but these ISRs
> > > > were not configured in the vector table by using the RTEMS directive
> > > > "rtems_interrupt_catch". In addition there is a task waiting to read
> > > > from the same Queue but blocked when the queue is empty.
> > >
> > > One thought comes to mind ... don't do that. :)
> > >
> > > If you do not install the ISRs using rtems_interrupt_catch, then RTEMS
> > > does not know about the ISRs.  Consequently, there is no task scheduling
> > > at the end of the ISR and no notion of nest level to properly account
> > > for interrupt nesting.  The most obvious problem in this scenario is
> > > that a message send from either of these ISRs that readies a task will
> > > think that it is OK to preemptively context switch IMMEDIATELY.  This
> > > is BAD!!
> > >
> > > When you ready a task or tasks in an ISR, the preemptive context switch
> > > must be deferred until the end of the ISR.  Context switches occur
> > > between tasks -- not between an ISR and a task.  So even when you think
> > > you go preemptively from an ISR to a new task, there is actually a
> > > short period of time when the old task runs.  This is VERY normal --
> > > even ticker does this.  The execution order is:
> > >
> > >   Init -> TA1  (via delete self)
> > >   TA1  -> TA2  (via delay)
> > >   TA2  -> TA3  (via delay)
> > >   TA3  -> IDLE (via delay)
> > >   ... 5 seconds pass ...
> > >   IDLE -> TA1  (at clock tick)
> > >
> > > Note that the call to rtems_clock_tick in the ISR actually unblocked
> > > TA1 and caused the preempt but really IDLE switched to TA1.  This is
> > > the "_ISR_Dispatch" path in the cpu_asm.S files.
> > >
> > > > Question:
> > > > =========
> > > > 1) Is the writing to the RTEMS queue still protected, so that the
> > > > level 6 ISR may interrupt the level 5 ISR while the level 5 was
> > > > performing a write to the RTEMS Queue? i.e. is the queue's integrity
> > > > still protected regardless of not using the RTEMS directive
> > > > "rtems_interrupt_catch"?
> > >
> > > The rationale behind this statement is that it is POSSIBLE to have
> > > non-RTEMS ISRs that are the highest priority in the system.  But those
> > > ISRs can NOT use RTEMS services.
> > >
> > > > 2) Would the waiting task (providing it was the highest priority task
> > > > in the system) get "readied and start" on the next occurrence of the
> > > > RTEMS system tick? For example, if the tick was 3.125 ms and the ISR 5
> > > > wrote 3 messages in the queue since the last tick, would the next tick
> > > > handler recognise there are entries in the queue and therefore ready
> > > > the waiting task?
> > >
> > > I THINK it would probably work this way if no other RTEMS scheduling
> > > opportunities intervened.  But there is no guarantee that the send would
> > > be properly intermeshed with other operations.  But it was NOT designed
> > > to work this way.
> > >
> > > > Rationale:
> > > > ==========
> > > > The reason for the 2 questions is that, due to the high interrupt
> > > > frequency, we would like to try to optimize the amount of cpu time
> > > > used in "house keeping" by not always having to go through the RTEMS
> > > > scheduler on RTE. At the same time we want to be able to use rtems
> > > > queues, even at the cost of having a "reading" at only the system
> > > > tick frequency.
> > >
> > > I THINK you can circumvent the procedure by doing this:
> > >
> > > _Thread_Disable_dispatch()
> > >   ... send()
> > > _Thread_Unnest_dispatch()
> > >
> > > These say "don't schedule until I tell you to" and "I am willing to
> > > schedule but I am NOT calling the scheduler myself".
> > >
> > > WARNING: This is completely untested and beyond the normal operational
> > > patterns RTEMS was initially designed for.  I think that the above will
> > > work, but without some time on a whiteboard to really think through it
> > > and give you better rules, I am not doing more than saying what you
> > > are doing now is dangerously broken and this is better than that.
> > >
> > > > Any comments would be gratefully received.
> > >
> > > Not knowing your CPU offhand, I can say that you might be
> > > architecturally lucky that this is working. :)  On some architectures
> > > the distinction between task and ISR mode is so sharp that context
> > > switching out of an ISR without getting back to task space is all but
> > > instantaneously deadly.
> > >
> > > > John Bebbington.
> > >
> > > --
> > > Joel Sherrill, Ph.D.             Director of Research & Development
> > > joel at OARcorp.com                 On-Line Applications Research
> > > Ask me about RTEMS: a free RTOS  Huntsville AL 35805
> > >    Support Available             (256) 722-9985
> > >
> >
>
>



