Results

Saurabh Gadia gadia at usc.edu
Fri Jun 26 00:42:30 UTC 2015


OK. I will have the complete trace files on the drive. I will upload
the trace and then explain why we got the error and how we solved the
problem of unbounded priorities.

Thanks,

Saurabh Gadia

On Thu, Jun 25, 2015 at 5:40 PM, Cyrille Artho <cyrille.artho at gmail.com>
wrote:

> Yes, let's discuss this in more depth on the devel list.
>
> Saurabh can clean up the code a bit first and prepare a summary of the
> error trace. The detailed trace can be posted on Google docs, and
> Saurabh can post a message describing the issue and a link to the doc
> on the mailing list.
>
> On Fri, Jun 26, 2015 at 9:35 AM, Joel Sherrill
> <joel.sherrill at oarcorp.com> wrote:
> >
> >
> > On June 25, 2015 7:33:24 PM CDT, Saurabh Gadia <gadia at usc.edu> wrote:
> >>@joel: It essentially means finding out the highest priority thread (or
> >>ceiling) of any remaining mutex held, right?
> >>
> >>If you are asking this in the context of the nested mutex problem:
> >>
> >>We never go and find the highest priority thread waiting on a held
> >>mutex. Instead, the highest priority thread recursively makes the
> >>changes to the top of the holder's stack of held mutexes. We don't
> >>search.
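> >>
> >>A minimal sketch of that walk, with hypothetical structure and field
> >>names rather than the actual RTEMS ones (and shown with a loop, though
> >>the model may differ), could look like this:
> >>
> >>struct mutex;
> >>
> >>struct thread {
> >>  int           priority;   /* lower value = higher priority         */
> >>  struct mutex *held_top;   /* top of this thread's held-mutex stack */
> >>};
> >>
> >>struct mutex {
> >>  struct thread *holder;
> >>  struct mutex  *below;     /* mutex acquired just before this one   */
> >>  int            inherited; /* highest waiter priority recorded      */
> >>};
> >>
> >>/* A waiter with priority prio blocks on m: record that priority on
> >> * every mutex from the top of the holder's stack down to m, then
> >> * promote the holder itself. */
> >>static void promote_holder(struct mutex *m, int prio)
> >>{
> >>  struct thread *holder = m->holder;
> >>  struct mutex  *it;
> >>
> >>  for (it = holder->held_top; it != NULL; it = it->below) {
> >>    if (prio < it->inherited)
> >>      it->inherited = prio;
> >>    if (it == m)
> >>      break;
> >>  }
> >>
> >>  if (prio < holder->priority)
> >>    holder->priority = prio;
> >>}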
> >
> > True recursion? Or implemented with a loop?
> >
> > Ok. Still an O(n) operation. Discuss locking on devel@
> >
> >>
> >>Thanks,
> >>
> >>
> >>Saurabh Gadia
> >>
> >>
> >>On Thu, Jun 25, 2015 at 5:27 PM, Joel Sherrill
> >><joel.sherrill at oarcorp.com> wrote:
> >>
> >>
> >>
> >>On June 25, 2015 7:23:06 PM CDT, Cyrille Artho
> >><cyrille.artho at gmail.com> wrote:
> >>>Hi Gedare and all,
> >>>Good news:
> >>>In this morning's discussion, we fixed some remaining issues in the
> >>>model and now got results.
> >>>
> >>>The current model shows a case of priority inversion with a simple
> >>>priority reset mechanism and the absence of that problem with the
> >>>proposed strategy.
> >>>
> >>>There is only one flaw: we had to use a global lock in the model
> >>>(essentially making the entire "lock" and "unlock" operations atomic),
> >>>because so far we couldn't find a way to safely lock the operations at
> >>>a lower level. The model uses a list of locks held (by a thread) and a
> >>>list of threads waiting on a lock (in the lock data structure), so
> >>>these aspects are essentially cross-cutting and very hard to get right
> >>>without a global lock.
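> >>>
> >>>As an illustration of what the model assumes (not the RTEMS API), the
> >>>global lock simply serialises every lock and unlock operation around
> >>>the two cross-cutting lists; a minimal C sketch with a hypothetical
> >>>"giant" mutex:
> >>>
> >>>#include <pthread.h>
> >>>
> >>>struct thread;
> >>>struct mutex;
> >>>
> >>>static pthread_mutex_t giant = PTHREAD_MUTEX_INITIALIZER;
> >>>
> >>>void model_lock(struct thread *self, struct mutex *m)
> >>>{
> >>>  pthread_mutex_lock(&giant);
> >>>  /* update m's waiter list, self's held-lock list, and propagate
> >>>   * priorities as one atomic step */
> >>>  pthread_mutex_unlock(&giant);
> >>>}
> >>>
> >>>void model_unlock(struct thread *self, struct mutex *m)
> >>>{
> >>>  pthread_mutex_lock(&giant);
> >>>  /* remove m from self's held-lock list, recompute self's priority,
> >>>   * and wake the highest priority waiter as one atomic step */
> >>>  pthread_mutex_unlock(&giant);
> >>>}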
> >>>
> >>>The RTEMS code uses different macros inside these operations (_Seize,
> >>>_Surrender), which do not seem to use a global lock (but I may have
> >>>missed something). It is possible that a data race occurs with a
> >>>complex lock update strategy if we don't use a global lock.
> >>>
> >>>Saurabh will clean up the model and provide documentation with
> >>>details.
> >>>
> >>>In the meantime, can you maybe answer these questions for us:
> >>>
> >>>(1) Did you have any issues with race conditions in the current RTEMS
> >>>code (especially recent changes)?
> >>
> >>We haven't observed any, but that doesn't mean there aren't any
> >>undiscovered ones.
> >>
> >>>(2) Is a fix using a global lock acceptable? Correctness-wise it's the
> >>>safest thing, but of course performance-wise it's a bottleneck.
> >>
> >>The locking strategy needs to be discussed. On a uniprocessor system,
> >>these issues tend to be minor and only cause delays. On SMP systems,
> >>they become more complex and turn into performance problems.
> >>
> >>It essentially means finding out the highest priority thread (or
> >>ceiling) of any remaining mutex held, right?
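> >>
> >>In sketch form, with hypothetical structures (not the actual RTEMS
> >>ones), the step-down on release would be something like:
> >>
> >>struct mutex {
> >>  struct mutex *below;      /* next mutex down the held stack   */
> >>  int           inherited;  /* highest waiter priority recorded */
> >>};
> >>
> >>struct thread {
> >>  int           base_priority;
> >>  struct mutex *held_top;   /* top of the held-mutex stack      */
> >>};
> >>
> >>/* New priority after releasing a mutex: the base priority, bounded
> >> * by the highest priority still recorded on any remaining held
> >> * mutex (lower value = higher priority). */
> >>static int stepped_down_priority(const struct thread *t)
> >>{
> >>  int prio = t->base_priority;
> >>  const struct mutex *m;
> >>
> >>  for (m = t->held_top; m != NULL; m = m->below)
> >>    if (m->inherited < prio)
> >>      prio = m->inherited;
> >>
> >>  return prio;
> >>}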
> >>
> >>Bring this up on devel@
> >>
> >>
> >>--joel
> >
> > --joel
>
>
>
> --
> Regards,
> Cyrille Artho
>