Results

Saurabh Gadia gadia at usc.edu
Fri Jun 26 01:39:07 UTC 2015


You can configure the test environment according to your needs, but 3 mutexes
are sufficient to cover all the test cases, and we can run multiple threads
using JPF's verify().
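
For reference, this is roughly how the model can drive multiple threads and
mutex choices through JPF (a minimal sketch; the driver class and the worker
setup are illustrative, only gov.nasa.jpf.vm.Verify is the real JPF API):

    import gov.nasa.jpf.vm.Verify;

    public class LockModelDriver {
        public static void main(String[] args) {
            final int NUM_MUTEXES = 3;
            // Under JPF, getInt() makes the model checker explore every
            // value in the range, i.e. every nesting order of mutexes.
            int first  = Verify.getInt(0, NUM_MUTEXES - 1);
            int second = Verify.getInt(0, NUM_MUTEXES - 1);
            // ... start worker threads that lock mutex[first] and then
            // nest mutex[second] inside it ...
        }
    }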

Thanks,

Saurabh Gadia

On Thu, Jun 25, 2015 at 6:37 PM, Saurabh Gadia <gadia at usc.edu> wrote:

> Hi all,
> For the time being I have cleaned up the code. I have also attached the
> trace files in which our JPF model reproduces the RTEMS unbounded priority
> inheritance problem (TraceWupi). After running it through the solution
> model, we are able to demonstrate that there are no assertion errors and it
> runs to completion.
>
> *Model Description:*
>
> #. Number of threads under test: 2
> #. Number of mutexes under test: 2
> #. JPF GitHub link: https://github.com/saurabhgadia4/lock-model
> #. Wiki link: https://devel.rtems.org/wiki/GSoC/2015/NestedMutex
>
> #. Sanity test function for detecting the unbounded priority inheritance
> problem:
>
> https://github.com/saurabhgadia4/lock-model/blob/master/rtems/Mutex.java#L119
> /*
> Validator() checks that, after stepping down the priority on an unlock()
> operation, there is no higher-priority thread contending on any of the
> mutexes still held by the holder.
> */
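>
> In essence the check is the following (a simplified sketch, not the exact
> code from Mutex.java; the class and field names here are assumptions, and
> a lower number means a higher priority):
>
>     import java.util.ArrayList;
>     import java.util.List;
>
>     class ValidatorSketch {
>         static class TThread {
>             int currentPriority;
>             List<Mutex> heldMutexes = new ArrayList<>();
>         }
>         static class Mutex {
>             List<TThread> waitQueue = new ArrayList<>();
>         }
>
>         // After the holder steps its priority down on unlock(), no thread
>         // waiting on any mutex still held by the holder may have a higher
>         // priority (a numerically smaller value) than the holder's.
>         static void validate(TThread holder) {
>             for (Mutex m : holder.heldMutexes)
>                 for (TThread waiter : m.waitQueue)
>                     assert waiter.currentPriority >= holder.currentPriority;
>         }
>     }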
>
> Trace file demonstrating unbounded priority inheritance:
>
> https://github.com/saurabhgadia4/lock-model/blob/master/TraceWupi#L83
>
> Test Scenario:
> 1. Thread 1, which has *priority=3*, acquires mutexes l1 and l2.
> 2. Thread 2 with priority=1 tries to acquire mutex l1, so with the current
> RTEMS behavior we just raise the priority of thread 1.
> 3. Thread 1's currentPriority thus becomes 1, and thread 2 sits in the wait
> queue of mutex l1, whose holder is thread 1.
> 4. Now thread 1 happens to release mutex l2, and its priority is stepped
> down from 1 (the value raised by thread 2) back to 3, even though thread 2
> is still waiting on l1.
> !!!Bingo: this is the unbounded priority inheritance problem, which our
> validator function successfully catches by raising an assertion error!!!
> See lines 83-85 in TraceWupi:
>
>
>
>
> Holder Thread: 1, after stepdown operation --> current priority: 3 while
> releasing mutex: 2
> ---> Mutex: 1 (the holder still has this mutex on its stack)
> ------> Thread-id: 2, priority: 1 (thread 2 is contending on mutex 1 at a
> higher priority than the holder, thread 1)
>
> *#. The sanity test passes for our solution for handling unbounded priority
> inheritance (UPI).*
>    Trace file:
> https://github.com/saurabhgadia4/lock-model/blob/master/TraceW_O_upi
>
>
> *Highlights of this JPF model test:*
>
> Potential problems present in RTEMS:
>    * Problem 1:
>       Test case:
>            Thread 3 with priority=2 acquires mutex l2. Thread 1 with
> priority=3 is waiting in the wait queue of mutex l2 and simultaneously
> holds mutex l1, on which thread 2 is contending. Thread 2 raises the
> priority of thread 1 from 3 to 1 and waits in the l1 wait queue. Thread 1
> was already waiting in the queue for mutex l2, which is held by thread 3,
> whose priority is 2. So again we get an unbounded priority inheritance
> problem: thread 1, with raised priority=1, is waiting while thread 3 with
> priority=2 keeps running.
>  >>>> Correct behavior: after raising thread 1's priority, we re-enqueue
> thread 1, if it is waiting, with the updated priority, and we should
> immediately update the priority of thread 3, on which thread 1 waits <<<<
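>
> A sketch of that correct behavior (illustrative only; it reuses the TThread
> and Mutex types from the sketch above, extended with an assumed Mutex
> waitingFor field on the thread and a TThread holder field on the mutex, and
> it assumes waitQueue is kept priority-ordered on add):
>
>     // Raise a thread's priority and propagate the change along the whole
>     // chain of holders, re-enqueueing any waiting thread so its queue
>     // position matches the new priority.
>     static void raisePriority(TThread t, int newPriority) {
>         while (t != null && newPriority < t.currentPriority) {
>             t.currentPriority = newPriority;
>             Mutex blockedOn = t.waitingFor;    // null if t is not waiting
>             if (blockedOn == null) break;
>             blockedOn.waitQueue.remove(t);     // reEnqueue so the ordered
>             blockedOn.waitQueue.add(t);        // queue re-sorts t
>             t = blockedOn.holder;              // continue with next holder
>         }
>     }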
>
>    * Problem 2:
>       When a mutex is held by one thread and another thread tries to
> acquire the same mutex, the updatePriority() in lock() and
> waitqueue.Remove() can run simultaneously. We solved this by taking a
> global lock around the mutex operations. (Maybe Cyrille can elaborate on
> this.)
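>
> In the model that fix is conceptually a single shared monitor around the
> critical sections (a sketch, assuming plain Java synchronized blocks; the
> method bodies are elided):
>
>     class GlobalLockSketch {
>         // One lock shared by all mutex operations, so updatePriority()
>         // inside lock() can never interleave with waitQueue.remove().
>         static final Object GLOBAL = new Object();
>
>         void lock(Mutex m, TThread t) {
>             synchronized (GLOBAL) {
>                 // enqueue t, run updatePriority(), take the mutex ...
>             }
>         }
>
>         void unlock(Mutex m, TThread t) {
>             synchronized (GLOBAL) {
>                 // wake the next waiter, step the priority down ...
>             }
>         }
>     }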
>
>
>
> Thanks,
>
> Saurabh Gadia
>
> On Thu, Jun 25, 2015 at 5:43 PM, Saurabh Gadia <gadia at usc.edu> wrote:
>
>> O(n) is the best we can achieve!!
>>
>> Thanks,
>>
>> Saurabh Gadia
>>
>> On Thu, Jun 25, 2015 at 5:39 PM, Saurabh Gadia <gadia at usc.edu> wrote:
>>
>>> I meant not true recursion, just a loop up to the top of the stack of
>>> held mutexes. Check this out:
>>> https://github.com/saurabhgadia4/lock-model/blob/master/rtems/Mutex.java
>>> --> updateRecPriority(), line 143
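>>>
>>> Roughly, the loop is (a sketch only, not the exact code at line 143; the
>>> per-mutex recordedPriority field and the other names are assumptions):
>>>
>>>     // Walk the holder's stack of held mutexes, bumping the priority
>>>     // recorded on each one; an O(n) loop, no true recursion.
>>>     static void updateRecPriority(TThread holder, int newPriority) {
>>>         for (Mutex m : holder.heldMutexes) {
>>>             if (newPriority < m.recordedPriority)
>>>                 m.recordedPriority = newPriority;
>>>         }
>>>         if (newPriority < holder.currentPriority)
>>>             holder.currentPriority = newPriority;
>>>     }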
>>>
>>> Thanks,
>>>
>>> Saurabh Gadia
>>>
>>> On Thu, Jun 25, 2015 at 5:35 PM, Joel Sherrill <
>>> joel.sherrill at oarcorp.com> wrote:
>>>
>>>>
>>>>
>>>> On June 25, 2015 7:33:24 PM CDT, Saurabh Gadia <gadia at usc.edu> wrote:
>>>> >@joel: It essentially means finding out the highest priority thread (or
>>>> >ceiling) of any remaining mutex held, right?
>>>> >
>>>> >If you are talking about this in the context of the nested mutex
>>>> >problem --->
>>>> >
>>>> >We never go and find the highest-priority thread waiting on a held
>>>> >mutex. Instead, the highest-priority thread recursively makes changes
>>>> >to the top of the stack of mutexes held by the holder. We don't search.
>>>>
>>>> True recursion? Or implemented with a loop?
>>>>
>>>> Ok. Still an O(n) operation. Discuss locking on devel@
>>>>
>>>> >
>>>> >Thanks,
>>>> >
>>>> >
>>>> >Saurabh Gadia
>>>> >
>>>> >
>>>> >On Thu, Jun 25, 2015 at 5:27 PM, Joel Sherrill
>>>> ><joel.sherrill at oarcorp.com> wrote:
>>>> >
>>>> >
>>>> >
>>>> >On June 25, 2015 7:23:06 PM CDT, Cyrille Artho
>>>> ><cyrille.artho at gmail.com> wrote:
>>>> >>Hi Gedare and all,
>>>> >>Good news:
>>>> >>In this morning's discussion, we fixed some remaining issues in the
>>>> >>model and now have results.
>>>> >>
>>>> >>The current model shows a case of priority inversion with a simple
>>>> >>priority reset mechanism and the absence of that problem with the
>>>> >>proposed strategy.
>>>> >>
>>>> >>There is only one flaw: we had to use a global lock in the model
>>>> >>(essentially making the entire "lock" and "unlock" operations atomic),
>>>> >>because so far we couldn't find a way to safely lock the operations at
>>>> >>a lower level. The model uses a list of locks held (by a thread) and a
>>>> >>list of threads waiting on a lock (in the lock data structure), so
>>>> >>these aspects are essentially cross-cutting and very hard to get right
>>>> >>without a global lock.
>>>> >>
>>>> >>The RTEMS code uses different macros inside these operations (_Seize,
>>>> >>_Surrender), which do not seem to use a global lock (but I may have
>>>> >>missed something). It is possible that a data race occurs for a
>>>> >>complex lock update strategy, if we don't use a global lock.
>>>> >>
>>>> >>Saurabh will clean up the model and provide documentation with
>>>> >details.
>>>> >>
>>>> >>In the meantime, can you maybe answer these questions for us:
>>>> >>
>>>> >>(1) Did you have any issues with race conditions in the current RTEMS
>>>> >>code (especially recent changes)?
>>>> >
>>>> >We haven't observed any, but that doesn't mean there aren't any
>>>> >undiscovered ones.
>>>> >
>>>> >>(2) Is a fix using a global lock acceptable? Correctness-wise it's the
>>>> >>safest thing, but of course performance-wise it's a bottleneck.
>>>> >
>>>> >The locking strategy needs to be discussed. On a uniprocessor system,
>>>> >these issues tend to be minor and only cause delays. On SMP systems,
>>>> >they become more complex and turn into performance problems.
>>>> >
>>>> >It essentially means finding out the highest priority thread (or
>>>> >ceiling) of any remaining mutex held, right?
>>>> >
>>>> >Bring this up on devel@
>>>> >
>>>> >
>>>> >--joel
>>>>
>>>> --joel
>>>>
>>>
>>>
>>
>