<div dir="ltr">You can configure the test environment according to your needs but 3 mutex are sufficient to cover all test cases and we can have multiple threads using verify() of JPF.<br></div><div class="gmail_extra"><br clear="all"><div><div class="gmail_signature"><div dir="ltr">Thanks,<div><br></div><div>Saurabh Gadia</div></div></div></div>
<br><div class="gmail_quote">On Thu, Jun 25, 2015 at 6:37 PM, Saurabh Gadia <span dir="ltr"><<a href="mailto:gadia@usc.edu" target="_blank">gadia@usc.edu</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr"><div><div><div><div>Hi all,<br></div>For time being I have cleaned up the code. I have also attached the trace files where our JPF-Model can reproduce the RTEMS unbounded priority inheritance problem(TraceWupi). And after running it through solution model we are able to demonstrate that there are no assertion errors and it runs to completion.<br><br></div><b>Model Description: </b><br><br></div>#. Number of threads under test: 2<br>#. Number of mutex under test: 2<br></div><div>#. JPF-Github Link:<a href="https://github.com/saurabhgadia4/lock-model" target="_blank">https://github.com/saurabhgadia4/lock-model</a><br></div><div>#. Wiki-Link: <a href="https://devel.rtems.org/wiki/GSoC/2015/NestedMutex" target="_blank">https://devel.rtems.org/wiki/GSoC/2015/NestedMutex</a><br></div><br>#. sanity test function for detecting unbounded priority inheritance problem:<br><a href="https://github.com/saurabhgadia4/lock-model/blob/master/rtems/Mutex.java#L119" target="_blank">https://github.com/saurabhgadia4/lock-model/blob/master/rtems/Mutex.java#L119</a> <br>/*<br>Validator() checks that after stepping down the priority, on unlock() operation,
Trace file demonstrating unbounded priority inheritance:

https://github.com/saurabhgadia4/lock-model/blob/master/TraceWupi#L83

Test scenario:

1. Thread 1, which has priority=3, acquires mutexes l1 and l2.
2. Thread 2, with priority=1, tries to acquire mutex l1. With current RTEMS we just raise the priority of thread 1.
3. Thread 1's currentPriority becomes 1, and thread 2 sits in the wait queue of mutex l1, whose holder is thread 1.
4. Now thread 1 happens to release mutex l2, and its priority is stepped down from 1 (the value raised by thread 2) back to 3.

This is the unbounded priority inheritance problem, which our validator function successfully catches by raising an assertion error. See lines 83, 84, 85 in TraceWupi:

Holder Thread: 1, after stepdown operation -->current priority: 3 while releasing mutex: 2
--->Mutex: 1 (still has this mutex on its stack)
------>Thread-id: 2 priority: 1 (thread 2 is contending on mutex 1 at a higher priority than the holder, thread 1)

#. The sanity test passes for our solution to unbounded priority inheritance (UPI).
Trace file: https://github.com/saurabhgadia4/lock-model/blob/master/TraceW_O_upi

Highlights of this JPF-model test:

Potential problems currently in RTEMS:

*Problem 1:
Test case: Thread 3 with priority=2 acquires mutex l2. Thread 1 with priority=3 is waiting in the wait queue of mutex l2 and simultaneously holds mutex l1, on which thread 2 is contending. Thread 2 raises the priority of thread 1 from 3 to 1 and waits in the l1 wait queue. Thread 1 was already waiting in the queue for mutex l2, which is held by thread 3 at priority 2. So again we get an unbounded priority inheritance problem: thread 1 waits at the raised priority 1 while thread 3 runs at the lower priority 2.
>>>> Correct behavior: after raising thread 1's priority, we re-enqueue thread 1 (if it is waiting) with the updated priority, and we immediately update the priority of the thread that thread 1 waits on (see the first sketch below). <<<<

*Problem 2:
When a mutex is held by one thread and another thread tries to acquire the same mutex, the updatePriority() in lock() and the waitqueue.Remove() in unlock() can happen simultaneously. We solved this by taking a global lock around the mutex operations (see the second sketch below; maybe Cyrille can elaborate on this).
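A sketch of the problem-1 fix (illustrative names; the wait queue is assumed to be priority-ordered, which is why the re-enqueue matters):

import java.util.PriorityQueue;

// Illustrative sketch of the problem-1 fix: when a waiting thread's
// priority is raised, re-enqueue it so the priority-ordered wait queue
// reflects the new priority, then propagate to the holder it waits on.
class ReEnqueueSketch {
    static class ModelThread {
        int currentPriority;          // lower value = higher priority
        ModelMutex waitingOn;         // mutex this thread is blocked on, or null
    }

    static class ModelMutex {
        ModelThread holder;
        final PriorityQueue<ModelThread> waitQueue = new PriorityQueue<>(
            (a, b) -> Integer.compare(a.currentPriority, b.currentPriority));
    }

    static void raisePriority(ModelThread t, int newPriority) {
        while (t != null && newPriority < t.currentPriority) {
            ModelMutex m = t.waitingOn;
            if (m != null) {
                m.waitQueue.remove(t);     // re-enqueue with the updated priority
                t.currentPriority = newPriority;
                m.waitQueue.add(t);
                t = m.holder;              // immediately update the holder too
            } else {
                t.currentPriority = newPriority;
                t = null;                  // running thread: nothing to propagate
            }
        }
    }
}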
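And a sketch of the problem-2 workaround (again illustrative; it shows only the coarse serialization, not the full model): the whole lock()/unlock() paths run under one lock, so updatePriority() in lock() can never interleave with the wait-queue removal in unlock().

// Illustrative skeleton, not the model's actual code.
class GlobalLockSketch {
    private static final Object GLOBAL = new Object();

    void lock(Object mutex, Object thread) {
        synchronized (GLOBAL) {
            // enqueue the thread, run updatePriority() on the holder chain...
        }
    }

    void unlock(Object mutex, Object thread) {
        synchronized (GLOBAL) {
            // pop the highest-priority waiter, step the holder's priority down...
        }
    }
}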
<br><div class="gmail_quote">On Thu, Jun 25, 2015 at 5:43 PM, Saurabh Gadia <span dir="ltr"><<a href="mailto:gadia@usc.edu" target="_blank">gadia@usc.edu</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr">O(n) is the least that we can have!!<br></div><div class="gmail_extra"><br clear="all"><div><div><div dir="ltr">Thanks,<div><br></div><div>Saurabh Gadia</div></div></div></div><div><div>
<br><div class="gmail_quote">On Thu, Jun 25, 2015 at 5:39 PM, Saurabh Gadia <span dir="ltr"><<a href="mailto:gadia@usc.edu" target="_blank">gadia@usc.edu</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div>I mean not recursion. just looping till top of stack of held mutexes. <br></div>check this out: <a href="https://github.com/saurabhgadia4/lock-model/blob/master/rtems/Mutex.java" target="_blank">https://github.com/saurabhgadia4/lock-model/blob/master/rtems/Mutex.java</a> <br>-->updateRecPriority() line 143<br></div><div class="gmail_extra"><br clear="all"><div><div><div dir="ltr">Thanks,<div><br></div><div>Saurabh Gadia</div></div></div></div><div><div>
<br><div class="gmail_quote">On Thu, Jun 25, 2015 at 5:35 PM, Joel Sherrill <span dir="ltr"><<a href="mailto:joel.sherrill@oarcorp.com" target="_blank">joel.sherrill@oarcorp.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><span><br>
<br>
On June 25, 2015 7:33:24 PM CDT, Saurabh Gadia <<a href="mailto:gadia@usc.edu" target="_blank">gadia@usc.edu</a>> wrote:<br>
>@joel: It essentially means finding out the highest-priority thread (or
>ceiling) of any remaining mutex held, right?
>
>If you are talking about this in the context of the nested mutex problem --->
>
>We never go and find the highest-priority thread waiting on a held mutex.
>Instead, the highest-priority thread recursively makes changes to the top
>of the stack of mutexes held by the holder. We don't search.
True recursion? Or implemented with a loop?

Ok. Still an O(n) operation. Discuss locking on devel@
>Thanks,
>
>Saurabh Gadia
>
>On Thu, Jun 25, 2015 at 5:27 PM, Joel Sherrill
><joel.sherrill@oarcorp.com> wrote:
>
>On June 25, 2015 7:23:06 PM CDT, Cyrille Artho
><cyrille.artho@gmail.com> wrote:
>>Hi Gedare and all,
>>Good news:
>>In this morning's discussion, we fixed some remaining issues in the
>>model and now have results.
>>
>>The current model shows a case of priority inversion with a simple
>>priority-reset mechanism, and the absence of that problem with the
>>proposed strategy.
>>
>>There is only one flaw: we had to use a global lock in the model
>>(essentially making the entire "lock" and "unlock" operations atomic),
>>because so far we couldn't find a way to safely lock the operations at
>>a lower level. The model uses a list of locks held (by a thread) and a
>>list of threads waiting on a lock (in the lock data structure), so
>>these aspects are essentially cross-cutting and very hard to get right
>>without a global lock.
>>
>>The RTEMS code uses different macros inside these operations (_Seize,
>>_Surrender), which do not seem to use a global lock (but I may have
>>missed something). It is possible that a data race occurs for a
>>complex lock-update strategy if we don't use a global lock.
>>
>>Saurabh will clean up the model and provide documentation with
>>details.
>>
>>In the meantime, can you maybe answer these questions for us:
>>
>>(1) Did you have any issues with race conditions in the current RTEMS
>>code (especially recent changes)?
>
>We haven't observed any, but that doesn't mean there aren't any
>undiscovered ones.
>
>>(2) Is a fix using a global lock acceptable? Correctness-wise it's the
>>safest thing, but of course performance-wise it's a bottleneck.
>
>The locking strategy needs to be discussed. On a uniprocessor system,
>these issues tend to be minor and cause delays. On SMP systems, they
>become more complex and cause performance problems.
>
>It essentially means finding out the highest-priority thread (or
>ceiling) of any remaining mutex held, right?
>
>Bring this up on devel@
>
>
>--joel
--joel