<div dir="ltr">O(n) is the least that we can have!!<br></div><div class="gmail_extra"><br clear="all"><div><div class="gmail_signature"><div dir="ltr">Thanks,<div><br></div><div>Saurabh Gadia</div></div></div></div>
<br><div class="gmail_quote">On Thu, Jun 25, 2015 at 5:39 PM, Saurabh Gadia <span dir="ltr"><<a href="mailto:gadia@usc.edu" target="_blank">gadia@usc.edu</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr"><div>I mean not recursion. just looping till top of stack of held mutexes. <br></div>check this out: <a href="https://github.com/saurabhgadia4/lock-model/blob/master/rtems/Mutex.java" target="_blank">https://github.com/saurabhgadia4/lock-model/blob/master/rtems/Mutex.java</a> <br>-->updateRecPriority() line 143<br></div><div class="gmail_extra"><br clear="all"><div><div><div dir="ltr">Thanks,<div><br></div><div>Saurabh Gadia</div></div></div></div><div><div class="h5">
<br><div class="gmail_quote">On Thu, Jun 25, 2015 at 5:35 PM, Joel Sherrill <span dir="ltr"><<a href="mailto:joel.sherrill@oarcorp.com" target="_blank">joel.sherrill@oarcorp.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><span><br>
<br>

On June 25, 2015 7:33:24 PM CDT, Saurabh Gadia <gadia@usc.edu> wrote:
>@joel: It essentially means finding out the highest-priority thread (or
>ceiling) of any remaining mutex held, right?
>
>If you are talking about this in the context of the nested mutex problem --->
>
>We never go and search for the highest-priority thread waiting on a held
>mutex. Instead, the highest-priority thread recursively makes the changes
>up to the top of the holder's stack of held mutexes. We don't search.

True recursion? Or implemented with a loop?

OK. Still an O(n) operation. Let's discuss locking on devel@.

>
>Thanks,
>
>Saurabh Gadia
>
>On Thu, Jun 25, 2015 at 5:27 PM, Joel Sherrill
><joel.sherrill@oarcorp.com> wrote:
>
>On June 25, 2015 7:23:06 PM CDT, Cyrille Artho
><cyrille.artho@gmail.com> wrote:
>>Hi Gedare and all,
>>Good news:
>>In this morning's discussion, we fixed some remaining issues in the
>>model and now have results.
>>
>>The current model shows a case of priority inversion with a simple
>>priority-reset mechanism, and the absence of that problem with the
>>proposed strategy.
>>
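>>To make the two mechanisms concrete, a rough sketch (illustrative
>>names only, reusing the TCB/Mutex fields from the sketch further up
>>the thread plus an int basePriority field on TCB; not our actual
>>model code):
>>
>>// Simple priority reset: restore the base priority on every release.
>>// Wrong if the thread still holds other contended mutexes -- this is
>>// the inversion the model exhibits.
>>void releaseReset(TCB t) {
>>    t.held.pop();                    // released mutex leaves the stack
>>    t.priority = t.basePriority;
>>}
>>
>>// Proposed strategy: step down only to the priority recorded at the
>>// new top of the held-mutex stack (base priority if nothing is left).
>>void releaseRecompute(TCB t) {
>>    t.held.pop();
>>    Mutex top = t.held.peek();
>>    t.priority = (top == null) ? t.basePriority
>>                               : Math.min(t.basePriority, top.inheritedPrio);
>>}
>>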
>>There is only one flaw: we had to use a global lock in the model
>>(essentially making the entire "lock" and "unlock" operations atomic),
>>because so far we couldn't find a way to safely lock the operations at
>>a lower level. The model uses a list of locks held (by a thread) and a
>>list of threads waiting on a lock (in the lock data structure), so
>>these aspects are essentially cross-cutting and very hard to get right
>>without a global lock.
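>>
>>Roughly what the workaround looks like (illustrative, not the exact
>>model code): one model-wide monitor makes each whole operation atomic,
>>so the held-mutex list and the per-lock waiter lists are never seen
>>half-updated:
>>
>>static final Object GLOBAL_LOCK = new Object();
>>
>>void lock(TCB t, Mutex m) {
>>    synchronized (GLOBAL_LOCK) {
>>        // enqueue t as a waiter, run the priority updates, take
>>        // ownership when the mutex becomes free ...
>>    }
>>}
>>
>>void unlock(TCB t, Mutex m) {
>>    synchronized (GLOBAL_LOCK) {
>>        // hand ownership to the next waiter and step the priority
>>        // down ...
>>    }
>>}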
>>
>>The RTEMS code uses different macros inside these operations (_Seize,
>>_Surrender), which do not seem to use a global lock (but I may have
>>missed something). It is possible that a data race occurs with a more
>>complex lock-update strategy if we don't use a global lock.
>>
>>Saurabh will clean up the model and provide documentation with
>>details.
>>
>>In the meantime, can you maybe answer these questions for us:
>>
>>(1) Did you have any issues with race conditions in the current RTEMS
>>code (especially recent changes)?
>
>We haven't observed any, but that doesn't mean there aren't any
>undiscovered ones.
>
>>(2) Is a fix using a global lock acceptable? Correctness-wise it's the
>>safest thing, but of course performance-wise it's a bottleneck.
>
>The locking strategy needs to be discussed. On a uniprocessor system,
>these issues tend to be minor and cause delays. On SMP systems, they
>become more complex and turn into performance problems.
>
>It essentially means finding out the highest-priority thread (or
>ceiling) of any remaining mutex held, right?
>
>Bring this up on devel@
>
>--joel

--joel