Patch for the nested mutex project solving the priority inversion problem on uniprocessors.

Sebastian Huber sebastian.huber at embedded-brains.de
Mon Sep 14 06:08:48 UTC 2015


On 14/09/15 02:24, Cyrille Artho wrote:
> Hi Sebastian,
> The example you linked to uses file locking meshed with mutexes. If
> you consider them to be just locks, then you are right that their
> accesses (acquire/release) are interleaved in an unordered way.

The locks are mutexes in Newlib.

>
> For unordered lock releases, we would have to decide about the
> semantics of stepping down the priority. If we have "acquire(a),
> acquire(b), release(a), release(b)", then the priority is only stepped
> down on the final release. If the releases are ordered, we may want to
> step down the priority when b is release first, and then again when a
> is released. Or we could choose not to step down the priority at all
> until the final lock release.

I would use the dependency tree to determine the highest priority a 
thread can use. So, if you release(a) first, then the thread loses all 
the priorities inherited via a.
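To make this concrete, here is a minimal sketch of that idea (all names are illustrative, not the actual RTEMS structures): each owned mutex records the best priority it has inherited from its waiters, and a thread's effective priority is recomputed from the mutexes it still owns. Releasing a mutex drops everything inherited via it, regardless of release order.

```c
#include <assert.h>

/* Hypothetical sketch, not the real RTEMS API. Lower numbers mean
 * higher priority, as in RTEMS. */

#define MAX_OWNED 8

typedef struct {
  int inherited_priority; /* best priority inherited via this mutex */
} mutex_sketch;

typedef struct {
  int real_priority;
  mutex_sketch *owned[MAX_OWNED];
  int owned_count;
} thread_sketch;

static int effective_priority(const thread_sketch *t)
{
  int best = t->real_priority;
  for (int i = 0; i < t->owned_count; ++i) {
    if (t->owned[i]->inherited_priority < best)
      best = t->owned[i]->inherited_priority;
  }
  return best;
}

static void release_mutex(thread_sketch *t, mutex_sketch *m)
{
  /* Drop m from the owned set; the priorities inherited via m are
   * lost with it, no matter in which order the mutexes are released. */
  for (int i = 0; i < t->owned_count; ++i) {
    if (t->owned[i] == m) {
      t->owned[i] = t->owned[--t->owned_count];
      return;
    }
  }
}
```

With this model, "release(a) first" simply removes a's contribution; the priorities inherited via b remain in effect until b is released.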

See also (only the picture; this handler should go away, since it has 
severe performance problems):

https://docs.rtems.org/doxygen/cpukit/html/group__ScoreResource.html#details

>
> The latter is easier to implement but may result in unexpected(?)
> behavior when locks are nested.
>
> As for the scope of this project, larger design changes (and their
> verification) will take a lot of time. To conclude this project, how
> about changing the recursion to iteration so we have an implementation
> that works for uniprocessor systems with nested locking/unlocking?

I would prefer to implement something with a potentially longer lifetime 
in the sources. The "Possible Implementation" described in 
https://devel.rtems.org/ticket/2412 may be good enough to implement OMIP 
later. The main difference from your approach is that the data structures 
are entirely contained in the thread control blocks. This is better in 
terms of overall memory demand and hides implementation details from 
the mutex structure. Due to the per-thread priority queue 
Thread_Priority_node::Inherited_priorities, the ordering of mutex 
operations (acquire, release, timeout) doesn't matter.
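A rough sketch of that per-thread queue (names are illustrative placeholders for the ticket's Thread_Priority_node idea, not the real declarations): the thread control block owns a sorted list of priority nodes, and because each node can be extracted by identity, acquire, release, and timeout may interleave in any order.

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical sketch of the ticket #2412 data layout: the queue lives
 * entirely in the thread control block, so the mutex object needs no
 * extra storage. Lower numbers mean higher priority. */

typedef struct priority_node {
  int priority;
  struct priority_node *next;
} priority_node;

typedef struct {
  int real_priority;
  priority_node *inherited; /* sorted, best (lowest number) first */
} tcb_sketch;

static void insert_node(tcb_sketch *tcb, priority_node *node)
{
  /* Keep the list sorted so the head is always the best priority. */
  priority_node **link = &tcb->inherited;
  while (*link != NULL && (*link)->priority <= node->priority)
    link = &(*link)->next;
  node->next = *link;
  *link = node;
}

static void extract_node(tcb_sketch *tcb, priority_node *node)
{
  /* Removal by node identity: works for release AND timeout, in any
   * order, which is exactly why the operation ordering doesn't matter. */
  priority_node **link = &tcb->inherited;
  while (*link != NULL) {
    if (*link == node) { *link = node->next; return; }
    link = &(*link)->next;
  }
}

static int current_priority(const tcb_sketch *tcb)
{
  if (tcb->inherited != NULL && tcb->inherited->priority < tcb->real_priority)
    return tcb->inherited->priority;
  return tcb->real_priority;
}
```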

With respect to verification, you can verify only a subset, e.g. 
consider only ordered acquire/release sequences if this helps.
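The recursion-to-iteration change discussed in this thread could look roughly like the following (a hedged sketch; _Thread_Update_Priority_UP() and all other names here stand in for the patch's code, which I have not reproduced): walk the "thread blocks on mutex, mutex has a holder" chain with a loop, so the kernel stack depth stays constant no matter how deep the dependency chain is.

```c
#include <assert.h>
#include <stddef.h>

/* Illustrative types only; lower numbers mean higher priority. */

typedef struct thread_iter thread_iter;
typedef struct mutex_iter mutex_iter;

struct mutex_iter {
  thread_iter *holder;
};

struct thread_iter {
  int current_priority;
  mutex_iter *blocked_on; /* NULL if the thread is not blocked */
};

static void propagate_priority(thread_iter *waiter)
{
  int priority = waiter->current_priority;
  mutex_iter *m = waiter->blocked_on;

  /* Iteration instead of direct recursion: bounded stack usage,
   * so a long chain cannot corrupt the thread stack. */
  while (m != NULL) {
    thread_iter *holder = m->holder;
    if (holder == NULL || holder->current_priority <= priority)
      break; /* holder already runs at least this urgently */
    holder->current_priority = priority;
    m = holder->blocked_on; /* continue up the dependency chain */
  }
}
```

Note this sketch deliberately ignores locking; on SMP the holder would have to be re-validated under the appropriate lock at each step.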

>
> On Fri, Sep 11, 2015 at 4:18 PM, Sebastian Huber
> <sebastian.huber at embedded-brains.de> wrote:
>> Hello Saurabh,
>>
>> On 11/09/15 02:14, Saurabh Gadia wrote:
>>> Hi Sebastian,
>>>
>>> Sorry for the late reply. I was out of town and could not reply. I am a bit
>>> confused by the above description, which I need to get clarified:
>>>
>>> 1. Replaced LIFO with sorted list: I don't change the mutex order to make
>>> the list sorted. Instead, I promote the priorities so that we don't need to
>>> change the position of a mutex in the chain; the list becomes sorted and
>>> valid without any swapping of mutexes in the chain.
>>>
>>>
>>> 2. I was confused by the spsem03 test case. So I created the spsem04 test
>>> case, which demonstrates that our implementation supports horizontal
>>> nesting. It has the same scenario as test case spsem03, but now we can say
>>> that horizontal nesting works correctly. You can find a full description of
>>> this test case at the link below.
>>>
>>>
>>> https://docs.google.com/document/d/1RM427RKSoyrZrchEcDIx4etmtWqrFArzRRLWqcU8RK0/edit?usp=sharing
>>
>> Sorry, I should have looked more carefully at your code. I missed the direct
>> recursion in _Thread_Update_Priority_UP() during my first review. A
>> limitless direct or indirect recursion at this level is absolutely
>> unacceptable, since it may corrupt the thread stack. This is not a real
>> problem, since it would be easy to change this into an iteration.
>>
>> So the basic structure of your solution is similar to what I suggested in
>> the ticket.
>>
>>> 3. The example you mentioned is definitely supported by our
>>> implementation.
>>>
>>> I also need clarification on ticket 2412:
>>>
>>> 1. Is there something wrong with problem description?
>>>      it says - "The allocator mutex is already owned, so the priority of
>>> the low priority thread is raised to the priority of the high priority
>>> thread. The memory allocation completes and the allocator mutex is released,
>>> since the low priority thread still owns the file system instance mutex it
>>> continues to execute with the high priority (the high priority thread is not
>>> scheduled)"
>>>
>>> Instead --> after releasing the allocator lock, the low priority thread's
>>> priority will no longer be high, since the priority is restored on mutex
>>> release. This is the behavior with our implementation. So the low priority
>>> thread will be restored to its original priority after releasing the
>>> allocator lock.
>>>
>>> *I am not able to understand what we have to do in this ticket, i.e. what
>>> problem we are trying to solve (objective, current behavior).*
>>
>> The problem description refers to the current master without your patch.
>>
>>> 2. Our implementation is only for strictly ordered mutex behavior and will
>>> not work for unordered mutex operations. We will need a completely
>>> different implementation for unordered mutex operations.
>>
>> Unfortunately we have unordered mutex operations in the real world, e.g.
>>
>> https://sourceware.org/git/gitweb.cgi?p=newlib-cygwin.git;a=blob;f=newlib/libc/stdio/fclose.c;h=cd271c490b2652db71727c9af8a19a14f3b36336;hb=HEAD#l112
>>
>> This is why I suggested using one priority queue per thread that aggregates
>> all the priorities a thread has access to through its owned mutexes (directly
>> and indirectly). In addition, this avoids additional storage space in the
>> mutex object, which is important.
>>
>>> 3. Our SMP implementation uses recursive mutex acquisition to support
>>> horizontal nesting. Will this be fine with RTEMS?
>>
>> There are several race conditions on SMP configurations in your code, e.g.
>>
>>      _Thread_queue_Release( &the_mutex->Wait_queue, lock_context );
>>      _Thread_Change_priority_UP( holder, the_mutex, executing->current_priority, false );
>>
>> will not work on SMP, since the holder may have changed after the
>> _Thread_queue_Release().
>>
>> A recursive SMP lock acquire is a bit problematic, since you need a context
>> for each acquire/release pair. There are some options to solve this problem,
>> but we have to make some trade-offs. A recursive mutex acquire may lead to
>> deadlock, so we must take care that we don't get deadlocks, since a deadlock
>> would result in an infinite loop with interrupts disabled.
>>
>>> Note: *Cyrille*, please mention any other doubts if I have missed any.
>>>
>>> Thanks,
>>>
>>> Saurabh Gadia
>>>
>>> On Sun, Sep 6, 2015 at 11:04 PM, Sebastian Huber
>>> <sebastian.huber at embedded-brains.de> wrote:
>>>
>>>      Hello Saurabh,
>>>
>>>      On 05/09/15 01:52, Saurabh Gadia wrote:
>>>
>>>          This is the patch for solving the priority inversion problem for
>>>          uniprocessor architectures. It works correctly for all test
>>>          cases on master. For 4.11 the patch applies cleanly, but
>>>          the branch does not compile because of some rbtree
>>>          error (unrelated to the project). Github link:
>>>          https://github.com/saurabhgadia4/rtems/tree/nested-mutex
>>>
>>>
>>>      I reviewed your patch. Basically you replaced the LIFO list of
>>>      priorities with a sorted list? Does it work with timeouts and
>>>      external priority changes (e.g. task A blocks on a mutex owned by
>>>      O, another task B raises the priority of A, will this raise the
>>>      priority of O?)
>>>
>>>      Since all tests pass, test sptests/spsem03 passes as well, which shows
>>>      that your implementation doesn't support horizontal nesting.
>>>
>>>      There is no deadlock detection.
>>>
>>>      Please have a look at:
>>>
>>>      https://devel.rtems.org/ticket/2412
>>>
>>>      I think the suggested implementation would even work on SMP
>>>      systems quite well.
>>>
>>>      --     Sebastian Huber, embedded brains GmbH
>>>
>>>      Address : Dornierstr. 4, D-82178 Puchheim, Germany
>>>      Phone   : +49 89 189 47 41-16
>>>      Fax     : +49 89 189 47 41-09
>>>      E-Mail  : sebastian.huber at embedded-brains.de
>>>      PGP     : Public key available on request.
>>>
>>>      This message is not a business communication within the meaning of the EHUG.
>>>
>>>
>
>

-- 
Sebastian Huber, embedded brains GmbH




More information about the devel mailing list