Patch for nested mutex project solving priority inversion problem for uniprocessor.

Sebastian Huber sebastian.huber at embedded-brains.de
Fri Sep 11 07:18:13 UTC 2015


Hello Saurabh,

On 11/09/15 02:14, Saurabh Gadia wrote:
> Hi Sebastian,
>
> Sorry for the late reply. I was out of town and could not respond. I am
> a bit confused by the above description and need a few points clarified:
>
> 1. Replaced LIFO with sorted list: I don't change the mutex order to
> make the list sorted. Instead I promote the priorities, so we don't
> need to change the position of any mutex in the chain; the list then
> becomes sorted and valid without swapping any mutexes in the chain.
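
A rough sketch of this promotion idea, purely for illustration (the types,
fields and helper below are placeholders, not the actual RTEMS structures
or the patch code):

    /* Promote priorities along the chain of holders so the per-thread
     * list of held mutexes stays consistent without reordering it. */
    typedef struct thread thread_t;
    typedef struct mutex  mutex_t;

    struct mutex {
      thread_t *holder;           /* current owner, NULL if free */
    };

    struct thread {
      int      current_priority;  /* lower number = higher priority */
      mutex_t *blocked_on;        /* mutex this thread waits for, or NULL */
    };

    static void promote_holders( thread_t *waiter )
    {
      int      priority = waiter->current_priority;
      mutex_t *m        = waiter->blocked_on;

      while ( m != NULL && m->holder != NULL
              && m->holder->current_priority > priority ) {
        m->holder->current_priority = priority;  /* promote in place */
        m = m->holder->blocked_on;               /* follow the chain */
      }
    }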
>
>
> 2. I was confused by the spsem03 test case, so I created the spsem04
> test case, which demonstrates that our implementation supports
> horizontal nesting. It has the same scenario as test case 3, but now we
> can say that horizontal nesting works correctly. You can find the full
> description of this test case at the link below.
>
> https://docs.google.com/document/d/1RM427RKSoyrZrchEcDIx4etmtWqrFArzRRLWqcU8RK0/edit?usp=sharing

Sorry, I should have looked more carefully at your code. I missed the
direct recursion in _Thread_Update_Priority_UP() during my first review.
An unbounded direct or indirect recursion at this level is absolutely
unacceptable, since it may corrupt the thread stack. This is not a real
problem, however, since it would be easy to change this into an iteration.
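
For illustration only, a propagation of this shape can be turned from a
recursion into a loop with constant stack usage (the names below are
placeholders, not the patch's actual identifiers):

    typedef struct thread thread_t;

    struct thread {
      int       current_priority;  /* lower number = higher priority */
      thread_t *next_holder;       /* next thread to propagate to, or NULL */
    };

    /* Instead of update( thread ) calling update( thread->next_holder ),
     * walk the chain iteratively until no further promotion is needed. */
    static void update_priority_iterative( thread_t *thread, int new_priority )
    {
      while ( thread != NULL && thread->current_priority > new_priority ) {
        thread->current_priority = new_priority;
        thread = thread->next_holder;   /* continue instead of recursing */
      }
    }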

So the basic structure of your solution is similar to what I suggested 
in the ticket.

>
> 3. The example you mentioned is definitely supported by our
> implementation.
>
> I also need clarification on ticket 2412:
>
> 1. Is there something wrong with the problem description?
>     It says: "The allocator mutex is already owned, so the priority 
> of the low priority thread is raised to the priority of the high 
> priority thread. The memory allocation completes and the allocator 
> mutex is released, since the low priority thread still owns the file 
> system instance mutex it continues to execute with the high priority 
> (the high priority thread is not scheduled)"
>
> Instead: after releasing the allocator lock, the low priority thread's
> priority will no longer be high, because priorities are restored on
> mutex release. This is the behavior with our implementation, so the low
> priority thread is restored to its original priority after releasing
> the allocator lock.
>
> I am not able to see what we have to do in this ticket, i.e. what
> problem we are trying to solve (objective, current behavior).

The problem description refers to the current master without your patch.

> 2. Our implementation is only for strict-order mutex behavior and will
> not work for unordered mutex operations. We would need a completely
> different implementation for unordered mutex operations.

Unfortunately we have unordered mutex operations in the real world, e.g.

https://sourceware.org/git/gitweb.cgi?p=newlib-cygwin.git;a=blob;f=newlib/libc/stdio/fclose.c;h=cd271c490b2652db71727c9af8a19a14f3b36336;hb=HEAD#l112

This is why I suggested using one priority queue per thread that 
aggregates all the priorities a thread has access to due to its owned 
mutexes (directly and indirectly). In addition, this avoids extra 
storage space in the mutex object, which is important.
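
A minimal sketch of this aggregation idea, with purely illustrative types
(a real implementation would use a proper priority queue, e.g. an RB tree,
instead of a fixed array):

    #define MAX_CONTRIBUTIONS 8

    typedef struct {
      int base_priority;                      /* priority set by the user  */
      int contribution[ MAX_CONTRIBUTIONS ];  /* one entry per owned mutex */
      int contribution_count;
    } thread_t;

    /* Effective priority: the minimum (= highest) of the base priority
     * and all contributions the thread currently has access to. */
    static int effective_priority( const thread_t *t )
    {
      int p = t->base_priority;

      for ( int i = 0; i < t->contribution_count; ++i ) {
        if ( t->contribution[ i ] < p ) {
          p = t->contribution[ i ];
        }
      }

      return p;
    }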

>
> 3. Our SMP implementation uses recursive mutex acquisition to support
> horizontal nesting. Will that be acceptable for RTEMS?

There are several race conditions on SMP configurations in your code, e.g.

     _Thread_queue_Release( &the_mutex->Wait_queue, lock_context );
     _Thread_Change_priority_UP( holder, the_mutex,
       executing->current_priority, false );

will not work on SMP, since the holder may have changed after the 
_Thread_queue_Release().
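
To make the ordering issue concrete, here is an illustrative sketch (the
lock, types and helpers are placeholders, not the RTEMS API):

    typedef struct { int dummy; } lock_t;
    typedef struct thread thread_t;

    typedef struct {
      lock_t    wait_queue_lock;
      thread_t *holder;
    } mutex_t;

    /* assumed primitives, prototypes only */
    void lock_acquire( lock_t *lock );
    void lock_release( lock_t *lock );
    void raise_priority( thread_t *thread, int priority );

    /* Racy: the holder pointer is used after the lock is dropped, so on
     * SMP the mutex may already have a different (or no) owner. */
    static void boost_holder_racy( mutex_t *m, int priority )
    {
      thread_t *holder = m->holder;   /* read while the lock is held */
      lock_release( &m->wait_queue_lock );
      raise_priority( holder, priority );   /* holder may be stale here */
    }

    /* Safer: read and update the holder while the lock is still held. */
    static void boost_holder_locked( mutex_t *m, int priority )
    {
      lock_acquire( &m->wait_queue_lock );
      if ( m->holder != NULL ) {
        raise_priority( m->holder, priority );
      }
      lock_release( &m->wait_queue_lock );
    }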

A recursive SMP lock acquire is a bit problematic, since you need a 
context for each acquire/release pair. There are some options to solve 
this problem, but we have to make some trade-offs. Recursive mutex 
acquisition may also lead to deadlocks, so we must take care to avoid 
them, since a deadlock here would result in an infinite loop with 
interrupts disabled.
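
One common shape for a recursive acquire is an owner field plus a nesting
counter, sketched below with illustrative names; note that this sidesteps
the RTEMS ISR lock context question entirely and is not the actual SMP
lock API:

    #include <stdint.h>
    #include <stdatomic.h>

    typedef struct {
      atomic_uintptr_t owner;    /* owning thread id, 0 if free          */
      unsigned         nesting;  /* recursion depth, valid for the owner */
    } rec_lock_t;

    /* assumed primitives, prototypes only */
    void plain_lock_acquire( void );
    void plain_lock_release( void );

    static void rec_lock_acquire( rec_lock_t *l, uintptr_t self )
    {
      if ( atomic_load_explicit( &l->owner, memory_order_relaxed ) == self ) {
        ++l->nesting;              /* already owned: just count the nesting */
        return;
      }

      plain_lock_acquire();
      atomic_store_explicit( &l->owner, self, memory_order_relaxed );
      l->nesting = 1;
    }

    static void rec_lock_release( rec_lock_t *l )
    {
      if ( --l->nesting == 0 ) {
        atomic_store_explicit( &l->owner, 0, memory_order_relaxed );
        plain_lock_release();
      }
    }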

>
> Note: Cyrille, please mention any other doubts if I have missed any.
>
> Thanks,
>
> Saurabh Gadia
>
> On Sun, Sep 6, 2015 at 11:04 PM, Sebastian Huber
> <sebastian.huber at embedded-brains.de> wrote:
>
>     Hello Saurabh,
>
>     On 05/09/15 01:52, Saurabh Gadia wrote:
>
>         This is the patch solving the priority inversion problem for
>         uniprocessor architectures. It works correctly for all test
>         cases on master. For 4.11 the patch applies cleanly, but the
>         branch does not compile because of an rbtree error (unrelated
>         to the project). Github link:
>         https://github.com/saurabhgadia4/rtems/tree/nested-mutex
>
>
>     I reviewed your patch. Basically you replaced the LIFO list of
>     priorities with a sorted list? Does it work with timeouts and
>     external priority changes (e.g. task A blocks on a mutex owned by
>     O, another task B raises the priority of A, will this raise the
>     priority of O?)
>
>     Since all tests pass, sptests/spsem03 passes as well, which shows
>     that your implementation doesn't support horizontal nesting.
>
>     There is no deadlock detection.
>
>     Please have a look at:
>
>     https://devel.rtems.org/ticket/2412
>
>     I think the suggested implementation would even work on SMP
>     systems quite well.
>

-- 
Sebastian Huber, embedded brains GmbH

Address : Dornierstr. 4, D-82178 Puchheim, Germany
Phone   : +49 89 189 47 41-16
Fax     : +49 89 189 47 41-09
E-Mail  : sebastian.huber at embedded-brains.de
PGP     : Public key available on request.

This message is not a business communication within the meaning of the EHUG.



