Patch for nested mutex project solving the priority inversion problem for uniprocessors.

Sebastian Huber sebastian.huber at embedded-brains.de
Fri Sep 11 07:26:33 UTC 2015


On 11/09/15 02:25, Cyrille Artho wrote:
> Dear all,
> Please let me try to clarify:
>
> 1. Our understanding so far was that upon lock release, the priority
> instantly reverts to the previous priority. This also
> applies to nested locks, i.e., a raised priority is reverted
> (stepwise) as locks are released and does NOT persist until the last
> lock is released.

Yes.
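
As a minimal sketch of these semantics (simplified, with hypothetical names, not the actual RTEMS code), the release path restores the priority recorded when the mutex was acquired instead of waiting for the outermost release:

#include <stddef.h>

/* Hypothetical, simplified types for illustration only. */
typedef struct {
  int priority;                 /* lower value = higher priority */
} task_t;

typedef struct {
  task_t *owner;
  int restore_priority;         /* owner's priority before acquiring */
} mutex_t;

void mutex_acquire(mutex_t *m, task_t *self)
{
  /* Blocking and priority inheritance omitted for brevity. */
  m->owner = self;
  m->restore_priority = self->priority;
}

void mutex_release(mutex_t *m)
{
  task_t *self = m->owner;

  m->owner = NULL;
  /* Stepwise restore: drop back to the priority recorded at acquire,
   * even if outer mutexes are still held. */
  self->priority = m->restore_priority;
}

With strictly ordered (LIFO) acquire/release this is sufficient; the unordered case discussed below is what makes it harder.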

>
> 2. The current implementation relies on ordering but could perhaps be
> changed, although this would require some time. The time needed to
> verify the model would also increase substantially because unordered
> locking creates a lot of new possibilities. (It could take several
> days to verify a reasonable subset of all behaviors on multiple
> cores.)

Unfortunately we have unordered mutex usage, e.g. in Newlib.
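
For illustration, the pattern in question looks roughly like this (plain POSIX mutexes are used here only as a stand-in for the RTEMS objects):

#include <pthread.h>

static pthread_mutex_t a = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t b = PTHREAD_MUTEX_INITIALIZER;

int main(void)
{
  pthread_mutex_lock(&a);
  pthread_mutex_lock(&b);
  pthread_mutex_unlock(&a);   /* a is released before b: not LIFO order */
  pthread_mutex_unlock(&b);
  return 0;
}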

>
> 3. We have a working SMP implementation but it is very complex. It
> either has to use a lot of different locks or a global lock for some
> of the code. We were not able to eliminate any locks (the model shows
> that data races result if we try), so the current algorithm seems to
> be very complex for a multi-processor setting.

I think with the thread lock it will be moderately easy to implement. 
It certainly cannot be done in two weeks, though.

>
> Other points:
>
> 4. We could add deadlock detection but it does NOT automatically come
> with horizontal nesting (acquiring multiple locks). When acquiring
> multiple locks, we check the availability of the next lock, but we do
> not go down the entire lock chain. This could be implemented but would
> cause additional overhead.

The general-case deadlock detection is optional from my point of view, 
but at the operating system level (SMP locks) there must not be a deadlock.
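
A general lock-chain walk could look roughly like this (a sketch with hypothetical names, not an existing RTEMS API): starting from the mutex we want to acquire, follow owner -> mutex-the-owner-waits-for, and report a deadlock if the walk leads back to the acquiring task.

#include <stdbool.h>
#include <stddef.h>

typedef struct mutex mutex_t;
typedef struct task task_t;

struct task {
  mutex_t *waiting_for;        /* mutex this task is blocked on, or NULL */
};

struct mutex {
  task_t *owner;               /* current owner, or NULL if free */
};

bool would_deadlock(const mutex_t *wanted, const task_t *self)
{
  const mutex_t *m = wanted;

  while (m != NULL && m->owner != NULL) {
    if (m->owner == self)
      return true;             /* the chain leads back to us: a cycle */
    m = m->owner->waiting_for;
  }

  return false;
}

This is the additional overhead mentioned above: every blocking acquire has to walk the chain instead of checking only the next lock.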

>
> 5. Timeout is currently not supported but could be implemented
> relatively easily, although it would again take time to verify the
> change.

Timeouts may complicate things a lot. I tried to get rid of them, 
but they were desired by the community.

>
> 6. We could try to build a specialized data race detector for RTEMS, to
> show a data race in a version that uses too few locks. However,
> building a data race detector that gives useful diagnostic information
> takes some time. Adding data race detection that just outputs "data
> race" when the common lock set is empty is easier.
>
> On Fri, Sep 11, 2015 at 9:14 AM, Saurabh Gadia <gadia at usc.edu> wrote:
>> Hi Sebastian,
>>
>> Sorry for the late reply. I was out of town and could not respond. I am a bit
>> confused by the description above and need to get a few points clarified:
>>
>> 1. Replacing the LIFO with a sorted list: I don't change the mutex order to make
>> the list sorted. Instead, I promote the priorities so that we don't need to
>> change the position of any mutex in the chain; the list becomes sorted and
>> valid without swapping any mutexes in the chain.
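
If I understand this correctly, the idea is roughly the following (a sketch with hypothetical field names, not the actual patch): when a higher priority waiter blocks on an already held mutex, the restore priorities recorded in the mutexes acquired after it are promoted, so no list node has to be moved.

#include <stddef.h>

typedef struct mutex mutex_t;

typedef struct {
  int priority;                /* lower value = higher priority */
  mutex_t *held_head;          /* held mutexes, most recently acquired first */
} task_t;

struct mutex {
  int restore_priority;        /* owner's priority recorded at acquire */
  mutex_t *next_held;          /* next (earlier acquired) held mutex */
};

void promote_on_block(task_t *owner, mutex_t *blocked_on, int waiter_prio)
{
  mutex_t *m;

  /* Classic priority inheritance for the owner's current priority. */
  if (waiter_prio < owner->priority)
    owner->priority = waiter_prio;

  /* Promote the recorded restore priorities of all mutexes acquired
   * after blocked_on, so releasing an inner mutex cannot drop the
   * owner below the inherited priority. */
  for (m = owner->held_head; m != NULL && m != blocked_on; m = m->next_held) {
    if (waiter_prio < m->restore_priority)
      m->restore_priority = waiter_prio;
  }
}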
>>
>>
>> 2. I was confused by the spsem03 test case, so I created the spsem04 test case,
>> which demonstrates that our implementation supports horizontal
>> nesting. It has the same scenario as spsem03, but now we can say
>> that horizontal nesting works correctly. You can find a full description of
>> this test case at the link below.
>>
>> https://docs.google.com/document/d/1RM427RKSoyrZrchEcDIx4etmtWqrFArzRRLWqcU8RK0/edit?usp=sharing
>>
>> 3. The example you mentioned is definitely supported by our implementation.
>>
>> I also need clarification on ticket 2412:
>>
>> 1. Is there something wrong with the problem description?
>>      It says: "The allocator mutex is already owned, so the priority of the
>> low priority thread is raised to the priority of the high priority thread.
>> The memory allocation completes and the allocator mutex is released, since
>> the low priority thread still owns the file system instance mutex it
>> continues to execute with the high priority (the high priority thread is not
>> scheduled)"
>>
>> Instead, after releasing the allocator lock, the low priority thread's
>> priority will no longer be high, because the priority is restored on
>> mutex release. This is the behavior with our implementation, so the low
>> priority thread will be restored to its original priority after releasing
>> the allocator lock.
>>
>> I am not able to understand what we have to do for this ticket, i.e., what
>> problem we are trying to solve (objective, current behavior).
>>
>> 2. Our implementation only covers strictly ordered mutex behavior and will not
>> work for unordered mutex operations. We would need a completely different
>> implementation for unordered mutex operations.
>>
>> 3. Our SMP implementation uses recursive mutex acquisition to support
>> horizontal nesting. Will that be acceptable for RTEMS?
>>
>> Note: Cyrille, please mention any other open questions if I have missed any.
>>
>> Thanks,
>>
>> Saurabh Gadia
>>
>> On Sun, Sep 6, 2015 at 11:04 PM, Sebastian Huber
>> <sebastian.huber at embedded-brains.de> wrote:
>>> Hello Saurabh,
>>>
>>> On 05/09/15 01:52, Saurabh Gadia wrote:
>>>> This is the patch that solves the priority inversion problem for uniprocessor
>>>> architectures. It works correctly for all test cases on master. For 4.11 the
>>>> patch applies cleanly, but the branch does not compile because of an rbtree
>>>> error (unrelated to the project). GitHub link:
>>>> https://github.com/saurabhgadia4/rtems/tree/nested-mutex
>>>
>>> I reviewed your patch. Basically you replaced the LIFO list of priorities
>>> with a sorted list? Does it work with timeouts and external priority changes
>>> (e.g. task A blocks on a mutex owned by O, another task B raises the
>>> priority of A, will this raise the priority of O?)
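
For reference, the external priority change case asked about here would, in a rough sketch (hypothetical names, not the actual patch), have to propagate along the wait chain:

#include <stddef.h>

typedef struct mutex mutex_t;
typedef struct task task_t;

struct task {
  int priority;                /* lower value = higher priority */
  mutex_t *waiting_for;        /* mutex the task is blocked on, or NULL */
};

struct mutex {
  task_t *owner;
};

/* Raising A's priority while A is blocked on a mutex owned by O should
 * also raise O (and O's blocker, and so on along the chain). */
void task_raise_priority(task_t *t, int new_prio)
{
  if (new_prio >= t->priority)
    return;                    /* not actually a raise */

  t->priority = new_prio;

  if (t->waiting_for != NULL && t->waiting_for->owner != NULL)
    task_raise_priority(t->waiting_for->owner, new_prio);
}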
>>>
>>> Since all tests pass, sptests/spsem03 passes as well, which shows that your
>>> implementation doesn't support horizontal nesting.
>>>
>>> There is no deadlock detection.
>>>
>>> Please have a look at:
>>>
>>> https://devel.rtems.org/ticket/2412
>>>
>>> I think the suggested implementation would even work on SMP systems quite
>>> well.
>>>
>
>

-- 
Sebastian Huber, embedded brains GmbH

Address : Dornierstr. 4, D-82178 Puchheim, Germany
Phone   : +49 89 189 47 41-16
Fax     : +49 89 189 47 41-09
E-Mail  : sebastian.huber at embedded-brains.de
PGP     : Public key available on request.

This message is not a business communication within the meaning of the EHUG.


