Is there a data race problem while acquiring a mutex with priority inheritance protocol

Saurabh Gadia gadia at usc.edu
Fri Jul 24 02:41:47 UTC 2015


#if defined(RTEMS_SMP)
RTEMS_INLINE_ROUTINE void _Thread_Lock_restore_default(
  Thread_Control *the_thread
)
{
  _Atomic_Fence( ATOMIC_ORDER_RELEASE );

  _Thread_Lock_set_unprotected( the_thread, &the_thread->Lock.Default );
}

What does this atomic fence do when we set Lock.current = Default in
threadimpl.h?
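For context, the fence here follows the standard release-publication pattern: all writes made while holding the lock must become visible before the store that switches Lock.current back to the default lock. A minimal C11 analogy, with entirely hypothetical names and plain pthreads (not RTEMS code), might look like:

```c
#include <pthread.h>
#include <stdatomic.h>

/* Hypothetical stand-alone analogy of the release-fence publication pattern;
 * none of these names are RTEMS code. */

static int payload;              /* plain data, published via the flag below */
static atomic_int published;     /* publication flag                         */

static void *writer(void *arg) {
    (void)arg;
    payload = 42;                               /* writes before the fence...*/
    atomic_thread_fence(memory_order_release);  /* ...are ordered before...  */
    atomic_store_explicit(&published, 1,        /* ...the publishing store   */
                          memory_order_relaxed);
    return NULL;
}

static void *reader(void *arg) {
    (void)arg;
    /* The acquire load pairs with the release fence: once the flag is seen,
     * the write to payload is guaranteed to be visible as well. */
    while (atomic_load_explicit(&published, memory_order_acquire) == 0)
        ;                                       /* spin until published      */
    return NULL;
}

/* Run writer and reader concurrently; return the payload value afterwards. */
int run_publication_demo(void) {
    pthread_t w, r;
    payload = 0;
    atomic_store(&published, 0);
    pthread_create(&r, NULL, reader, NULL);
    pthread_create(&w, NULL, writer, NULL);
    pthread_join(w, NULL);
    pthread_join(r, NULL);
    return payload;
}
```

By the same C11 rule (a release fence sequenced before an atomic store synchronizes with an acquire load that reads that store), a thread that later observes Lock.current == &Lock.Default also observes every write made before the fence.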

Thanks,

Saurabh Gadia

On Thu, Jul 23, 2015 at 7:38 PM, Saurabh Gadia <gadia at usc.edu> wrote:

> Yes. I guess I was right about the data race problem we are facing in our
> model. It is also present in RTEMS:
>     _Thread_queue_Extract_locked( &the_mutex->Wait_queue, the_thread );
> at line 188 in _CORE_mutex_Surrender() in coremutexsurrender.c, where it sets
> holder.Lock->current = holder.Lock->Default. If you get time, please have a
> look at it.
>
> Thanks,
>
> Saurabh Gadia
>
> On Thu, Jul 23, 2015 at 7:26 PM, Saurabh Gadia <gadia at usc.edu> wrote:
>
>> Actually, the same data race problem might now exist in RTEMS, i.e.
>> when holder != NULL we acquire holder.Lock->current in the 2nd code snippet:
>> lock = _Thread_Lock_acquire( the_thread, &lock_context );
>> but if the mutex the holder is waiting on gets assigned to the holder, then
>> the holder's current lock changes back to the default lock! Let's check that.
>>
>> Thanks,
>>
>> Saurabh Gadia
>>
>> On Thu, Jul 23, 2015 at 7:21 PM, Saurabh Gadia <gadia at usc.edu> wrote:
>>
>>> So based on this structure:
>>> typedef struct {
>>>   /**
>>>    * @brief The current thread lock.
>>>    */
>>>   ISR_lock_Control *current;
>>>
>>>   /**
>>>    * @brief The default thread lock in case the thread is not blocked on a
>>>    * resource.
>>>    */
>>>   ISR_lock_Control Default;
>>>
>>>   /**
>>>    * @brief Generation number to invalidate stale locks.
>>>    */
>>>   Atomic_Uint generation;
>>> } Thread_Lock_control;
>>>
>>> present in the TCB. Having holder.trylock corresponds to current in the
>>> above structure while the holder is waiting. So we need to check what RTEMS
>>> does to avoid a data race on the current member when it becomes Default
>>> after the holder gets access to the resource it was waiting on.
>>>
>>> Thanks,
>>>
>>> Saurabh Gadia
>>>
>>> On Thu, Jul 23, 2015 at 7:16 PM, Saurabh Gadia <gadia at usc.edu> wrote:
>>>
>>>> As per thread initialization in threadinitialize.c, we should acquire the
>>>> default lock, i.e. the_thread->Lock.Default. Am I right?
>>>>
>>>> Thanks,
>>>>
>>>> Saurabh Gadia
>>>>
>>>> On Thu, Jul 23, 2015 at 6:58 PM, Saurabh Gadia <gadia at usc.edu> wrote:
>>>>
>>>>> Basically, every time we try to acquire a mutex there should be a lock
>>>>> acquisition on the executing thread.
>>>>>
>>>>> Thanks,
>>>>>
>>>>> Saurabh Gadia
>>>>>
>>>>> On Thu, Jul 23, 2015 at 6:47 PM, Saurabh Gadia <gadia at usc.edu> wrote:
>>>>>
>>>>>> hi,
>>>>>>
>>>>>>
>>>>>> Scenario:
>>>>>> thread t1: current_priority = 5, mutex acquired = m1, m2
>>>>>> thread t2: current_priority = 3, mutex_acquired = None
>>>>>>
>>>>>> flow: on SMP, thread t1 tries to acquire mutex m3 while thread t2
>>>>>> simultaneously tries to acquire mutex m1, which is already held by
>>>>>> thread t1.
>>>>>>
>>>>>> Action:
>>>>>> thread t1 finds that m3->holder == NULL, so the following code snippet
>>>>>> from coremuteximpl.h executes on one processor:
>>>>>>
>>>>>> /////
>>>>>> RTEMS_INLINE_ROUTINE int _CORE_mutex_Seize_interrupt_trylock_body(
>>>>>>   CORE_mutex_Control  *the_mutex,
>>>>>>   Thread_Control      *executing,
>>>>>>   ISR_lock_Context    *lock_context
>>>>>> )
>>>>>> {
>>>>>>   /* disabled when you get here */
>>>>>>
>>>>>>   executing->Wait.return_code = CORE_MUTEX_STATUS_SUCCESSFUL;
>>>>>>   if ( !_CORE_mutex_Is_locked( the_mutex ) ) {
>>>>>>     the_mutex->holder     = executing;
>>>>>>     the_mutex->nest_count = 1;
>>>>>>     if ( _CORE_mutex_Is_inherit_priority( &the_mutex->Attributes ) ||
>>>>>>          _CORE_mutex_Is_priority_ceiling( &the_mutex->Attributes ) ){
>>>>>>
>>>>>> #ifdef __RTEMS_STRICT_ORDER_MUTEX__
>>>>>> /* Doesn't this lead to a data race, if the executing thread is the
>>>>>>  * holder of some other mutex and its priority is promoted by another
>>>>>>  * thread?
>>>>>>  */
>>>>>>        _Chain_Prepend_unprotected( &executing->lock_mutex,
>>>>>>                                    &the_mutex->queue.lock_queue );
>>>>>>        the_mutex->queue.priority_before = executing->current_priority;
>>>>>> #endif
>>>>>>
>>>>>>       executing->resource_count++;
>>>>>>     }
>>>>>>
>>>>>>     if ( !_CORE_mutex_Is_priority_ceiling( &the_mutex->Attributes ) ) {
>>>>>>       _Thread_queue_Release( &the_mutex->Wait_queue, lock_context );
>>>>>>       return 0;
>>>>>>     }
>>>>>>
>>>>>>
>>>>>> //////
>>>>>>
>>>>>> And thread t2 tries to acquire m1, so the following snippet from
>>>>>> threadchangepriority.c executes:
>>>>>>
>>>>>> ///////  (the_thread is the holder of mutex m1, and new_priority ==
>>>>>> priority of thread t2 == 3)
>>>>>>  lock = _Thread_Lock_acquire( the_thread, &lock_context );
>>>>>>
>>>>>>   /*
>>>>>>    * For simplicity set the priority restore hint unconditionally since
>>>>>>    * this is an average case optimization.  Otherwise complicated atomic
>>>>>>    * operations would be necessary.  Synchronize with a potential read
>>>>>>    * of the resource count in the filter function.  See also
>>>>>>    * _CORE_mutex_Surrender(), _Thread_Set_priority_filter() and
>>>>>>    * _Thread_Restore_priority_filter().
>>>>>>    */
>>>>>>   the_thread->priority_restore_hint = true;
>>>>>>   _Atomic_Fence( ATOMIC_ORDER_ACQ_REL );
>>>>>>
>>>>>>   /*
>>>>>>    *  Do not bother recomputing all the priority related information if
>>>>>>    *  we are not REALLY changing priority.
>>>>>>    */
>>>>>>   if ( ( *filter )( the_thread, &new_priority, arg ) ) {
>>>>>>     uint32_t my_generation;
>>>>>>
>>>>>>     my_generation = the_thread->priority_generation + 1;
>>>>>>     the_thread->current_priority = new_priority;
>>>>>>     the_thread->priority_generation = my_generation;
>>>>>>
>>>>>>     ( *the_thread->Wait.operations->priority_change )(
>>>>>>       the_thread,
>>>>>>       new_priority,
>>>>>>       the_thread->Wait.queue
>>>>>>     );
>>>>>> ////
>>>>>>
>>>>>> So, can the interleaving of the highlighted code result in a data race?
>>>>>> From my perspective, and from the JPF model experience, this is a data
>>>>>> race, and we need to lock both the executing thread and the holder
>>>>>> thread (if one exists for the mutex) while acquiring any mutex.
>>>>>> For the 2nd case, when there is already a holder, we do acquire the
>>>>>> holder's lock by calling lock = _Thread_Lock_acquire( the_thread,
>>>>>> &lock_context ); in the 2nd snippet, but we should do the same on the
>>>>>> executing thread when holder == NULL in the 1st code snippet.
>>>>>> Am I right? Or is there something I am missing?
>>>>>>
>>>>>> Thanks,
>>>>>>
>>>>>> Saurabh Gadia
>>>>>>
>>>>>
>>>>>
>>>>
>>>
>>
>
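The generation member quoted above ("Generation number to invalidate stale locks") suggests the retry pattern that addresses exactly this race: read the generation, acquire the lock the current pointer names, then re-check the generation and retry if it changed underneath you. A minimal sketch, using pthread mutexes as stand-ins and entirely hypothetical names (not the actual RTEMS implementation):

```c
#include <pthread.h>
#include <stdatomic.h>
#include <stddef.h>

/* Hypothetical miniature of the Thread_Lock_control pattern: the thread's
 * effective lock can be switched (e.g. back to Default), and a generation
 * counter lets an acquirer detect that the pointer it read went stale. */
typedef struct {
    pthread_mutex_t *_Atomic current;  /* lock currently protecting the thread */
    pthread_mutex_t default_lock;      /* used when not blocked on a resource  */
    atomic_uint generation;            /* bumped on every switch of current    */
} mini_thread_lock;

void mini_lock_init(mini_thread_lock *tl) {
    pthread_mutex_init(&tl->default_lock, NULL);
    atomic_store(&tl->current, &tl->default_lock);
    atomic_store(&tl->generation, 0u);
}

/* Switch back to the default lock. Bumping the generation first (with
 * release ordering) invalidates any pointer read before the switch. */
void mini_lock_restore_default(mini_thread_lock *tl) {
    atomic_fetch_add_explicit(&tl->generation, 1u, memory_order_release);
    atomic_store_explicit(&tl->current, &tl->default_lock,
                          memory_order_release);
}

/* Acquire whatever lock currently protects the thread, retrying whenever
 * the pointer went stale between the read and the acquisition. */
pthread_mutex_t *mini_lock_acquire(mini_thread_lock *tl) {
    for (;;) {
        unsigned gen =
            atomic_load_explicit(&tl->generation, memory_order_acquire);
        pthread_mutex_t *lock =
            atomic_load_explicit(&tl->current, memory_order_acquire);
        pthread_mutex_lock(lock);
        if (atomic_load_explicit(&tl->generation, memory_order_acquire) == gen)
            return lock;            /* pointer was still valid: we hold it   */
        pthread_mutex_unlock(lock); /* stale: lock was switched; retry       */
    }
}
```

Under this sketch, if the restore path bumps the generation before switching current back to Default, a thread that blocked on the stale lock notices the change after acquiring it, drops it, and retries against the new current lock instead of proceeding with the wrong one.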