Is there a data race problem while acquiring a mutex with priority inheritance protocol
Saurabh Gadia
gadia at usc.edu
Fri Jul 24 01:58:02 UTC 2015
Basically, every time we try to acquire a mutex, there should be a lock
acquisition on the executing thread.
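
Concretely, something like the following in the strict-order branch of
the first snippet below. This is an untested sketch: I am assuming that
_Thread_Lock_release() is the release counterpart of the
_Thread_Lock_acquire() call that appears in the second snippet.

/////
#ifdef __RTEMS_STRICT_ORDER_MUTEX__
  {
    ISR_lock_Control *lock;
    ISR_lock_Context  thread_lock_context;

    /* Hold the executing thread's lock across the snapshot of
     * current_priority, so that a concurrent _Thread_Change_priority()
     * on another processor cannot interleave with the read.
     */
    lock = _Thread_Lock_acquire( executing, &thread_lock_context );
    _Chain_Prepend_unprotected( &executing->lock_mutex,
                                &the_mutex->queue.lock_queue );
    the_mutex->queue.priority_before = executing->current_priority;
    _Thread_Lock_release( lock, &thread_lock_context );
  }
#endif
/////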
Thanks,
Saurabh Gadia
On Thu, Jul 23, 2015 at 6:47 PM, Saurabh Gadia <gadia at usc.edu> wrote:
> hi,
>
>
> Scenario:
> thread t1: current_priority = 5, mutex acquired = m1, m2
> thread t2: current_priority = 3, mutex_acquired = None
>
> Flow: on SMP, thread t1 tries to acquire mutex m3 while, simultaneously,
> thread t2 tries to acquire mutex m1, which is already held by thread t1.
>
> Action:
> Thread t1 finds that m3->holder == NULL, so the following code snippet
> from coremuteximpl.h executes on one processor:
>
> /////
> RTEMS_INLINE_ROUTINE int _CORE_mutex_Seize_interrupt_trylock_body(
>   CORE_mutex_Control *the_mutex,
>   Thread_Control     *executing,
>   ISR_lock_Context   *lock_context
> )
> {
>   /* disabled when you get here */
>
>   executing->Wait.return_code = CORE_MUTEX_STATUS_SUCCESSFUL;
>   if ( !_CORE_mutex_Is_locked( the_mutex ) ) {
>     the_mutex->holder = executing;
>     the_mutex->nest_count = 1;
>     if ( _CORE_mutex_Is_inherit_priority( &the_mutex->Attributes ) ||
>          _CORE_mutex_Is_priority_ceiling( &the_mutex->Attributes ) ) {
>
> #ifdef __RTEMS_STRICT_ORDER_MUTEX__
>       /* Doesn't this lead to a data race if the executing thread is the
>        * holder of some other mutex and its priority is promoted by
>        * another thread?
>        */
>       _Chain_Prepend_unprotected( &executing->lock_mutex,
>                                   &the_mutex->queue.lock_queue );
>       *the_mutex->queue.priority_before = executing->current_priority;*
> #endif
>
>       executing->resource_count++;
>     }
>
>     if ( !_CORE_mutex_Is_priority_ceiling( &the_mutex->Attributes ) ) {
>       _Thread_queue_Release( &the_mutex->Wait_queue, lock_context );
>       return 0;
>     }
> /////
>
> Meanwhile, thread t2 tries to acquire m1, and the following snippet from
> threadchangepriority.c executes:
>
> /////// (the_thread is the holder of mutex m1; new_priority == priority of thread t2 == 3)
> lock = _Thread_Lock_acquire( the_thread, &lock_context );
>
> /*
>  * For simplicity set the priority restore hint unconditionally since this
>  * is an average case optimization. Otherwise complicated atomic operations
>  * would be necessary. Synchronize with a potential read of the resource
>  * count in the filter function. See also _CORE_mutex_Surrender(),
>  * _Thread_Set_priority_filter() and _Thread_Restore_priority_filter().
>  */
> the_thread->priority_restore_hint = true;
> _Atomic_Fence( ATOMIC_ORDER_ACQ_REL );
>
> /*
>  * Do not bother recomputing all the priority related information if
>  * we are not REALLY changing priority.
>  */
> if ( ( *filter )( the_thread, &new_priority, arg ) ) {
>   uint32_t my_generation;
>
>   my_generation = the_thread->priority_generation + 1;
>   *the_thread->current_priority = new_priority;*
>   the_thread->priority_generation = my_generation;
>
>   ( *the_thread->Wait.operations->priority_change )(
>     the_thread,
>     new_priority,
>     the_thread->Wait.queue
>   );
> ///////
>
> So, can the interleaving of the two highlighted statements result in a
> data race? From my perspective, and from my experience with the JPF
> model, this is a data race: while acquiring any mutex, we need locking
> on the executing thread and, if it exists, on the holder thread.
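>
> To make the interleaving concrete, here is a standalone model in plain
> pthreads (not RTEMS code; all of the names here are made up for
> illustration). One thread plays t1 and snapshots the priority with no
> lock held, as in the strict-order trylock path, and the other plays t2
> and updates it under a lock, as in _Thread_Change_priority(). Building
> with -fsanitize=thread flags the unlocked read as a data race:
>
> /////
> #include <pthread.h>
> #include <stdio.h>
>
> static pthread_mutex_t thread_lock = PTHREAD_MUTEX_INITIALIZER;
> static int current_priority = 5; /* t1's priority */
> static int priority_before;     /* what the trylock path records */
>
> /* t1 seizing m3: unlocked read of the holder's current priority */
> static void *seize_m3( void *arg )
> {
>   priority_before = current_priority;
>   return NULL;
> }
>
> /* t2 blocking on m1: locked write promoting the holder */
> static void *block_on_m1( void *arg )
> {
>   pthread_mutex_lock( &thread_lock );
>   current_priority = 3; /* t1 inherits t2's priority */
>   pthread_mutex_unlock( &thread_lock );
>   return NULL;
> }
>
> int main( void )
> {
>   pthread_t t1, t2;
>
>   pthread_create( &t1, NULL, seize_m3, NULL );
>   pthread_create( &t2, NULL, block_on_m1, NULL );
>   pthread_join( t1, NULL );
>   pthread_join( t2, NULL );
>   printf( "priority_before = %d\n", priority_before ); /* 5 or 3 */
>   return 0;
> }
> /////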
> In the second case, where there is already a holder, we do acquire the
> holder's lock by calling *lock = _Thread_Lock_acquire( the_thread,
> &lock_context );* in the second snippet, but we should do the same on
> the executing thread when holder == NULL in the first snippet.
> Am I right? Or is there something I am missing?
>
> Thanks,
>
> Saurabh Gadia
>