[Bug 1814] SMP race condition between stack free and dispatch

bugzilla-daemon at rtems.org
Wed Jun 15 17:35:42 UTC 2011


https://www.rtems.org/bugzilla/show_bug.cgi?id=1814

--- Comment #3 from Marta Rybczynska <marta.rybczynska at kalray.eu> 2011-06-15 12:35:40 CDT ---
(In reply to comment #2)
> (In reply to comment #1)
> > I think this patch is not quite right.
> > 
> > Thread_Close executes with dispatch disabled when it is called from
> > rtems_task_delete. Before freeing the stack, the thread is blocked by setting
> > its state to dormant, which eventually calls Scheduler_Block.
> > 
> > I don't see how the thread's stack is used between when it is closed and any
> > further dispatch.
> > 
> > Do you have a bug that is causing this behavior to be seen?
> 
> This is the code from rtems_task_delete(), so I lean toward agreeing with Gedare.
> 
>       _Thread_Close( the_information, the_thread );
> 
>       _RTEMS_tasks_Free( the_thread );
> 
>       _RTEMS_Unlock_allocator();
>       _Thread_Enable_dispatch();

I'm talking about *SMP*, guys. :)

We have processor 1 finishing a thread and processor 2 waiting on the
allocator mutex. As soon as processor 1 unlocks the allocator mutex, processor
2 grabs it and starts allocating memory, because allocation is protected
neither by ISR disable nor by the dispatch lock. It may allocate the stack of
the thread on processor 1. Of course, processor 2 may immediately start using
that memory, overwriting the values on processor 1's stack.

In the meantime, processor 1 arrives at _Thread_Dispatch, but it is still
executing on the already freed stack.
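
To make the interleaving concrete, here is a sketch of the window (the calls
on the left follow the rtems_task_delete() sequence quoted above; the steps
on the right are paraphrased, not exact allocator calls):

    CPU 1 (task deleting itself)         CPU 2 (blocked on allocator mutex)
    ----------------------------         ----------------------------------
    _Thread_Close( info, executing );
      ... the task's stack is freed ...
    _RTEMS_tasks_Free( executing );
    _RTEMS_Unlock_allocator();  ------>  acquires the allocator mutex
                                         allocates memory, which may be
                                           CPU 1's just-freed stack
                                         the new task starts writing to it
    _Thread_Enable_dispatch();
      _Thread_Dispatch();  <-- still executing on the freed,
                               now reused stack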

To Gedare: the actual situation where the bug showed itself used 3
processors; the memory got re-allocated as the stack of task 3, which even
managed to start before task 1 was finally descheduled...

> 
> Now on the other hand, looking at this pattern, I see that the lock allocator
> and _Thread_Get (implicit disable dispatching) are in the wrong order with the
> undoing at the other end of the method.
> 
> Lock allocator
>   thread get disable dispatch
>    ...
>   Unlock allocator
> Enable dispatch
> 
> That seems wrong but offhand, I don't know which side to change.  The
> side-effects of disabling dispatching before getting the allocator mutex
> worry me.
> 

I was looking at it seriously, because of the question "Why does it work on
non-SMP?" This is in fact explained in your comment in the function...
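
For reference, a minimal annotated sketch of why the same window is harmless
on a single processor (the sequence is the one from rtems_task_delete(); the
comments are mine):

    _RTEMS_Lock_allocator();
    the_thread = _Thread_Get( id, &location );    /* disables dispatching */
    ...
    _Thread_Close( the_information, the_thread ); /* the stack is freed here */
    _RTEMS_tasks_Free( the_thread );
    _RTEMS_Unlock_allocator();
    /* Uniprocessor: dispatching is still disabled, so no other thread can
       run on this CPU and re-use the freed stack before the switch below.
       SMP: another processor can take the allocator mutex right here.    */
    _Thread_Enable_dispatch();    /* the last code to run on the old stack */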

> If you change the unlock order, then you do have the issue.
> 
> malloc() has a deferred free queue.  I wonder if this capability should be
> moved to the heap.  Then we can just defer the free of the stack memory in
> this case.  The next memory alloc/dealloc can perform the operation just like
> we defer free's from ISRs.

Yes, this is another solution.
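
A rough sketch of what a deferred free at the heap level could look like (the
names are hypothetical, not the actual RTEMS heap API; the caller is assumed
to hold the allocator lock):

    #include <stdbool.h>
    #include <stddef.h>

    #define HEAP_DEFERRED_MAX 8

    static void  *heap_deferred[ HEAP_DEFERRED_MAX ];
    static size_t heap_deferred_count;

    /* Queue a block instead of releasing it; the heap will not hand
       it out again until heap_flush_deferred() frees it for real.  */
    static bool heap_free_deferred( void *block )
    {
      if ( heap_deferred_count >= HEAP_DEFERRED_MAX )
        return false;  /* queue full, caller must handle it otherwise */
      heap_deferred[ heap_deferred_count++ ] = block;
      return true;
    }

    /* Called on the next allocate/deallocate, once the deferred blocks
       can no longer be live (their owners have been dispatched away). */
    static void heap_flush_deferred( void (*real_free)( void * ) )
    {
      while ( heap_deferred_count > 0 )
        (*real_free)( heap_deferred[ --heap_deferred_count ] );
    }

A fixed array is used on purpose: nothing is written into the block itself
while the deleted thread may still be running on that stack. If I remember
correctly, the malloc() deferred-free queue chains nodes through the freed
blocks instead, which would not be safe for a stack that is still in use.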
