Executing Thread Migrating Due to Affinity Change

Joel Sherrill joel.sherrill at OARcorp.com
Fri May 30 14:43:07 UTC 2014

On 5/30/2014 8:33 AM, Sebastian Huber wrote:
> On 05/29/2014 09:28 PM, Joel Sherrill wrote:
>> Hi
>> The priority affinity algorithm appears to be behaving as
>> we expect from a decision making standpoint. However,
>> Jennifer and I think that when a scheduled thread must
>> be migrated to another core, we have a case for a new
>> state in the Thread Life Cycle.
> I hope we can avoid this.  Which problem do you want to address with 
> this new state?
We have some tests that are not behaving on grsim as we expect.
I was wondering if migrating the executing thread to another core
would result in some unexpected weirdness. Because of [1], we
were in desk check and thinking mode to find the problem.

[1] grsim+gdb on multi-core configurations results in an assert
in RTEMS if you run with a breakpoint set.   You don't have to
continue -- just "tar remote :2222; load; b XXX; cont" and you
get an assert. Daniel H. is looking into an example for us.
But for now, we have no real visibility on leon3. We are in
the process of switching to the realview on qemu to debug.
>> I am thinking that the thread needs to have a blocking state
>> set, have its context saved and be taken out of the scheduled
>> set. Then a life cycle state change handler can run as an
>> extension to unblock it so it can be potentially scheduled to
>> execute on another processor.
> The scheduler is not responsible for the thread context.  This is 
> _Thread_Dispatch().  A post-switch handler can only do actions for the 
> executing thread.  It would be extremely difficult to perform actions on 
> behalf of another thread. 
I was thinking along those lines but couldn't see how it would even
work reliably. Plus, we are already technically blocking and unblocking
the thread, so the transition should be from scheduled to ready and back.
>  We have to keep also fine grained locking 
> into account.  The _Thread_Dispatch() function is on the critical path 
> for average and worst-case performance, so we should keep it as simple 
> as possible.
I agree.
> As I wrote already in another thread you can use something like this to 
> allocate an exact processor for a thread:
I suppose I should have posted the patch for review by now, but I was hoping
to have it completely working with tests when posted.  Anyway, it is
attached. The key is that I modified the priority SMP scheduler so that
get_lowest_scheduled() and get_highest_ready() can both be passed in
from higher levels.

The implementations of get_lowest_scheduled() and get_highest_ready()
include the option of a filter thread.  The filter thread is used
to check that the victim's CPU is in the potential heir's affinity set.

I think the logic works out so that the normal calls to
_Scheduler_SMP_Allocate_processor() have selected the
correct thread/CPU.

I don't see the extra logic in _Scheduler_SMP_Allocate_processor()
having a negative impact. What do you see that I might be missing?
> static inline void _Scheduler_SMP_Allocate_processor_exact(
>   Scheduler_SMP_Context *self,
>   Thread_Control *scheduled,
>   Thread_Control *victim
> )
> {
>   Scheduler_SMP_Node *scheduled_node = _Scheduler_SMP_Node_get( scheduled );
>   Per_CPU_Control *cpu_of_scheduled = _Thread_Get_CPU( scheduled );
>   Per_CPU_Control *cpu_of_victim = _Thread_Get_CPU( victim );
>   Per_CPU_Control *cpu_self = _Per_CPU_Get();
>
>   _Scheduler_SMP_Node_change_state(
>     scheduled_node,
>     SCHEDULER_SMP_NODE_SCHEDULED
>   );
>   _Thread_Set_CPU( scheduled, cpu_of_victim );
>   _Scheduler_SMP_Update_heir( cpu_self, cpu_of_victim, scheduled );
> }
> You can even use this function to do things like this:
> _Scheduler_SMP_Allocate_processor_exact(self, executing, other);
> _Scheduler_SMP_Allocate_processor_exact(self, other, executing);
> This works because the is-executing indicator was moved to the thread 
> context and is maintained at the lowest context switch level.  For 
> proper migration the scheduler must ensure that
> 1. an heir thread other than the migrating thread exists on the source 
> processor, and
> 2. the migrating thread is the heir thread on the destination processor.
Hmmm... If I am using the normal _Scheduler_SMP_Allocate_processor(),
aren't these ensured?

Joel Sherrill, Ph.D.             Director of Research & Development
joel.sherrill at OARcorp.com        On-Line Applications Research
Ask me about RTEMS: a free RTOS  Huntsville AL 35805
Support Available                (256) 722-9985

-------------- next part --------------
A non-text attachment was scrubbed...
Name: 0004-Add-SMP-Priority-Scheduler-with-Affinity.patch
Type: text/x-patch
Size: 34158 bytes
Desc: not available
URL: <http://lists.rtems.org/pipermail/devel/attachments/20140530/aa097e9b/attachment.bin>
