[PATCH 2/2] score: Implement forced thread migration

Sebastian Huber sebastian.huber at embedded-brains.de
Mon May 5 06:51:14 UTC 2014


On 2014-05-03 00:46, Chris Johns wrote:
> On 2/05/2014 9:39 pm, Sebastian Huber wrote:
>> The current implementation of task migration in RTEMS has implications
>> with respect to the interrupt latency. It is crucial to preserve the
>> system invariant that a task executes on at most one processor in the
>> system at any time. This is accomplished with a boolean indicator in the
>> task context. The processor architecture specific low-level task context
>> switch code marks that a task context is no longer executing and waits
>> until the heir context has stopped execution before it restores the heir
>> context and resumes execution of the heir task. So there is one point in
>> time in which a processor is without a task. This is essential to avoid
>> cyclic dependencies in case multiple tasks migrate at once. Otherwise
>> some supervising entity would be necessary to prevent livelocks. Such a
>> global supervisor would lead to scalability problems, so this approach
>> is not used. Currently the thread dispatch is performed with interrupts
>> disabled, so if the heir task is currently executing on another
>> processor, this prolongs the time with interrupts disabled, since one
>> processor has to wait for the other to make progress.
>>
>
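For illustration only, the handshake described above boils down to roughly the 
following, assuming a hypothetical is_executing flag per task context (the real 
code lives in the architecture-specific low-level context switch, so names and 
details differ):

  #include <stdatomic.h>
  #include <stdbool.h>

  /* Hypothetical, simplified task context; in RTEMS the context is
     architecture specific and the flag is handled in assembly. */
  typedef struct {
    atomic_bool is_executing;
    /* saved registers, stack pointer, ... */
  } task_context;

  /* Simplified sketch of the SMP context switch handshake. */
  void context_switch( task_context *executing, task_context *heir )
  {
    /* save the registers of the executing task here (omitted) */

    /* Signal that this context no longer executes; another processor
       may now pick it up as its heir. */
    atomic_store_explicit(
      &executing->is_executing, false, memory_order_release
    );

    /* Wait until the heir context stopped execution on its previous
       processor.  During this busy wait the processor runs no task. */
    while (
      atomic_load_explicit( &heir->is_executing, memory_order_acquire )
    ) {
      /* busy wait */
    }

    /* Claim the heir context and restore its registers (omitted). */
    atomic_store_explicit(
      &heir->is_executing, true, memory_order_relaxed
    );
  }

The busy wait is exactly the window in which the processor owns no task stack, 
which is why interrupts are a problem during this transition.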
> Do you maintain a statistic in the per-processor data and/or the task counting
> the number of times a running task is moved to run on another processor? To me
> it would seem that excessive switching of this kind points to a user bug in how
> they are managing the scheduling.

At the moment there are no statistics for this, but I may add them later.
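If such a counter were added, a natural place would be the per-processor 
control.  A purely hypothetical sketch, none of these names exist in the tree:

  #include <stdint.h>

  /* Hypothetical extension of the per-processor control. */
  typedef struct {
    /* ... existing per-CPU members ... */
    uint64_t forced_migrations; /* times an executing task was moved away */
  } per_cpu_stats;

  /* Hypothetical hook invoked when the scheduler assigns a previously
     executing task to a different processor. */
  static inline void count_forced_migration(
    per_cpu_stats *cpu_from,
    per_cpu_stats *cpu_to
  )
  {
    if ( cpu_from != cpu_to ) {
      ++cpu_from->forced_migrations;
    }
  }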

>
> I think I see what you are addressing and have no issues there; however, I am
> struggling to understand why this would happen in a balanced system, i.e. why
> move a running task just to make it keep running, and if migration is the
> issue, is this a result of dynamic migration or some other factor?

There are three reasons why tasks migrate in RTEMS.

  o The scheduler of the task is changed explicitly via
rtems_task_set_scheduler() or similar directives (see the sketch after this
list).

  o The task resumes execution after a blocking operation. On a priority-based 
scheduler it will evict the lowest-priority task currently assigned to a 
processor in the processor set managed by the scheduler instance.

  o The task moves temporarily to another scheduler instance due to locking 
protocols like Migratory Priority Inheritance or the Multiprocessor Resource 
Sharing Protocol.
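
For the first case, an explicit scheduler change could look roughly like this.  
A minimal sketch: the scheduler name and task id are supplied by the caller, 
error handling is reduced to early returns, and the exact directive signature 
depends on the RTEMS version:

  #include <rtems.h>

  /* Sketch: move a task to another scheduler instance, which may force a
     migration to a processor owned by that instance. */
  rtems_status_code move_task_to_scheduler(
    rtems_id   task_id,
    rtems_name scheduler_name
  )
  {
    rtems_id          scheduler_id;
    rtems_status_code sc;

    sc = rtems_scheduler_ident( scheduler_name, &scheduler_id );
    if ( sc != RTEMS_SUCCESSFUL ) {
      return sc; /* no such scheduler instance configured */
    }

    /* Two-argument form as in the development tree at the time of
       writing; later RTEMS versions add a priority argument. */
    return rtems_task_set_scheduler( task_id, scheduler_id );
  }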

>
>> It is difficult to avoid this issue for the interrupt latency, since
>> interrupts normally store the context of the interrupted task on its
>> stack. In case a task is marked as not executing, we must not use its
>> task stack to store such an interrupt context. We cannot use the heir
>> stack before it has stopped execution on another processor. So if we
>> enable interrupts during this transition, we have to provide an
>> alternative, task-independent stack for this time frame. This issue
>> needs further investigation.
>
> If the processor is idle, would you have implicitly switched to the IDLE
> context, of which there is one per processor? Would its stack be available?

There is no special idle context.  There are simply some tasks that are always 
ready, the idle tasks.

At the moment this interrupt latency problem is nothing to worry about.  By far 
the biggest problem we currently have in this area is the Giant lock.  I would 
go so far as to say that with this Giant lock, we don't have a real-time 
operating system on SMP.

-- 
Sebastian Huber, embedded brains GmbH

Address : Dornierstr. 4, D-82178 Puchheim, Germany
Phone   : +49 89 189 47 41-16
Fax     : +49 89 189 47 41-09
E-Mail  : sebastian.huber at embedded-brains.de
PGP     : Public key available on request.

This message is not a business communication within the meaning of the EHUG.


