Thread Execution Control

Gedare Bloom gedare at rtems.org
Wed Nov 20 16:46:39 UTC 2013


On Wed, Nov 20, 2013 at 7:19 AM, Sebastian Huber
<sebastian.huber at embedded-brains.de> wrote:
> Hello,
>
> the following text tries to explain a problem with the current SMP
> implementation and presents one possible solution.
>
> ==== Thread Execution Control ====
>
> Currently threads are assigned to processors for execution by the scheduler
> responsible for the thread.  It is unknown to the system when a thread
> actually starts or terminates execution.  The termination event is important
> for the following features:
> * explicit thread migration, e.g. if a thread should move from one scheduler
> domain to another,
> * thread deletion, since the thread stack is in use until the thread has
> stopped execution, or
> * restart of threads executing on a remote processor.
>
Restating what you just said: the problem is that thread start and
termination/migration events are not represented explicitly to the
scheduler. For example, the task state transitions diagram
http://rtems.org/onlinedocs/doc-current/share/rtems/html/c_user/Scheduling-Concepts-Task-State-Transitions.html#Scheduling-Concepts-Task-State-Transitions
contains "deletion" events, but from the scheduler's point of view those
events appear as regular 'block' calls.

One exception I have to this summary is that there is in fact already a
hook for when a thread terminates, namely _Scheduler_Free(). Possibly
this hook cannot be overloaded to meet the requirements stated below,
especially in the case of migration rather than termination, when a
thread leaves one scheduling domain for another. (However, from the
prior discussion, will there be such cases? Partitioned/clustered
schedulers usually do not support migration.)
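
For what it's worth, the shape I have in mind is roughly the following;
this is a simplified sketch with made-up *_Sketch names, not the actual
score code:

/* Simplified, self-contained sketch; the real Scheduler_Operations table
 * and the _Scheduler_Free() wrapper in score differ in detail. */
typedef struct Thread_Control_Sketch Thread_Control_Sketch;

typedef struct {
  void ( *block )( Thread_Control_Sketch *the_thread );
  void ( *unblock )( Thread_Control_Sketch *the_thread );
  void ( *free )( Thread_Control_Sketch *the_thread ); /* invoked on deletion */
} Scheduler_Operations_Sketch;

static void sketch_free( Thread_Control_Sketch *the_thread )
{
  /* Per-thread scheduler teardown could go here, but at this point the
   * thread may still be executing on a remote processor, which is
   * exactly the problem described above. */
  (void) the_thread;
}

The open question is whether that free operation can also cover the
migration case, given that the thread may still be running remotely when
it is called.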

> One approach could be to spin on the per-processor variable reflecting the
> executing thread.  This has at least two problems:
> # it doesn't work if the executing thread wants to alter its own state, and
> # this spinning must be done with the scheduler lock held and interrupts
> disabled, which is a disaster for the interrupt latency.
>
I don't understand this approach at all, but you dismiss it anyway so
I won't try harder.

> The proposed solution is to use an optional event handler which is active in
> case the thread execution termination matters.
Will this event handler be per-thread, per-cpu, or per-scheduler instance?

> In _Thread_Dispatch() we already have the post-switch extensions invoked
> after a thread switch.  The only restriction here is that we cannot block,
> since the executing thread might be an idle thread and blocking would be
> dramatic for the thread dispatch latency.  We can now prepend an event
> handler to the post-switch extensions on demand and perform the actions
> necessary after the thread of interest has stopped execution.
Is it possible to communicate the previously executing thread to this
new handler? When post-switch executes, the thread that was switched
out is no longer known.
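
One way to make it available would be to pass both threads (and the
processor) to the handler explicitly, e.g. something like this hypothetical
signature, which is not the current post-switch extension API:

#include <rtems/score/thread.h>
#include <rtems/score/percpu.h>

/* Hypothetical handler type for a per-processor post-switch event. */
typedef void ( *Post_Switch_Handler )(
  Thread_Control  *previously_executing, /* the thread just switched out */
  Thread_Control  *heir,                 /* the thread now executing */
  Per_CPU_Control *cpu                   /* the processor doing the switch */
);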

> Currently the post-switch extensions are registered in a global list, but we
> have to introduce per-processor lists for our purpose.  This gives rise to
> locking issues.
What will be the purpose of these lists? Dynamically adding/removing
per-cpu post-switch events?

> We have to consider the following requirements:
> * prepending to and removal from the list should be performed under
> protection of the per-processor lock,
> * forward iteration through the list should be possible without locking
> (atomic operations are required here),
> * removal of nodes during iteration must be possible,
> * it is acceptable that nodes added after the iteration began are not
> visited during the iteration in progress.
>
Sounds like a good use for something like RCU, but not RCU, since
AFAIK that is patent-encumbered in a non-RTEMS friendly way.
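
Something along these lines might satisfy the four requirements above; this
is a rough sketch using C11 atomics and hypothetical names, not actual RTEMS
code, and it deliberately glosses over when a removed node may be reclaimed,
which is exactly where the RCU-like problem shows up:

#include <stdatomic.h>
#include <stddef.h>

typedef struct handler_node {
  _Atomic( struct handler_node * ) next;
  void ( *handler )( void *arg );
  void                            *arg;
} handler_node;

typedef struct {
  _Atomic( handler_node * ) head;
} per_cpu_handler_list;

/* Caller holds the per-processor lock. */
static void handler_list_prepend( per_cpu_handler_list *list, handler_node *node )
{
  handler_node *old_head = atomic_load_explicit( &list->head, memory_order_relaxed );

  atomic_store_explicit( &node->next, old_head, memory_order_relaxed );
  /* Release so that an iterator which sees the new head also sees its fields. */
  atomic_store_explicit( &list->head, node, memory_order_release );
}

/* Caller holds the per-processor lock.  The unlinked node must stay valid,
 * including its next pointer, until no iteration can still reference it. */
static void handler_list_remove( per_cpu_handler_list *list, handler_node *node )
{
  _Atomic( handler_node * ) *link = &list->head;
  handler_node *current = atomic_load_explicit( link, memory_order_relaxed );

  while ( current != NULL ) {
    if ( current == node ) {
      handler_node *next = atomic_load_explicit( &node->next, memory_order_relaxed );

      atomic_store_explicit( link, next, memory_order_release );
      return;
    }

    link = &current->next;
    current = atomic_load_explicit( link, memory_order_relaxed );
  }
}

/* Lock-free forward iteration; handlers prepended after the initial load of
 * head are simply not visited in this pass, which matches the last
 * requirement above. */
static void handler_list_run( per_cpu_handler_list *list )
{
  handler_node *node = atomic_load_explicit( &list->head, memory_order_acquire );

  while ( node != NULL ) {
    ( *node->handler )( node->arg );
    node = atomic_load_explicit( &node->next, memory_order_acquire );
  }
}

The reclamation question (when a removed node may actually be freed or
reused) is the part that would still need a proper answer.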

> The execution time of post-switch event handlers increases the worst-case
> thread dispatch latency.
>
> On-demand post-switch handlers help to implement the Multiprocessor Resource
> Sharing Protocol (MrsP) proposed by Burns and Wellings.  Threads executing a
> global critical section can add a post-switch handler which will trigger
> thread migration in case of pre-emption by a local high-priority thread.
>
This is interesting. The requirement "the supporting RTOS must be
aware that there is a separate priority associated with each processor
in the thread’s affinity set" implies a bit of added complexity and
shared data in the scheduling logic, in addition to support for thread
migration. I could see migration within a cluster being useful with
this sharing protocol; I'm not sure how it will work with a fixed
partitioned scheduler.
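
For concreteness, I picture the MrsP post-switch case looking roughly like
this; it is purely illustrative, the sketch_* helpers are made up, and the
real protocol details are in the Burns/Wellings paper:

#include <stdbool.h>
#include <stdint.h>
#include <rtems/score/thread.h>

/* Hypothetical helpers, named sketch_* to make clear they do not exist. */
bool sketch_holds_global_resource( Thread_Control *thread );
bool sketch_was_preempted_locally( Thread_Control *thread );
uint32_t sketch_processor_with_spinning_waiter( Thread_Control *thread );
void sketch_migrate_thread( Thread_Control *thread, uint32_t cpu_index );

/* Post-switch handler registered while a thread holds a global resource. */
static void sketch_mrsp_post_switch( void *arg )
{
  Thread_Control *owner = arg; /* resource owner that was just switched out */

  if ( sketch_holds_global_resource( owner )
    && sketch_was_preempted_locally( owner ) ) {
    /* Continue the critical section on a processor where another task is
     * currently busy-waiting for the same resource, instead of letting
     * that processor spin uselessly. */
    uint32_t cpu_index = sketch_processor_with_spinning_waiter( owner );

    sketch_migrate_thread( owner, cpu_index );
  }
}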

-Gedare




