Scheduler in SMP
Matias Vara
matiasevara at gmail.com
Sat Sep 29 17:58:52 UTC 2012
2012/9/29 Joel Sherrill <joel.sherrill at oarcorp.com>
> On 09/29/2012 05:42 AM, Matias Vara wrote:
>
>>
>> 2012/9/28 Joel Sherrill <joel.sherrill at oarcorp.com>
>>
>>
>> On 09/28/2012 02:28 PM, Matias Vara wrote:
>>
>> Hi everyone,
>>
>> I am playing with the RTEMS samples for SMP. I am trying to
>> understand the example in /smp01; it creates one task
>> per processor, but I cannot figure out how a task is assigned to a
>> given processor. Is there any parameter for this when the task is created?
>>
>>
>> This would be called task processor affinity and it is not currently
>> supported. It was left as a future enhancement.
>>
>> The current smptests were not designed to show off SMP but to
>> debug it and to get coverage.
>>
>> The magic of assigning a task to a core is in the scheduler. It is
>> in cpukit/score/src/schedulersimplesmp*. It is logically intended to
>> be the same as the single-core scheduler, but it places the top N
>> tasks onto the N cores. It takes into account priority, preemption,
>> FIFO per priority, and timeslicing, just like the single-core priority
>> scheduler does.
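As an illustration of the "top N tasks onto N cores" idea, here is a minimal
sketch in C; the names and data structure are hypothetical, not the actual
schedulersimplesmp internals:

    #include <stddef.h>

    #define NUM_CPUS 4

    /* Hypothetical node in a ready queue kept sorted by priority,
       FIFO within each priority level. */
    typedef struct task {
      int          priority;   /* lower value = more important */
      struct task *next;
    } task_t;

    extern task_t *ready_head; /* highest-priority ready task */

    /* Pick the N highest-priority ready tasks as the heirs of the N cores. */
    static void assign_heirs(task_t *heir[NUM_CPUS])
    {
      task_t *t = ready_head;
      for (size_t cpu = 0; cpu < NUM_CPUS; ++cpu) {
        heir[cpu] = t;         /* NULL if fewer ready tasks than cores */
        if (t != NULL)
          t = t->next;
      }
    }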
>>
>>
>> It is clear now, thanks for the answer!
>> Following up on the subject: is there one queue of tasks per processor?
>>
> Not in the Simple SMP Scheduler. Each Scheduler implementation is free to
> use whatever data structures it needs. The framework for Schedulers is
> event-driven. The primary events are when a task blocks, is unblocked,
> and at each clock tick. A Scheduler then does what it needs to do to
> determine which threads are the "heirs" for each core. If that decision
> results in the need to replace the executing thread on a core, then
> a context switch is needed and the dispatcher will be invoked
> to handle that part.
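To make the event-driven framework concrete, a rough sketch of the kind of
operations table a scheduler implementation plugs in; the type and field
names are illustrative, not the exact RTEMS interface:

    typedef struct thread thread_t;  /* opaque here */

    /* Illustrative plug-in interface: each scheduler supplies handlers for
       the primary events and may use whatever internal data structures it
       needs to track the ready set. */
    typedef struct {
      void (*block)(thread_t *the_thread);    /* task can no longer run */
      void (*unblock)(thread_t *the_thread);  /* task became ready again */
      void (*tick)(void);                     /* clock tick, e.g. timeslicing */
    } scheduler_ops_t;

    /* After an event, the scheduler names an heir for each core; if an heir
       differs from the thread executing on that core, the dispatcher is
       invoked to perform the context switch. */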
>
>
Once again, that is very clear.
One thing that is not yet clear: does each scheduler try to schedule a task
only on its own processor? For instance, does core #1 ever try to execute a
thread on core #2? Maybe you have already answered that.
> Logically in RTEMS there are sets of tasks.
>
> + One set of executing tasks
> + One set of ready tasks
> + Multiple sets of blocked tasks (one set per resource)
>
> Depending on the scheduling algorithm, the set of executing tasks
> may or may not be included in the data structure for the ready set.
>
> Moving tasks between the sets and determining which ready tasks
> to execute is conceptually the foundation of RTEMS. You just have various
> resources (e.g., mutexes, message queues, delays, barriers) that
> tasks can block on, which give you different communication and
> synchronization patterns to build applications upon.
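Purely as a picture of those sets (illustrative C, not RTEMS data
structures):

    struct task;

    typedef enum {
      TASK_EXECUTING,  /* in the executing set: at most one per core */
      TASK_READY,      /* in the ready set */
      TASK_BLOCKED     /* in the blocked set of exactly one resource */
    } task_state_t;

    /* Each blocking resource (mutex, message queue, barrier, ...) owns its
       own set of blocked tasks, conventionally a wait queue. */
    typedef struct resource {
      struct task *wait_queue_head;
    } resource_t;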
>
>
>> Are the tasks queued at run time (when task_create is invoked)?
>> If they're queued at run time, I suppose access to such a queue is
>> atomic.
>>
> Tasks are queued and dequeued from the ready set when they block or are
> unblocked. [1] With Classic API tasks, you also have to do a task_start to
> get them into a ready state. With pthreads, pthread_create places the
> thread in a ready state.
>
> The operation had better be atomic. :)
>
>
>
> [1] When a task changes priority, it is implicitly blocked and then
> unblocked.
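For reference, the two-step Classic API pattern mentioned above looks roughly
like this (real rtems_task_create/rtems_task_start calls; error handling
abridged):

    #include <rtems.h>

    rtems_task worker(rtems_task_argument arg)
    {
      (void) arg;
      /* ... task body ... */
      (void) rtems_task_delete(RTEMS_SELF);
    }

    void spawn_worker(void)
    {
      rtems_id          id;
      rtems_status_code sc;

      /* Creation alone does not make the task ready. */
      sc = rtems_task_create(
        rtems_build_name('W', 'O', 'R', 'K'),
        1,                            /* initial priority */
        RTEMS_MINIMUM_STACK_SIZE,
        RTEMS_DEFAULT_MODES,
        RTEMS_DEFAULT_ATTRIBUTES,
        &id
      );
      if (sc != RTEMS_SUCCESSFUL)
        return;

      /* task_start places the task into the ready set. */
      (void) rtems_task_start(id, worker, 0);
    }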
>
>> Thanks in advance, Matias.
OK, then -as of now- the access to the ready queue is not atomic; this means
that this queue is a piece of shared memory without any protection against
concurrent access. As far as I can see, one queue for all the tasks will
scale very badly on a many-core architecture.
Regards, MV.