<br><div class="gmail_quote">2012/9/29 Joel Sherrill <span dir="ltr"><<a href="mailto:joel.sherrill@oarcorp.com" target="_blank">joel.sherrill@oarcorp.com</a>></span><br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
> On 09/29/2012 05:42 AM, Matias Vara wrote:
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
<br>
2012/9/28 Joel Sherrill <<a href="mailto:joel.sherrill@oarcorp.com" target="_blank">joel.sherrill@oarcorp.com</a> <mailto:<a href="mailto:joel.sherrill@oarcorp.com" target="_blank">joel.sherrill@oarcorp.<u></u>com</a>>><div class="im">
<br>
<br>
On 09/28/2012 02:28 PM, Matias Vara wrote:<br>
<br>
>>>> Hi everyone,
>>>>
>>>> I am playing with the RTEMS SMP samples. I am trying to
>>>> understand the smp01 example: it creates one task per
>>>> processor, but I cannot figure out how a task is assigned to
>>>> a given processor. Is there a parameter for this when the
>>>> task is created?
>>>
>>> This would be called task processor affinity, and it is not
>>> currently supported. It was left as a future enhancement.
>>>
>>> The current smptests were not designed to show off SMP but to
>>> debug it and to get coverage.
>>>
>>> The magic of assigning a task to a core is in the scheduler.
>>> It is in cpukit/score/src/schedulersimplesmp*. It is logically
>>> intended to be the same as the single-core scheduler, but it
>>> places the top N tasks onto the N cores. It takes priority,
>>> preemption, FIFO within a priority, and timeslicing into
>>> account, just like the single-core priority scheduler does.
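(As a side note for anyone reading along: creating one task per
processor with the Classic API looks roughly like the sketch below.
Error checking is omitted, and worker and cpu_count are placeholder
names, not the actual smp01 source.)

    #include <rtems.h>

    rtems_task worker(rtems_task_argument arg);

    void create_per_cpu_tasks(uint32_t cpu_count)
    {
      uint32_t i;

      /* Assumes cpu_count <= 10 so the name digit stays valid. */
      for (i = 0; i < cpu_count; i++) {
        rtems_id id;

        /* A Classic API task is created in the dormant state ... */
        (void) rtems_task_create(
          rtems_build_name('T', 'A', '0' + i, ' '),
          1,                          /* priority */
          RTEMS_MINIMUM_STACK_SIZE,
          RTEMS_DEFAULT_MODES,
          RTEMS_DEFAULT_ATTRIBUTES,
          &id
        );

        /* ... and becomes ready when started; which core it runs
           on is decided by the SMP scheduler, not by the creator. */
        (void) rtems_task_start(id, worker, (rtems_task_argument) i);
      }
    }
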
>> It is clear now, thanks for the answer!
>> Following up on the subject: is there then one queue of tasks
>> per processor?
> Not in the Simple SMP Scheduler. Each scheduler implementation is
> free to use whatever data structures it needs. The framework for
> schedulers is event driven. The primary events are when a task
> blocks, when a task is unblocked, and each clock tick. A scheduler
> then does what it needs to do to determine which threads are the
> "heirs" for each core. If that decision results in the need to
> replace the executing thread on a core, then a context switch is
> needed and the dispatcher will be invoked
> to handle that part.

Once again, it is very clear.
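To make that event flow concrete, here is a minimal sketch of what
such an event-driven scheduler interface could look like; the names
are hypothetical, not the actual RTEMS structures:

    /* Hypothetical sketch only; the real framework lives in
       cpukit/score and uses different types and names. */
    typedef struct Thread_Sketch Thread_Sketch;

    typedef struct {
      void (*block)(Thread_Sketch *t);    /* t left the ready set    */
      void (*unblock)(Thread_Sketch *t);  /* t entered the ready set */
      void (*tick)(void);                 /* timeslice accounting    */
    } Scheduler_Operations_Sketch;

    /* After any of these events, the scheduler recomputes the heir
       thread for each core; if a core's heir differs from the
       thread executing there, the dispatcher performs the context
       switch. */
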
One thing that is still not clear: does each scheduler try to
schedule tasks only on its own processor? For instance, core #1
does not try to execute a thread on core #2. Maybe you have
already answered that.
> Logically, in RTEMS there are sets of tasks:
>
> + One set of executing tasks
> + One set of ready tasks
> + Multiple sets of blocked tasks (one set per resource)
>
> Depending on the scheduling algorithm, the set of executing tasks
> may or may not be included in the data structure for the ready
> set.
>
> Moving tasks between the sets and determining which ready tasks
> to execute is conceptually the foundation of RTEMS. You just have
> various resources (e.g., mutexes, message queues, delays,
> barriers, etc.) that tasks can block on, which give you different
> communication and synchronization patterns to build applications
> upon.
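As an illustration, blocking on one such resource with the Classic
API might look like this (a sketch, error checking omitted):

    #include <rtems.h>

    void consumer(rtems_id sem_id)
    {
      /* Blocks the calling task (moving it from the ready set to
         the semaphore's blocked set) until the semaphore is free. */
      (void) rtems_semaphore_obtain(sem_id, RTEMS_WAIT, RTEMS_NO_TIMEOUT);

      /* ... use the shared resource ... */

      /* May move a waiting task back into the ready set. */
      (void) rtems_semaphore_release(sem_id);
    }
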
>> Are the tasks queued at run time (when task_create is invoked)?
>> If they are queued at run time, I suppose the access to such a
>> queue is atomic.
> Tasks are queued on and dequeued from the ready set when they
> block or are unblocked. (1) With Classic API tasks, you also have
> to do a task_start to get them into a ready state. With pthreads,
> pthread_create places the thread in a ready state.
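A side-by-side sketch of the two start styles (error checking
omitted; the worker names are placeholders):

    #include <rtems.h>
    #include <pthread.h>

    rtems_task classic_worker(rtems_task_argument arg);
    void *posix_worker(void *arg);

    void start_both(void)
    {
      rtems_id  id;
      pthread_t t;

      /* Classic API: the task is dormant after creation and only
         becomes ready once rtems_task_start() is called. */
      (void) rtems_task_create(
        rtems_build_name('W', 'O', 'R', 'K'),
        1, RTEMS_MINIMUM_STACK_SIZE,
        RTEMS_DEFAULT_MODES, RTEMS_DEFAULT_ATTRIBUTES, &id
      );
      (void) rtems_task_start(id, classic_worker, 0);

      /* POSIX API: pthread_create() leaves the new thread ready
         immediately. */
      (void) pthread_create(&t, NULL, posix_worker, NULL);
    }
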
>
> The operation had better be atomic. :)
>
> (1) When a task changes priority, it is implicitly blocked and
> then unblocked.
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
Thanks in advance, Matias.<br>
<br>
<br>
<br>
-- <br>
MV<br>
</blockquote>
<br>
OK, so for now the access to the ready queue is not atomic by
itself; this means that the queue is a piece of shared memory
without any built-in protection against concurrent access. As far
as I can see, one queue for all the tasks will scale very badly on
a many-core architecture.
Regards, MV.