[Bug 1808] PerCore SMP Scheduler

bugzilla-daemon at rtems.org bugzilla-daemon at rtems.org
Thu May 26 19:39:23 UTC 2011


Gedare <giddyup44 at yahoo.com> changed:

           What    |Removed                     |Added
                 CC|                            |giddyup44 at yahoo.com

--- Comment #1 from Gedare <giddyup44 at yahoo.com> 2011-05-26 14:39:23 CDT ---
Overall I like this work! :) But I do have some high-level requests and some
detailed comments/concerns. First the high-level stuff:

Please consider renaming this scheduler's functions as: Scheduler_percore_smp,
to be consistent with the simple SMP scheduler and the general naming
convention in the score.

I would like some additional documentation about this scheduler. The header
file (schedulerpercoresmp.h) is a sensible place for it. Eventually we would
also need user documentation if this gets merged.  In particular, the scheduler
algorithm that is employed should be clearly stated somewhere, as well as the
reasoning behind some of the structures, see schedulersimplesmp.h on the CVS
head for an example of the high-level comments -- add something about its use
of per-core ready queues (per-core priority), and a caveat that it does not
provide global priority scheduling.
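
As a sketch of the kind of header comment I mean (the wording here is just a
suggestion, not final documentation):

```
/**
 *  @file  rtems/score/schedulerpercoresmp.h
 *
 *  This scheduler manages one simple-priority ready queue per core
 *  (per-core priority): a thread competes only with the other threads
 *  assigned to the same core.
 *
 *  Caveat: because priorities are only compared within a core, this
 *  scheduler does NOT provide global priority scheduling across the
 *  whole system.
 */
```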

There are some more opportunities for separate patches for some of this work. I
have identified them below.

Now for the details. Mostly these are style issues, although there is at least
one correctness question in there.

+    CONFIGURE_SMP_MAXIMUM_PROCESSORS * _Configure_From_workspace(
sizeof(Scheduler_PerCore_SMP_Information) ) \

Please try to limit line length to fewer than 80 characters. This would be
rewritten as:
rewritten as:
   CONFIGURE_SMP_MAXIMUM_PROCESSORS * _Configure_From_workspace( \
     sizeof(Scheduler_PerCore_SMP_Information) \
   ) \
or something similar.

+  /** Scheduler-specific information */
+  void *scheduler_information;

This might make a good separate patch to commit, I had considered per-core
scheduling information residing outside of the _Scheduler structure, or rather
making the _Scheduler part of the per-cpu package. This should be debated on
its own I think. Doing so can improve code sharing amongst the schedulers too.
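
Roughly what I have in mind (a sketch only -- the field names are invented,
this is not the actual per-cpu layout):

```
/* Sketch: hang the scheduler state off the per-CPU information
 * instead of (or in addition to) the global _Scheduler:
 */
typedef struct {
  Thread_Control *executing;
  Thread_Control *heir;
  /* ... */
  void           *scheduler_information;  /* per-core scheduler state */
} Per_CPU_Control;
```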


Please avoid adding blank lines (especially two in a row).

+ *  with the manipulation of threads on a SMP-aware simple-priority-based
ready queue.
Line length.

+    _Scheduler_PerCore_SMP_Allocate,      /* allocate entry point */ \
+    _Scheduler_PerCore_SMP_Free,          /* free entry point */ \
+    _Scheduler_PerCore_SMP_Update,        /* update entry point */ \

These functions appear to be identical in functionality to existing functions
that could be used instead, for example:
    _Scheduler_simple_Allocate,      /* allocate entry point */ \
    _Scheduler_simple_Free,          /* free entry point */ \
    _Scheduler_simple_Update,        /* update entry point */ \

+void _Scheduler_PerCore_SMP_Ready_queue_Enqueue(
+void _Scheduler_PerCore_SMP_Ready_queue_Enqueue_first(
Rename with lower-case enqueue.


Rename with lower-case requeue. Add a comment to state that this function
assumes it is already in a critical section / protected.

+ *  This defines the bit which indicates the interprocessor interrupt
+ *  has been requested so that RTEMS will run the scheduler on this CPU
+ *  because the currently executing thread may need to be switched out.
+ */

This could probably be raised as a separate patch to add this functionality.

See above comment about sharing code with _Scheduler_simple.

+    info = _Workspace_Allocate_or_fatal_error(
sizeof(Scheduler_PerCore_SMP_Information) );

Line length. This would be rewritten as:
    info = _Workspace_Allocate_or_fatal_error(
      sizeof(Scheduler_PerCore_SMP_Information)
    );

This function (hence scheduler?) does not use the _Scheduler.information field
for holding its internal data. This is a little bit odd, and reflects on the
above comment about using per-cpu scheduling structures.

+  /* We can run schedule directly only if the thread is 
+  on our CPU. Otherwise, we need to notify. */
Please use * to set off multi-line comments, e.g.:
  /* We can run schedule directly only if the thread is
   * on our CPU.  Otherwise, we need to notify.
   */

+  if ( _Thread_Is_executing_SMP( the_thread, core ) )
+  {
Please put the { on the same line as the if statement:
  if ( _Thread_Is_executing_SMP( the_thread, core ) ) {

+  Scheduler_PerCore_SMP_Information *info;
+  int core;
+  ISR_Level level;
Please align local variable names, e.g.
+  Scheduler_PerCore_SMP_Information *info;
+  int                                core;
+  ISR_Level                          level;

+/* This function is called internally. We assume that it is always
+   in the critical section. */
Add a leading * on the continuation line:
/* This function is called internally.  We assume that it is always
 * in the critical section.
 */

+  Scheduler_PerCore_SMP_Information *info;
+  int core;
+  ISR_Level level;
Align names.

+  level = _SMP_lock_spinlock_simple_Obtain( &(info->lock) );
Won't this cause deadlock if you call this function inside of a critical
section?
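The scenario I'm worried about, roughly (a sketch, not an actual call path
from the patch):

```
  _ISR_Disable( level );                 /* already in a critical section */
  /* ... */
  _Scheduler_PerCore_SMP_Ready_queue_Enqueue( the_thread );
    /* -> _SMP_lock_spinlock_simple_Obtain( &(info->lock) ) can spin
     *    forever if the lock holder needs this CPU to make progress
     */
```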

+  Chain_Control    *ready;
+  Chain_Node       *the_node;
+  Thread_Control   *current;
+  Scheduler_PerCore_SMP_Information *info;
+  int core;
Align names.

+  for ( ; !_Chain_Is_tail( ready, the_node ) ; the_node = the_node->next ) {
Use _Chain_Next( the_node ) to iterate. This may require breaking the line up:
  for ( ;
        !_Chain_Is_tail( ready, the_node ) ;
        the_node = _Chain_Next( the_node )
  ) {

My guess is if we switch _Scheduler to be per-cpu, this file will be redundant
to the Simple_ version.

+  for ( the_node = _Chain_First(ready) ; ; the_node = the_node->next ) {
Assignment is redundant. Use _Chain_Next(). 

My guess is if we switch _Scheduler to be per-cpu, this file will be redundant
to the Simple_ version.

+  _Per_CPU_Information[bsp_smp_processor_id()].heir = (Thread_Control *)
_Chain_First( info->rq );
Line length. This would be rewritten as:
  _Per_CPU_Information[ bsp_smp_processor_id() ].heir =
    (Thread_Control *) _Chain_First( info->rq );

+  ISR_Level level;
+  Scheduler_PerCore_SMP_Information *info;
Align variables.

+  int core;
+  ISR_Level       level;
+  Scheduler_PerCore_SMP_Information *info;
Align variables.

+  if ( the_thread->current_priority < 
_Per_CPU_Information[core].heir->current_priority ) {
Line length. This would be rewritten as:
  if ( the_thread->current_priority <
       _Per_CPU_Information[core].heir->current_priority ) {

+  ISR_Level       level;
+  Thread_Control *executing;
+  Scheduler_PerCore_SMP_Information *info;
Align variables.
