[Bug 1647] Modular SuperCore Scheduler

bugzilla-daemon at rtems.org
Sat Aug 28 20:31:14 UTC 2010


https://www.rtems.org/bugzilla/show_bug.cgi?id=1647

--- Comment #28 from Gedare <giddyup44 at yahoo.com> 2010-08-28 15:31:12 CDT ---
(In reply to comment #25)
> (In reply to comment #24)
> > Created an attachment (id=1032)
> >  --> (https://www.rtems.org/bugzilla/attachment.cgi?id=1032) [details]
> > Tmtests output on sis after applying patch
> > 
> > This is the output of the tmtests after applying the patch.  A couple of cases
> > had significant increases:
> > 
> > < rtems_task_wake_after: yield -- returns to caller 25
> > < rtems_task_wake_after: yields -- preempts caller 89
> > ---
> > > rtems_task_wake_after: yield -- returns to caller 36
> > > rtems_task_wake_after: yields -- preempts caller 95
> > 
> > So deciding nothing has to be done got worse (a lot).  This shows up a lot like
> > below:
> > 
> > 24,26c24,26
> > < rtems_task_restart: ready task -- preempts caller 121
> > < rtems_semaphore_release: task readied -- returns to caller 49
> > < rtems_task_create 246
> > ---
> > > rtems_task_restart: ready task -- preempts caller 124
> > > rtems_semaphore_release: task readied -- returns to caller 60
> > > rtems_task_create 267
> > 
> > The preempt case is slower but not much.  The returns to caller (e.g. no
> > dispatch needed) is MUCH slower.  Ditto for task create.
> > 
> > I am thinking there must be a much fuller evaluation of what to do now than
> > there was.  What happened?
> 
> I think too much flexibility was built-in. I would say that accessing the
> scheduling data structures now takes an extra 2 function calls via the function
> pointers.  I'm sure this is where the added overhead comes from.  We can
> probably cut this down to 1 function call by hard-wiring the ready queue based
> on the scheduler implementation.
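
For reference, the indirection being discussed is roughly of this shape (a
sketch only; the names below are illustrative, not the actual SuperCore
declarations):

/* Sketch of a per-scheduler function pointer table; illustrative names. */

typedef struct Thread_Control_struct Thread_Control;

typedef struct {
  void ( *schedule )( void );                   /* select a new heir        */
  void ( *yield )( void );                      /* executing thread yields  */
  void ( *block )( Thread_Control *thread );    /* thread stops being ready */
  void ( *unblock )( Thread_Control *thread );  /* thread becomes ready     */
} Scheduler_Operations;

typedef struct {
  Scheduler_Operations  Operations;
  void                 *information;  /* e.g. the ready queue structure */
} Scheduler_Control;

extern Scheduler_Control _Scheduler;

/* Every scheduling decision funnels through the table, so each one pays
 * for at least one indirect call before touching the ready queue. */
static inline void _Scheduler_Yield( void )
{
  ( *_Scheduler.Operations.yield )();
}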

The extra overhead came primarily from doing bit-scan operations in cases where
a short-circuit is possible: either the heir is already known to have changed
explicitly (e.g. by unblocking a thread of higher priority than the currently
executing one), or no other thread is ready to take over the CPU from a
yielding thread.
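
To make that concrete, the yield path now short-circuits along roughly these
lines (a sketch with made-up helper names, not the actual patch):

#include <stdbool.h>

typedef struct Thread_Control_struct Thread_Control;

extern Thread_Control *_Thread_Executing;
extern Thread_Control *_Thread_Heir;
extern bool            _Context_Switch_necessary;

/* Assumed helpers over the per-priority ready chains and the bit map. */
bool            _Ready_chain_Has_only_one_node( const Thread_Control *thread );
void            _Ready_chain_Requeue_last( Thread_Control *thread );
Thread_Control *_Bit_map_First_ready_thread( void );

void _Scheduler_priority_Yield( void )
{
  Thread_Control *executing = _Thread_Executing;

  /* Short-circuit: if no other thread shares this priority level, the
   * yield cannot change the heir (any higher priority ready thread
   * would already be the heir), so skip the requeue and the bit-scan. */
  if ( _Ready_chain_Has_only_one_node( executing ) )
    return;

  /* Otherwise rotate to the back of the chain and pay for the bit-scan
   * to find the new heir. */
  _Ready_chain_Requeue_last( executing );
  _Thread_Heir = _Bit_map_First_ready_thread();

  if ( _Thread_Heir != executing )
    _Context_Switch_necessary = true;
}

The requeue and the bit-scan are only paid for when the heir can actually
change, which is what restores the "returns to caller" numbers.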

I have updated the code and checked that the overheads appear to have gone away,
at least in tm24 and tm04.  Note that the latencies of task_create and
task_delete are still increased, because there is some additional
allocation/freeing of per-thread scheduling metadata.  I think this is OK; if it
turns out to be a problem, we can probably re-design Thread_Control to use a
union of pre-allocated structures, at the cost of giving up the space savings
for schedulers that don't need as much per-thread metadata.
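
Something along these lines is what I have in mind for the union alternative
(the type and field names are invented for illustration, not the real
Thread_Control members):

#include <stdint.h>

typedef struct Chain_Node_struct {
  struct Chain_Node_struct *next;
  struct Chain_Node_struct *previous;
} Chain_Node;

/* Per-thread data the priority-bitmap scheduler needs. */
typedef struct {
  Chain_Node ready_node;      /* position on the per-priority ready chain */
  uint32_t   minor_mask;      /* cached bit map masks for fast insert     */
  uint32_t   major_mask;
} Scheduler_priority_Per_thread;

/* Per-thread data a deadline-based scheduler might need instead. */
typedef struct {
  Chain_Node ready_node;
  uint64_t   absolute_deadline;
} Scheduler_EDF_Per_thread;

typedef struct {
  /* ... other Thread_Control fields ... */

  /* Instead of a pointer set up by an allocate hook at task create time
   * (the source of the extra create/delete latency), the per-scheduler
   * data could live directly in the TCB as a union.  No allocation or
   * free is needed, but every thread pays for the largest variant even
   * when its scheduler uses less. */
  union {
    Scheduler_priority_Per_thread priority;
    Scheduler_EDF_Per_thread      edf;
  } scheduler_info;
} Thread_Control;

The allocate/free hooks would then become no-ops, trading the create/delete
latency for a fixed, worst-case TCB size.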

Joel, could you run the new patch and tar files on the same hardware and post
the new results?

-- 
Configure bugmail: https://www.rtems.org/bugzilla/userprefs.cgi?tab=email
------- You are receiving this mail because: -------
You are watching all bug changes.


