[Bug 1647] Modular SuperCore Scheduler

bugzilla-daemon at rtems.org
Mon Aug 30 23:35:57 UTC 2010


https://www.rtems.org/bugzilla/show_bug.cgi?id=1647

--- Comment #31 from Gedare <giddyup44 at yahoo.com> 2010-08-30 18:35:55 CDT ---
(In reply to comment #30)
> Do a diff on pre and post-2 and look at the differences.  Some that stand out
> to me are:
> 
> < rtems_task_delete: blocked task 187
> > rtems_task_delete: blocked task 204
> 
> < rtems_task_delete: ready task 188
> > rtems_task_delete: ready task 205
> 
See below regarding task create/delete.

> < rtems_task_set_priority: returns to caller 50
> > rtems_task_set_priority: returns to caller 55
> 
> < rtems_task_set_priority: preempts caller 114
> > rtems_task_set_priority: preempts caller 119
> 
> < rtems_message_queue_urgent: task readied -- returns to caller 55
> > rtems_message_queue_urgent: task readied -- returns to caller 61
> 
> < rtems_message_queue_broadcast: task readied -- returns to caller 71
> > rtems_message_queue_broadcast: task readied -- returns to caller 81
> 
I traced through these test cases manually and with gdb. 
The changes for the message queue routines are in Thread_Set_state and
Thread_Clear_state.  Those latencies were already reduced by the short-circuit
evaluation of the Thread Heir in Scheduler_priority_Unblock and
Scheduler_priority_Block.  rtems_task_set_priority calls the ready queue
functions and the Schedule routine directly.
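
For reference, the short-circuit I mean has roughly the following shape.  This
is a simplified sketch from memory, not the exact code in the patch; the
ready-queue enqueue helper and the dispatch-necessary flag name in particular
may be spelled differently in the tree.

/*
 * Sketch of the short-circuit heir evaluation in the unblock path.
 * Names are approximate: the enqueue helper comes from the new
 * scheduler code, and the dispatch flag may have a different name.
 */
#include <rtems/score/thread.h>

extern void _Scheduler_priority_Ready_queue_enqueue( Thread_Control * );

void _Scheduler_priority_Unblock( Thread_Control *the_thread )
{
  _Scheduler_priority_Ready_queue_enqueue( the_thread );

  /*
   * Only touch the heir when the unblocked thread is more important
   * than the current heir; otherwise nothing changes and the full
   * bit map scan is skipped.
   */
  if ( the_thread->current_priority < _Thread_Heir->current_priority ) {
    _Thread_Heir = the_thread;
    if ( _Thread_Executing->is_preemptible ||
         the_thread->current_priority == 0 )
      _Thread_Dispatch_necessary = true;
  }
}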

I see two more possibilities for where these latencies are coming from:

The first is that the increased function call depth is causing problems.  Is it
possible the added overhead comes from the number of new function calls made
where before the chains were modified directly or with inline routines?  The
additional call depth could well be causing register window spill/fill traps
on the SPARC, and I don't know how much overhead those calls and traps
generate.

The second is that the lack of an ISR_Flash is causing extra work to be done.
I haven't been able to verify either case.
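
To illustrate the second point, this is the ISR_Flash pattern I am thinking
of.  The two helpers and the loop body are made up, and the header name is
from memory; only the _ISR_Disable/_ISR_Flash/_ISR_Enable usage matters here.

/*
 * Sketch of the _ISR_Flash pattern.  Without the flash, interrupts
 * stay masked for the whole walk, which could show up as extra
 * latency in the timing tests.
 */
#include <stdbool.h>
#include <rtems/score/isr.h>

extern bool some_work_remains( void );   /* hypothetical */
extern void do_one_step( void );         /* hypothetical */

void example_long_critical_section( void )
{
  ISR_Level level;

  _ISR_Disable( level );
  while ( some_work_remains() ) {
    do_one_step();
    _ISR_Flash( level );   /* briefly let pending ISRs run */
  }
  _ISR_Enable( level );
}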

> On a positive note, some of the times are actually down.  
> 
> This is very close.  It looks like a couple of more places where the heir
> computation is occurring and there was a short cut before.
> 
> FWIW I am doing these runs on sis (sis tm*.exe).  That puts the output in a log
> subdirectory.  Then I "cat log/tm* >XXX.txt" Then I diff old new and compare.
> 
I don't get results that can be compared directly with yours; I guess the
results might be host dependent, or maybe my version of sis is a bit old.

> You mentioned in an email that you are allocating memory for the scheduler in
> the task create.  This is OK just be careful that the scheduler implementation
> itself is doing this because if a user provides an external scheduler, we won't
> know the size.  Also this needs to be accounted for in confdefs.h calculations
> for memory size for each task.
> 
Right, task creation and deletion now involve some additional allocation and
freeing of memory for per-task scheduler metadata.  Adding the size of this
allocation to the confdefs.h calculations still needs to be done (I already
added it to spsize).  Is the CONFIGURE_MEMORY_FOR_TASKS macro the right place
to add this space overhead?  I can define something conditional on which
scheduler the user configures pretty easily, e.g.
CONFIGURE_MEMORY_PER_TASK_FOR_SCHEDULER, and add it into that macro.
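
Something along these lines is what I have in mind for confdefs.h.  The guard
macro and the per-thread structure name are placeholders rather than what the
patch defines today; _Configure_From_workspace() is the existing confdefs.h
sizing helper.

/*
 * Sketch of the confdefs.h hook.  CONFIGURE_SCHEDULER_PRIORITY and
 * Scheduler_priority_Per_thread are placeholder names.
 */
#if defined(CONFIGURE_SCHEDULER_PRIORITY)
  #define CONFIGURE_MEMORY_PER_TASK_FOR_SCHEDULER \
    _Configure_From_workspace( sizeof( Scheduler_priority_Per_thread ) )
#else
  #define CONFIGURE_MEMORY_PER_TASK_FOR_SCHEDULER 0
#endif

/*
 * CONFIGURE_MEMORY_FOR_TASKS() would then add
 * CONFIGURE_MEMORY_PER_TASK_FOR_SCHEDULER once per configured task.
 */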

> This is very close time wise.
