Round-robin/timesliced tasks disturbed by rate-monotonic task

Joel Sherrill joel.sherrill at OARcorp.com
Wed Jun 20 14:00:26 UTC 2012


On 06/19/2012 04:02 PM, Gedare Bloom wrote:
> On Tue, Jun 19, 2012 at 4:08 PM, Wendell Silva<silvawp at gmail.com>  wrote:
>> Hello RTEMS Gurus!
>>
>> Environment:
>>    - RTEMS 4.10.2
>>    - BSP i386/pc386
>>
>> Summary:
>>    - ticks per timeslice = 5
>>    - milliseconds per tick = 5
>>    - Task A, PREEMPT | TIMESLICE, priority = 10, "number crusher" never yield
>> CPU voluntarily.
>>    - Task B, PREEMPT | TIMESLICE, priority = 10, "number crusher" never yield
>> CPU voluntarily.
>>    - Task C, PREEMPT | NO_TIMESLICE, priority = 5, periodic (rate-monotonic),
>> period = 5 ticks (25ms), CPU usage = ~50%
>>
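The setup above can be sketched with the Classic API roughly as follows (a minimal sketch only: the task names, stack sizes, and the crunch loop are illustrative and not from the attached test program):

```c
#include <rtems.h>

rtems_task crunch(rtems_task_argument arg)   /* body of A and B */
{
  for ( ;; ) { /* number crunching; never yields voluntarily */ }
}

void create_cruncher(char c)                 /* c = 'A' or 'B' */
{
  rtems_id id;
  rtems_task_create(
    rtems_build_name('T', 'S', 'K', c), 10, RTEMS_MINIMUM_STACK_SIZE,
    RTEMS_PREEMPT | RTEMS_TIMESLICE,         /* timesliced, like A and B */
    RTEMS_DEFAULT_ATTRIBUTES, &id
  );
  rtems_task_start(id, crunch, 0);
}

rtems_task periodic_c(rtems_task_argument arg)  /* task C, NO_TIMESLICE, prio 5 */
{
  rtems_id period_id;
  rtems_rate_monotonic_create(rtems_build_name('P','E','R','C'), &period_id);
  for ( ;; ) {
    rtems_rate_monotonic_period(period_id, 5);  /* 5 ticks = 25 ms */
    /* ~50% CPU worth of work here */
  }
}
```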
>> What was expected:
>>    - Task C running periodically, as programmed.
>>    - Tasks A and B, using the remaining CPU budget (~25% each, in this
>> configuration).
>>
>> What was observed:
>>    - Task C running periodically, as programmed (passed).
>>    - Only task A is running.
>>    - Task B never runs.
>>
>> "Workarounds" applied to achieve the expected behavior:
>>     - 1: decrease ticks per timeslice; or,
>>     - 2: decrease task C CPU budget (larger period or less computations).
>>
>> I believe the general form of the problem is equivalent to answer:
>>    - Why do timesliced tasks get starved when:
>>         * ticks per timeslice is equal to the period of an RM task, or
>>         * the CPU usage of the RM task is greater than or equal to 50%?
>>
>> Is the RM scheduling policy interfering with timeslice accounting?
>>
> Yes. I have some idea of what is happening.
> 1. C executes for ~3 ticks.
> 2. A executes for 2 ticks, has budget=3 remaining when C fires.
> 3. C executes for 3 ticks; then _Thread_Dispatch sees A as heir and
> replenishes A's timeslice
> 4. A executes for 2 ticks, has budget=3 remaining when C fires
> 5. goto 3
>
> The RM task is scheduled by a watchdog timer that fires during the
> rtems clock_tick, but has no particular knowledge about the tasks that
> are time-sliced. So when the RM task finishes its period, it
> dispatches back to the thread it interrupted and replenishes that
> task's budget without making any other checks.
>
> I don't know if this behavior is a bug. To obtain the behavior you
> desire, you could augment _Thread_Dispatch to check whether
> heir->cpu_time_budget == 0 before replenishing it, if changing RTEMS
> internals is an option for you. If not, then you have to do some
> finagling with the task parameters as you indicated, or submit a bug
> report and convince someone that the behavior described here is a bug
> and should be fixed and back-ported. :)
>
> If you can hook up to a debugger you can 'break' at _Thread_Dispatch
> (or rtems_clock_tick) and check what the values are for
> _Thread_Executing and _Thread_Heir and their cpu_time_budget (if
> either is timeslicing) to verify the behavior I described.
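
Gedare's suggested check could look roughly like this inside _Thread_Dispatch (a sketch only; the field and variable names are taken from the discussion and may not match the 4.10 source exactly):

```c
/* Sketch: replenish the heir's timeslice only when it is actually
 * exhausted, instead of unconditionally on every dispatch. */
if ( _Thread_Heir->budget_algorithm == THREAD_CPU_BUDGET_ALGORITHM_RESET_TIMESLICE
     && _Thread_Heir->cpu_time_budget == 0 )
  _Thread_Heir->cpu_time_budget = _Thread_Ticks_per_timeslice;
```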


I think you are pushing the edge of the granularity of the math
on the clock tick rate, ticks per timeslice and the time required
by A.

I think you might have a CPU execution budget problem
where there is not enough slack time to really perform the
cruncher operations in the way you think, given the 5 millisecond
clock tick. I think you logically want this:

A 25 msecs
B 25 msecs
A 25 msecs
C 25 msecs

But when A or B gets interrupted, it has time left on its
timeslice - due to rounding - and the task order doesn't match
what one would draw. Your timeline is based on 100% CPU utilization,
no rounding, etc.  When C preempts, you are almost guaranteed
it does so with a partial timeslice.  The Classic API (by design)
awards a full quantum when the task is switched back in, so
you will never switch to B.

Note that even if the time quantum is not replenished, the same
task would most likely run before and after C every time. It has
to run a full 25 msecs, and unless the planets align, the real
timeline on hardware will always be off from the ideal one.

As one experiment, change the clock tick rate to 1 millisecond
and adjust the time slice quantum to 25. That may help
some. But I doubt it will fix it.

I think a simple solution is to make the timeslice quantum less than
the period of C. Say 5 or 10 milliseconds with a tick of 1 millisecond.
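
In confdefs.h terms, that experiment would look something like this (CONFIGURE_MICROSECONDS_PER_TICK and CONFIGURE_TICKS_PER_TIMESLICE are the standard configuration macros; the values are the ones suggested above):

```c
/* 1 ms clock tick, 5 ticks (5 ms) per timeslice quantum */
#define CONFIGURE_MICROSECONDS_PER_TICK  1000
#define CONFIGURE_TICKS_PER_TIMESLICE    5
```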

If a task never consumes its entire quantum, it is never timesliced
out, so it never yields the CPU. That sounds like what you are seeing.

Remember my saying that you average a 1/2 tick quantum error
on all tick-based operations? You have created a task and timing
combination that really highlights that. :)

FWIW the POSIX API defines that the thread's timeslice quantum is
replenished when it expires (not when the thread is switched back in).
If you used pthreads, you would likely see more of what you expect.
There was talk of adding an attribute to use this algorithm on Classic
API tasks, but no one ever stepped up to implement, document, and
test it.  This would use the TCB field "budget_algorithm". This is
another option, and one that would certainly be acceptable for
submission. If you would like OAR to implement this, just ask offline.
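
For reference, the per-thread budget choices hang off the TCB roughly like this (a sketch of the score-level names; check cpukit/score in the source tree for the exact 4.10 definitions):

```c
/* Sketch of the per-thread CPU budget algorithm choices: */
typedef enum {
  THREAD_CPU_BUDGET_ALGORITHM_NONE,              /* no timeslicing               */
  THREAD_CPU_BUDGET_ALGORITHM_RESET_TIMESLICE,   /* Classic: reset at switch-in  */
  THREAD_CPU_BUDGET_ALGORITHM_EXHAUST_TIMESLICE, /* POSIX: replenish on expiry   */
  THREAD_CPU_BUDGET_ALGORITHM_CALLOUT            /* user-provided callout        */
} Thread_CPU_budget_algorithms;
```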

--joel



> -Gedare
>
>> Test program attached.
>>
>> Run with:
>>    - qemu-system-i386 -kernel Debug/exer_rm
>>
>> (it was tested on real hardware and the same behavior was observed).
>>
>> Change system.h and/or task_four to "fix".
>>
>> Best regards,
>>
>> --Wendell.
>>
>>
>> _______________________________________________
>> rtems-users mailing list
>> rtems-users at rtems.org
>> http://www.rtems.org/mailman/listinfo/rtems-users
>>


-- 
Joel Sherrill, Ph.D.             Director of Research & Development
joel.sherrill at OARcorp.com        On-Line Applications Research
Ask me about RTEMS: a free RTOS  Huntsville AL 35805
     Support Available             (256) 722-9985




