Round-robin/timesliced tasks disturbed by rate-monotonic task

Wendell Silva silvawp at gmail.com
Wed Jun 20 19:05:48 UTC 2012


Thanks Joel and Gedare for the help.

Joel, when I changed the clock tick rate to 1 ms, it 'fixed' the problem.
But I think it should work with any clock tick rate! :-) Otherwise, everything I
learned as a Computer Scientist is wrong! (LOL).
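
(For reference, that experiment boils down to one change in the confdefs
settings of the test listed at the end of this mail; whether
CONFIGURE_TICKS_PER_TIMESLICE should also be adjusted, as Joel suggested, is a
separate question:)

    /* 1 ms clock tick instead of the 5 ms tick used in the attached test. */
    #define CONFIGURE_MICROSECONDS_PER_TICK RTEMS_MILLISECONDS_TO_MICROSECONDS(1)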

I've verified (by debugging) that (TCB)->cpu_time_budget is replenished in
two different places:
  1 - void _Thread_Dispatch( void ), currently has:
    (...)
    if ( heir->budget_algorithm == THREAD_CPU_BUDGET_ALGORITHM_RESET_TIMESLICE )
        heir->cpu_time_budget = _Thread_Ticks_per_timeslice;

  2 - void _Thread_Tickle_timeslice( void ):
     (...)
      if ( (int)(--executing->cpu_time_budget) <= 0 ) {
        _Thread_Reset_timeslice();
        executing->cpu_time_budget = _Thread_Ticks_per_timeslice;
      }
      break;
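
(For context, that 'break' sits inside a switch on the thread's budget
algorithm. The following is a trimmed sketch of how I read that part of the
4.10 sources, not a verbatim listing:)

      switch ( executing->budget_algorithm ) {
        case THREAD_CPU_BUDGET_ALGORITHM_NONE:
          break;

        case THREAD_CPU_BUDGET_ALGORITHM_RESET_TIMESLICE:
          if ( (int)(--executing->cpu_time_budget) <= 0 ) {
            _Thread_Reset_timeslice();
            executing->cpu_time_budget = _Thread_Ticks_per_timeslice;
          }
          break;

        /* other budget algorithms elided */
      }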

To fix the problem, I just prevented _Thread_Dispatch() from replenishing
cpu_time_budget if it has not been exhausted yet:

void _Thread_Dispatch( void ) { ...
    if ( heir->budget_algorithm == THREAD_CPU_BUDGET_ALGORITHM_RESET_TIMESLICE )
        if ( heir->cpu_time_budget == 0 ) // prevent starvation
            heir->cpu_time_budget = _Thread_Ticks_per_timeslice;

It seems to work. But it makes me wonder whether the first 'if' is really
necessary in _Thread_Dispatch at all, since 'cpu_time_budget' is already being
managed by _Thread_Tickle_timeslice().
Then I simply deleted both 'if's and, consequently, the
'heir->cpu_time_budget = _Thread_Ticks_per_timeslice;' line. And guess what:
the problem was gone for good, just as with the first approach. :-)
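
(To be explicit about what that second change leaves behind, here is a trimmed
sketch of the relevant part of _Thread_Dispatch() after the deletion; it is
not a verbatim listing of the 4.10.2 sources:)

    void _Thread_Dispatch( void )
    {
        (...)
        heir = _Thread_Heir;
        (...)
        /* The RESET_TIMESLICE replenishment that used to be here:
         *
         *   if ( heir->budget_algorithm == THREAD_CPU_BUDGET_ALGORITHM_RESET_TIMESLICE )
         *       heir->cpu_time_budget = _Thread_Ticks_per_timeslice;
         *
         * is gone, so cpu_time_budget is now only replenished by
         * _Thread_Tickle_timeslice() once it has actually been exhausted.
         */
        (...) /* context switch to the heir as before */
    }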

I don't know the real impact of such a modification, but I presume it must be
analysed very carefully, since _Thread_Dispatch lives in the SuperCore.

So the question now is: is the following code really required in
_Thread_Dispatch()?

    if ( heir->budget_algorithm == THREAD_CPU_BUDGET_ALGORITHM_RESET_TIMESLICE )
        heir->cpu_time_budget = _Thread_Ticks_per_timeslice;


Regards,

--Wendell.

p.s.: The test code is listed below:
/**
 * @file
 *
 * RM versus RR.
 */

//==============================================================================
//                              USED INTERFACES
//==============================================================================
#include <bsp.h>
#include <rtems/cpuuse.h>
#include <stdbool.h>
#include <stdio.h>
//==============================================================================
//                        MACROS AND DATATYPE DEFINITIONS
//==============================================================================

//==============================================================================
//                     STATIC (PRIVATE) FUNCTION PROTOTYPES
//==============================================================================
static rtems_task task_rr(rtems_task_argument msg);
static rtems_task task_rm(rtems_task_argument msg);
static void halt_if_error(const char* p_msg, rtems_status_code s);
//==============================================================================
//                          STATIC GLOBAL VARIABLES
//==============================================================================

//==============================================================================
//                      IMPLEMENTATION OF PUBLIC FUNCTIONS
//==============================================================================

rtems_task Init(rtems_task_argument noargs)
{
   rtems_status_code rc;
   rtems_id rr1, rr2, rm;

   rc = rtems_task_create(
      rtems_build_name('R','R','0','1'),
      10,
      RTEMS_MINIMUM_STACK_SIZE,
      (RTEMS_PREEMPT | RTEMS_TIMESLICE),
      RTEMS_FLOATING_POINT,
      &rr1);
   halt_if_error("task RR01: unable to create task:", rc);

   rc = rtems_task_create(
      rtems_build_name('R','R','0','2'),
      10,
      RTEMS_MINIMUM_STACK_SIZE,
      (RTEMS_PREEMPT | RTEMS_TIMESLICE),
      RTEMS_FLOATING_POINT,
      &rr2);
   halt_if_error("task RR02: unable to create task:", rc);

   rc = rtems_task_create(
      rtems_build_name('R','M','O','N'),
      5,
      RTEMS_MINIMUM_STACK_SIZE,
      RTEMS_DEFAULT_MODES,
      RTEMS_FLOATING_POINT,
      &rm);
   halt_if_error("task RMON: unable to create task:", rc);

   rc = rtems_task_start(rr1, task_rr, (rtems_task_argument)"A");
   halt_if_error("task RR01: unable to start task:", rc);

   rc = rtems_task_start(rr2, task_rr, (rtems_task_argument)"B");
   halt_if_error("task RR01: unable to start task:", rc);

   rc = rtems_task_start(rm, task_rm, (rtems_task_argument)"C");
   halt_if_error("task RMON: unable to start task:", rc);

   while(true)
   {
      rtems_task_wake_after(RTEMS_MILLISECONDS_TO_TICKS(10000));
      rtems_cpu_usage_report();
      rtems_cpu_usage_reset();
   }

}

//==============================================================================
//                IMPLEMENTATION OF STATIC (PRIVATE) FUNCTIONS
//==============================================================================

static rtems_task task_rr(rtems_task_argument msg)
{
   char* p_msg = (char *)msg;

   while(true)
   {
      volatile uint32_t c;

      for (c = 0; c < 1000000; c++);
      printk(p_msg);
   }
}

static rtems_task task_rm(rtems_task_argument msg)
{
   rtems_status_code rc;
   rtems_id p;
   char* p_msg = (char *)msg;

   rc = rtems_rate_monotonic_create(rtems_build_name('r','m','0','1'), &p);

   if (rc != RTEMS_SUCCESSFUL)
   {
      printk("error: unable to create RM period: %d\n", rc);
      rtems_task_delete(rtems_task_self());
   }

   while(true)
   {
      volatile uint32_t c;

      rc = rtems_rate_monotonic_period(p, 5);
      if (rc != RTEMS_SUCCESSFUL) { printk("deadline missed!\n"); for(;;); }
      for (c = 0; c < 1000000; c++);
      printk(p_msg);
   }
}

static void halt_if_error(const char* p_msg, rtems_status_code s)
{
   if (s != RTEMS_SUCCESSFUL)
   {
      printk(p_msg);
      printk("%d\n", s);

      for(;;);
   }
}

#define CONFIGURE_INIT

#define CONFIGURE_APPLICATION_NEEDS_CLOCK_DRIVER
#define CONFIGURE_APPLICATION_NEEDS_CONSOLE_DRIVER
#define CONFIGURE_MICROSECONDS_PER_TICK RTEMS_MILLISECONDS_TO_MICROSECONDS(5)

#define CONFIGURE_TICKS_PER_TIMESLICE           5
#define CONFIGURE_MAXIMUM_PERIODS               1
#define CONFIGURE_MAXIMUM_TASKS                 5
#define CONFIGURE_USE_DEVFS_AS_BASE_FILESYSTEM

#define CONFIGURE_RTEMS_INIT_TASKS_TABLE

#include <rtems/confdefs.h>



2012/6/20 Joel Sherrill <joel.sherrill at oarcorp.com>

> On 06/19/2012 04:02 PM, Gedare Bloom wrote:
>
>> On Tue, Jun 19, 2012 at 4:08 PM, Wendell Silva<silvawp at gmail.com>  wrote:
>>
>>> Hello RTEMS Gurus!
>>>
>>> Environment:
>>>   - RTEMS 4.10.2
>>>   - BSP i386/pc386
>>>
>>> Summary:
>>>   - ticks per timeslice = 5
>>>   - milliseconds per tick = 5
>>>   - Task A, PREEMPT | TIMESLICE, priority = 10, "number cruncher", never
>>> yields the CPU voluntarily.
>>>   - Task B, PREEMPT | TIMESLICE, priority = 10, "number cruncher", never
>>> yields the CPU voluntarily.
>>>   - Task C, PREEMPT | NO_TIMESLICE, priority = 5, periodic
>>> (rate-monotonic),
>>> period = 5 ticks (25ms), CPU usage = ~50%
>>>
>>> What was expected:
>>>   - Task C running periodically, as programmed.
>>>   - Tasks A and B, using the remaining CPU budget (~25% each one, in
>>> this
>>> configuration).
>>>
>>> What was observed:
>>>   - Task C running periodically, as programmed (passed).
>>>   - Only task A is running.
>>>   - Task B never runs.
>>>
>>> "Workarounds" applied to achieve the expected behavior:
>>>    - 1: decrease ticks per timeslice; or,
>>>    - 2: decrease task C CPU budget (larger period or less computations).
>>>
>>> I believe the general form of the problem is equivalent to answer:
>>>   - Why do timesliced tasks get starved when:
>>>        * ticks per timeslice is equal to the period of the RM task, or
>>>        * the CPU usage of the RM task is greater than or equal to 50%.
>>>
>>> Is the RM scheduling policy interfering with timeslice accounting?
>>>
>> Yes. I have some idea of what is happening.
>> 1. C executes for ~3 ticks.
>> 2. A executes for 2 ticks, has budget=3 remaining when C fires.
>> 3. C executes for 3 ticks; then _Thread_Dispatch sees A as heir and
>> replenishes A's timeslice
>> 4. A executes for 2 ticks, has budget=3 remaining when C fires
>> 5. goto 3
>>
>> The RM task is scheduled by a watchdog timer that fires during the
>> rtems clock_tick, but has no particular knowledge about the tasks that
>> are time-sliced. So when the RM task finishes its period, it
>> dispatches back to the thread it interrupted and replenishes that
>> task's budget without making any other checks.
>>
>> I don't know if this behavior is a bug. To obtain the behavior you
>> desire you could augment _Thread_Dispatch to check if the
>> heir->cpu_time_budget == 0 before replenishing it if changing RTEMS
>> internally is an option for you. If not then you have to do some
>> finagling with the task parameters as you indicated, or submit a bug
>> report and convince someone that the behavior described here is a bug
>> and should be fixed and back-ported. :)
>>
>> If you can hook up to a debugger you can 'break' at _Thread_Dispatch
>> (or rtems_clock_tick) and check what the values are for
>> _Thread_Executing and _Thread_Heir and their cpu_time_budget (if
>> either is timeslicing) to verify the behavior I described.
>>
>
>
> I think you are pushing the edge of the granularity of the math
> on the clock tick rate, ticks per timeslice and the time required
> by A.
>
> I think you might have a CPU execution budget problem
> where there is not enough slack time to really perform the
> cruncher operations in the way you think given the 5 millisecond
> clock tick. I think you logically want this.
>
> A 25 msecs
> B 25 msecs
> A 25 msecs
> C 25 msecs
>
> But when B or C get interrupted, they have time left on their
> timeslice - due to rounding - and the task order doesn't match
> what one would draw. Your timeline is based on 100% CPU utilization,
> no rounding, etc.  When A preempts, you are almost guaranteed
> it does so with a partial timeslice.  The Classic API (by design)
> awards a full quantum when the task is switched back in so
> you will never switch to C.
>
> Note that even if the time quantum is not replenished, the same
> task would most likely run before and after A every time. It has
> to run a full 25 msecs and if the planets do not align, this will
> always be off in the real timeline on HW.
>
> As one experiment, change the clock tick rate to 1 millisecond
> and adjust the time slice quantum to 25. That may help
> some. But I doubt it will fix it.
>
> I think a simple solution is to make the timeslice quantum less than
> the period of A. Say 5 or 10 milliseconds with a tick of 1 millisecond.
>
> If the task never consumes its entire quantum, then it can never
> yield the CPU. And this sounds like what you see.
>
> Remember me saying that you average a 1/2 tick quantum error
> for all tick based operations? You have created a task and timing
> combination that really highlights that. :)
>
> FWIW the POSIX API defines that the thread's timeslice quantum is
> replenished when it expires (not when switched back in). If you used
> pthreads, you likely would see more of what you expect.  There
> was talk of adding an attribute to use this algorithm on Classic API
> tasks but no one ever stepped up to implement, document, and
> test it.  This is using the TCB field "budget_algorithm". This is another
> option and one that would certainly be acceptable for submission.
> If you would like OAR to implement this, just ask offline.
>
> --joel
>
>
>
>
>> -Gedare
>>
>>> Test program attached.
>>>
>>> Run with:
>>>   - qemu-system-i386 -kernel Debug/exer_rm
>>>
>>> (it was tested on real hardware and the same behavior was observed).
>>>
>>> Change system.h and/or task_four to "fix".
>>>
>>> Best regards,
>>>
>>> --Wendell.
>>>
>>>
>>
>
>
> --
> Joel Sherrill, Ph.D.             Director of Research & Development
> joel.sherrill at OARcorp.com        On-Line Applications Research
> Ask me about RTEMS: a free RTOS  Huntsville AL 35805
>    Support Available             (256) 722-9985
>
>
>