Tests Known To Fail with Simulator Idle Clock

Chris Johns chrisj at rtems.org
Sat Apr 26 06:51:38 UTC 2014


On 26/04/2014 4:56 am, Joel Sherrill wrote:
> Hi
>
> As Chris is wrapping up the work on not building the BSP/test
> combinations which are known to run out of memory when
> linking, I thought I would gather the information about another
> large class of test failures.
>
> Some BSPs are for simulators. Some of those simulators do not
> support interrupts or a clock tick source.[1] Thus the only way to
> mimic the passage of time is with the "sim clock idle" code. This
> is a clock device driver that is simply an idle task calling
> rtems_clock_tick() in a loop until some higher priority task is
> unblocked and preempts it.
>
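
For anyone not familiar with the driver, this is roughly the shape of
it. A minimal sketch only, assuming the design Joel describes; the
names are illustrative and this is not the actual shared driver source:

  #include <rtems.h>

  /*
   * The idle clock task runs at the lowest priority and announces
   * ticks in a loop.  Whenever rtems_clock_tick() readies a higher
   * priority task, that task preempts this one, so this loop is the
   * only way "time" passes on these simulators.
   */
  rtems_task sim_idle_clock_body( rtems_task_argument ignored )
  {
    (void) ignored;
    for ( ;; ) {
      (void) rtems_clock_tick();
    }
  }
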
> Some tests were designed assuming a real clock tick, which can
> preempt a task even in a busy loop. These tests may or may not
> need that assumption but, at the moment, they make it. I am
> certain that sp04 and *intrcrit* do indeed require a clock tick
> interrupt to work.
>
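
The failure mode is easy to see in a fragment like this (hypothetical
code, assuming the idle clock sketched above):

  #include <rtems.h>

  /*
   * Busy wait for ten ticks.  This hangs forever under the idle
   * clock because the spinning task never blocks, so the lower
   * priority clock task never runs and the tick count never
   * advances.  With a real interrupt driven clock, the tick
   * interrupt preempts the loop and the wait eventually ends.
   */
  static void busy_wait_ten_ticks( void )
  {
    rtems_interval start = rtems_clock_get_ticks_since_boot();
    while ( rtems_clock_get_ticks_since_boot() - start < 10 ) {
      /* spin, expecting a clock interrupt to advance time */
    }
  }
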
> I have attached a list of the 24 BSPs which use the simulated
> clock tick and the 47 tests which will never complete execution
> without a real interrupt driven clock tick.
>
> When running the tests, these are known to fail, but nothing in
> the build or test framework accounts for that. These tests simply
> run until a time limit is reached; in most of our simulator
> scripts, that limit is 180 seconds.
>
> For a single test execution cycle, this wastes approximately 141
> minutes per BSP (47 tests x 180 seconds). Over 24 BSPs, that is
> about 56 hours and 24 minutes wasted.
>

This does add up for rtems-test, which runs tests on the simulators
concurrently. For me, the sis takes a little over 4 minutes to run all
the tests, so each test that times out holds a core for 3 minutes.

> There are two possibilities for what to do:
>
> + not build these for these BSPs
> + build but do not execute these for these BSPs
>
> Personally I like building them and not executing them. It is better
> to build all the code we can for tool and RTEMS integrity purposes.

I have to agree with you, as I suspect some architectures only have a
simulator BSP.

Can these tests detect a simulated clock and just abort, and so be
seen as a fail? We can then list those tests as expected fails.
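
Something along these lines, perhaps. This is only a sketch and it
assumes a feature macro the BSP would have to provide; the macro name
BSP_CLOCK_DRIVER_USES_IDLE_TASK is invented for this example:

  #include <rtems.h>
  #include <stdio.h>
  #include <stdlib.h>

  rtems_task Init( rtems_task_argument argument )
  {
    (void) argument;
  #if defined(BSP_CLOCK_DRIVER_USES_IDLE_TASK) /* invented name */
    /* Bail out early with a message the test framework can match. */
    puts( "*** TEST REQUIRES AN INTERRUPT DRIVEN CLOCK TICK ***" );
    exit( 0 );
  #endif
    /* ... the normal test body would go here ... */
    exit( 0 );
  }

  #define CONFIGURE_APPLICATION_NEEDS_CONSOLE_DRIVER
  #define CONFIGURE_APPLICATION_NEEDS_CLOCK_DRIVER
  #define CONFIGURE_RTEMS_INIT_TASKS_TABLE
  #define CONFIGURE_MAXIMUM_TASKS 1
  #define CONFIGURE_INIT
  #include <rtems/confdefs.h>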

>
> Whatever we decide to do, I think we should think in generic
> terms that "test X requires capability Y which a BSP may or may
> not provide". How can we say BSP does not have capability Y?

The rtems-test tool will only have pass and fail; a timeout or an
invalid result is considered a fail. The state of a BSP is managed by
its expected-fail list, and that is all I see being exported by RTEMS
for each BSP. Any test that fails and is not listed as an expected
fail will be considered a regression.

As for the question you ask, I do not have an answer.

> And thus have a master list of tests which need that capability.

The patch I posted for the tests avoids a master list. Sebastian and I
both agreed that a file per BSP is better and easier to maintain than
a master list.

> My first thought was to integrate this information into
> rtems-testing/sim-scripts. This would build them but not execute
> them. If we take this approach, this would have to be integrated
> into the rtems-test framework as it matures.

I am wondering if the state of the tests should be held in RTEMS,
copied into the build tree, and installed when RTEMS is installed. I
think there are inherent issues when the state of two different
packages needs to be kept in sync.

I currently feel that any test built should be run, and that only the
tests expected to fail should be listed in a file. This means these
tests need a way to abort and be listed as expected fails.
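
As a straw man, the per BSP file could be as simple as one test name
per line. The format here is invented for discussion, with the names
taken from Joel's list:

  # Expected fails for a simulator BSP using the idle clock.
  sp04
  spintrcritical01
  spintrcritical02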

> They could also be integrated into Chris' "do not build"
> framework which was posted this week.
>
> I would like some discussion on this. Addressing this in a general
> way is important for getting accurate, quality test results and
> for being able to execute the tests in a reasonable length of time.

I hope to add support to rtems-test to script the tests that require
user input so they do not sit and time out.

Chris

>
> [1] Why use these simulators? We use them because they are
> sufficient to test most of RTEMS, run all of the GCC tests, are
> usually sufficient to debug an architecture-specific problem that
> is not interrupt-related, and are often built into GDB.