*.tcfg specification?
Chris Johns
chrisj at rtems.org
Wed Nov 13 00:39:10 UTC 2019
On 13/11/19 10:09 am, Joel Sherrill wrote:
> On Tue, Nov 12, 2019 at 4:42 PM Chris Johns <chrisj at rtems.org> wrote:
> On 13/11/19 1:30 am, Sebastian Huber wrote:
> > Hello Chris,
> >
> > thanks for your comments. I have now converted all tests except the Ada tests,
>
> Nice.
>
> > tests which use the pax tool,
>
> This might help ...
>
> https://git.rtems.org/rtems_waf/tree/rootfs.py
>
> > and the dl* tests.
>
> Plus this ...
>
> https://git.rtems.org/rtems_waf/tree/dl.py
>
> > For an approach to address the test
> > states please see below.
> >
> > On 11/11/2019 23:25, Chris Johns wrote:
> >> On 11/11/19 7:13 pm, Sebastian Huber wrote:
> >>> what is the purpose of the *.tcfg test configuration files?
> >>
> >> The tcfg files provide a way to implement the "test controls" ...
> >>
> >> https://docs.rtems.org/branches/master/user/testing/tests.html#test-controls
> >>
> >> ... as well as excluding a test. A test executable that is not excluded must
> >> build, be loadable on a BSP target, and execute.
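As a rough illustration, a tcfg file is a list of `state: test` lines plus
include directives that pull in shared lists. Something like this sketch
(illustrative only, the test names are made up):

  # <bsp>-testsuite.tcfg - hypothetical example
  include: testdata/small-memory-testsuites.tcfg
  exclude: dl01
  expected-fail: sp99
  user-input: capture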
> >
> > There are some *norun* tests.
> >
>
> Ah yes. I seem to remember we gave the tests a different extension so rtems-test
> does not find them?
>
> >> Tests can be excluded due to
> >> an architecture issue (lacking an architecture feature), a BSP issue (not
> >> enough memory) or an RTEMS or tools support related issue (missing feature).
> >> A test built and run on a BSP target uses the "test banners" to publish its
> >> state ...
> >>
> >> https://docs.rtems.org/branches/master/user/testing/tests.html#
> >>
> >> The existing implementation is a mix of build system controls to exclude a
> >> test and command line defines to control the build in a standard and
> >> controlled way. The rtems-test tool understands the test banners and adapts
> >> its control based on what the test reports. For example, a `user-input`
> >> test is terminated and the next test is run.
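To make that concrete, the state is reported in the test's banner output,
along these lines (illustrative, not exact output):

  *** BEGIN OF TEST SP 1 ***
  *** TEST STATE: USER_INPUT
  ...
  *** END OF TEST SP 1 ***

rtems-test matches the banner, sees the state, and for `user-input` kills the
test and moves on to the next one rather than waiting forever.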
> >
> > Ok, it seems a test program has exactly one state for a given BSP.
>
> Yes, this is driven by the ABI, hardware profile and similar constraints.
>
>
> This is generally the case but we have a few BSPs where the same executable
> runs on real hardware and multiple simulators. The SPARC BSPs are at the
> top of this list. The pc386 is another example.
>
> We currently have no way to specify that a test could pass on real hardware,
> fail on simulator 1 and pass on simulator 2. For the SPARC BSPs, we now
> have real HW, sis, qemu and tsim.
>
> The .tcfg files don't begin to address this and I really don't have any good
> ideas myself. It is just a very real-life case that it would be nice to address
> so we have 100% expected passes on all execution platforms.
I think it is a difficult problem to try to solve in RTEMS with test states. I
think this is best handled in the ecosystem with rtems-test and anything that
processes the results it generates. In the ecosystem we have more flexibility
to define a hardware profile and assign a BSP to it.
A hardware test result is of higher importance than a simulator result, and a
simulator test result is of higher importance than no test result. The
expected-fail state is feedback driven and we cannot set this state without a
suitable set of results. The tier a BSP resides in defines how you view the
test state for a specific BSP.
Here are some observations ...
1. By definition an accurate simulator will always match the hardware test results
2. We need to provide a workable way to capture the test results and feed
them back into the tiers
3. The tiers need to be current and visible to be meaningful to our users
4. We have no expected-fail states in any tests and we should ...
$ grep 'expected-fail' `find . -name \*.tcfg` | wc -l
0
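When the results do support marking a test, it should only be a one line
change in the relevant tcfg, for example (hypothetical test name):

  expected-fail: sp99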
Chris