*.tcfg specification?

Chris Johns chrisj at rtems.org
Mon Nov 11 22:25:03 UTC 2019


On 11/11/19 7:13 pm, Sebastian Huber wrote:
> what is the purpose of the *.tcfg test configuration files? 

The tcfg files provide a way to implement the "test controls" ...

https://docs.rtems.org/branches/master/user/testing/tests.html#test-controls

.. as well as excluding a test. A test executable that is not excluded must
build, be loadable on a BSP target, and execute. Tests can be excluded due to
an architecture issue (a missing architecture feature), a BSP issue (not enough
memory) or an RTEMS or tools issue (a missing feature). A test built and run on
a BSP target uses the "test banners" to publish its state ...

https://docs.rtems.org/branches/master/user/testing/tests.html#

The existing implementation is a mix of build system controls to exclude a test
and command line defines to control the build in a standard and controlled way.
The rtems-test tool understands the test banners and adapts its control based
on what the test reports. For example, a `user-input` test is terminated and the
next test is run.

> You can disable compilation of test programs with them, e.g.
> 
> exclude: flashdisk01

Yes. There is also `rexclude` a regex filter, for example `rexclude: dl[0-9][0-9]`.

> The "testsuites/rtems-test-check.py" contains a bit more stuff. What is really
> needed? 

I suspect you will end up needing to match the functionality provided. What
rtems-test-check.py supports has grown since I introduced this support. It
started off as a simple script, however it did not do what we needed, and adding
features caused the performance to blow out, so I switched to Python; since then
more complexity has been added. The script is currently run once for each test,
which can be avoided with waf.

> I have to map this to the new build system.

Yes you will. The mapping of the tcfg files is more complex than it may appear.
The complexity is in what we need to represent and not in the implementation. In
a waf build system the configure phase could load the tcfg files in a single
pass and then hold them in memory while the build instance for each test is
created.
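
To illustrate the single pass, the configure step could do something along
these lines (a minimal sketch only; load_tcfg(), the assumed `directive: value`
line format and the TEST_CONTROLS/TCFG_FILES names are mine and not part of
any existing code):

import os

def load_tcfg(path, seen=None):
    # Flatten one tcfg file, following its include: lines, into a list of
    # (directive, value) pairs.
    seen = seen if seen is not None else set()
    entries = []
    with open(path) as f:
        for line in f:
            line = line.split('#', 1)[0].strip()
            if not line or ':' not in line:
                continue
            directive, value = [p.strip() for p in line.split(':', 1)]
            if directive == 'include' and value not in seen:
                seen.add(value)
                entries += load_tcfg(value, seen)
            else:
                entries.append((directive, value))
    return entries

def configure(conf):
    # Parse every tcfg file once and keep the result in the environment so
    # the per-test build instances can be created without running a script
    # for each test.
    conf.env.TEST_CONTROLS = dict(
        (os.path.basename(p), load_tcfg(p)) for p in conf.env.TCFG_FILES)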

A quick summary:

1. includes
2. excludes and includes with regular expression filtering, i.e. exclude a group
of tests then include something specific
3. test controls

Include support allows a single definition of a test profile and its reuse
across a number of BSPs. The use of regex filters allows for subtle variations
on how this works.
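
For example, a BSP level tcfg could be as small as this (illustrative only, the
file name is made up; include, exclude and rexclude are the directives
mentioned above):

 # hypothetical some-bsp-testsuite.tcfg
 include: testsuites/testdata/small-memory-testsuite.tcfg
 exclude: linpack
 rexclude: dl[0-9][0-9]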

> One approach is to use the "enabled-by" attribute to disable test programs on
> demand, e.g.:
> 
> $ cat spec/build/testsuites/samples/RTEMS-BUILD-TEST-SAMPLE-TICKER.yml

Hmm, I had not noticed .yml is being used. By default emacs does not know this
is a YAML format file. Why has this been selected over .yaml? Having to teach
editors about a new extension is painful.

> active: true
> build-type: test-program
> cflags: []
> cppflags: []
> cxxflags: []
> derived: false
> enabled-by:
> - not: DISABLE_TEST_SAMPLE_TICKER
> features: c cprogram
> header: ''
> includes: []
> level: 1.15
> links: []
> normative: true
> order: 0
> ref: ''
> reviewed: null
> source:
> - testsuites/samples/ticker/init.c
> - testsuites/samples/ticker/tasks.c
> stlib: []
> target: testsuites/samples/ticker.exe
> text: ''
> type: build
> use: []
> 
> A BSP can use a "build-type: option" file to disable tests, e.g.
> 
> actions:
> - disable-tests:
>   - SAMPLE/TICKER
>   - LIB/FLASHDISK01

I would be careful before heading down this path. Let's assume 150 BSPs and 570
tests, that is 85,500 tests and test states. Adding a modest number of test
cases could affect the readability of these files. I think you will need to
structure this rather than having it expressed as a flat list.

There are 77 tcfg files. There are top-level definitions that are used by BSPs:

 testsuites/testdata/disable-intrcritical-tests.tcfg
 testsuites/testdata/disable-libdl-tests.tcfg
 testsuites/testdata/disable-mrfs-tests.tcfg
 testsuites/testdata/rtems.tcfg
 testsuites/testdata/small-memory-testsuite.tcfg
 testsuites/testdata/require-tick-isr.tcfg
 testsuites/testdata/disable-jffs2-tests.tcfg
 testsuites/testdata/disable-iconv-tests.tcfg

The rtems.tcfg defines the state of tests common to all BSPs, for example
`user-input` and `benchmark`. These are invariant. Others, like
small-memory-testsuite.tcfg, are more complex and include other test profiles
such as disable-iconv-tests.tcfg.
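
To give an idea of the shape, rtems.tcfg is essentially a list of state
directives, roughly like this (illustrative, not a verbatim copy, check the
file for the real contents):

 user-input: monitor
 user-input: fileio
 benchmark: dhrystone
 benchmark: linpack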

I can see us having test profiles based on an architecture or BSP family before
you reach the BSP. I have not checked to see if this is already happening.

If you could structure the dependency in a way that reflects what is there, it
would help. For example tms570ls3137_hdk_with_loader-testsuite.tcfg includes
small-memory-testsuite.tcfg plus it excludes linpack. If that BSP's spec was just
`small-memory` and `exclude: linpack` it would be readable and maintainable.
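
In spec terms that might look something like this (purely illustrative, none of
these keys come from the actual spec format):

 # hypothetical BSP test-suite spec fragment
 test-profiles:
 - small-memory
 actions:
 - exclude-tests:
   - linpack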

I understand wanting to use the spec support however I do not know enough about
it to offer a specific solution.

> An open issue is how and if we should set other states, e.g. expected to fail.
> We could generate a "bsp/teststate.h" file for each BSP which optionally defines
> for each test a state other than "pass", e.g.
> 
> #define TEST_SAMPLE_TICKER_STATE RTEMS_TEST_STATE_FAIL

Why not stay with defines for this specific case? I understand and agree with
the issue of figuring out what can happen in a build when you need to
understand the command line, but in this case the parameters are bounded and
known. You need a build instance for each test a BSP has, and you can add a
specific set of defines to each build instance. Waf will track the flags in a
build and any changes will rebuild the dependent targets.
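
A sketch of what I mean, using waf's standard `defines` support on a task
generator (the test list, the state table and the define naming are only for
illustration):

def build(bld):
    # Hypothetical data built once from the spec/tcfg files: every test for
    # this BSP, plus the states that deviate from the default "pass".
    tests = ['samples/hello', 'samples/ticker']
    test_states = {'samples/ticker': 'RTEMS_TEST_STATE_FAIL'}

    for test in tests:
        defines = []
        state = test_states.get(test)
        if state is not None:
            # e.g. TEST_SAMPLES_TICKER_STATE=RTEMS_TEST_STATE_FAIL, along the
            # lines of the define above; waf tracks the flags, so changing a
            # state rebuilds only the affected test.
            name = 'TEST_%s_STATE' % test.replace('/', '_').upper()
            defines.append('%s=%s' % (name, state))
        bld(features='c cprogram',
            source=['testsuites/%s/init.c' % test],
            target='testsuites/%s.exe' % test,
            defines=defines)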

> This could be also realized with a special option action:
> 
> actions:
> - set-test-states:
>   - SAMPLE/TICKER: disable
>   - LIB/FLASHDISK01: xfail

I have avoided abbreviations so far and used `expected-fail`. I would like it to
stay that way if that is OK.

> The "set-test-states" action can validate if the state exists. I can use the
> "enable-by" for "disable" states and the "bsp/teststate.h" file for other states.

My concerns with a single test state header are:

1. size, we have a lot of tests
2. slipping into adding things which should not be there but could be, i.e.
adding C code to control something
3. potential cross-talk between tests, i.e. I borrow a piece of code from
another test and do not change a dependent define.

That said, I do like the idea of a single place to inspect how tests are
configured.

Chris

