<div dir="ltr"><div dir="ltr"><br></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Tue, Nov 12, 2019 at 4:42 PM Chris Johns <<a href="mailto:chrisj@rtems.org">chrisj@rtems.org</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">On 13/11/19 1:30 am, Sebastian Huber wrote:<br>
> Hello Chris,<br>
> <br>
> thanks for your comments. I converted now all tests except the Ada tests, <br>
<br>
Nice.<br>
<br>
> tests which use the pax tool,<br>
<br>
This might help ...<br>
<br>
<a href="https://git.rtems.org/rtems_waf/tree/rootfs.py" rel="noreferrer" target="_blank">https://git.rtems.org/rtems_waf/tree/rootfs.py</a><br>
<br>
> and the dl* tests. <br>
<br>
Plus this ...<br>
<br>
<a href="https://git.rtems.org/rtems_waf/tree/dl.py" rel="noreferrer" target="_blank">https://git.rtems.org/rtems_waf/tree/dl.py</a><br>
<br>
> For an approach to address the test<br>
> states please see below.<br>
> <br>
> On 11/11/2019 23:25, Chris Johns wrote:<br>
>> On 11/11/19 7:13 pm, Sebastian Huber wrote:<br>
>>> what is the purpose of the *.tcfg test configuration files?<br>
>><br>
>> The tcfg files provide a way to implement the "test controls" ...<br>
>><br>
>> <a href="https://docs.rtems.org/branches/master/user/testing/tests.html#test-controls" rel="noreferrer" target="_blank">https://docs.rtems.org/branches/master/user/testing/tests.html#test-controls</a><br>
>><br>
>> .. as well as excluding a test. A test executable that is not excluded must<br>
>> build and be loadable on a BSP target and executed.<br>
> <br>
> There are some *norun* tests.<br>
> <br>
<br>
Ah yes. I seem to remember we gave the tests a different extension so rtems-test<br>
does not find them?<br>
<br>
>> Tests can be excluded due to<br>
>> an architecture issue (lacking an architecture feature), BSP issue (not enough<br>
>> memory) or an RTEMS or tools support related issue (missing feature). A build<br>
>> test run on a BSP target uses the "test banners" to publish its state ...<br>
>><br>
>> <a href="https://docs.rtems.org/branches/master/user/testing/tests.html#" rel="noreferrer" target="_blank">https://docs.rtems.org/branches/master/user/testing/tests.html#</a><br>
>><br>
>> The existing implementation is a mix of build system controls to exclude a test<br>
>> and command line defines to control the build in a standard and controlled way.<br>
>> The rtems-test tool understands the test banners and adapts its control based<br>
>> on what the test reports. For example a `user-input` test is terminated and the<br>
>> next test is run.<br>
> <br>
> Ok, it seems a test program has exactly one state for a given BSP.<br>
<br>
Yes, this is driven by the ABI, hardware profile and similar constraints.<br></blockquote><div><br></div><div>This is generally the case but we have a few BSPs where the same executable</div><div>runs on real hardware and multiple simulators. The SPARC BSPs are at the</div><div>top of this list. The pc386 is another example.</div><div><br></div><div>We currently have no way to specify that a test could pass on real hardware,</div><div>fail on simulator 1 and pass on simulator 2. For the SPARC BSPs, we now</div><div>have real HW, sis, qemu and tsim. </div><div><br></div><div>The .tcfg files don't begin to address this and I really don't have any good</div><div>ideas myself. It is just a very real life case that it would be nice to address</div><div>so we have 100% expected passes on all execution platforms.</div><div> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
<br>
>>> You can disable compilation of test programs with them, e.g.<br>
>>><br>
>>> exclude: flashdisk01<br>
>><br>
>> Yes. There is also `rexclude` a regex filter, for example `rexclude:<br>
>> dl[0-9][0-9]`.<br>
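>><br>
>> For illustration, a profile .tcfg along those lines might read (the test names<br>
>> and include path here are only examples):<br>
>><br>
>> include: testdata/small-memory-testsuite.tcfg<br>
>> exclude: flashdisk01<br>
>> rexclude: dl[0-9][0-9]<br>
>> expected-fail: psxfenv01<br>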
>><br>
>>> The "testsuites/rtems-test-check.py" contains a bit more stuff. What is really<br>
>>> needed?<br>
>><br>
>> I suspect you will end up needing to match the functionality provided. What<br>
>> rtems-test-check.py supports has grown since I introduced this support. It<br>
>> started off as a simple script, however it did not do what we needed, and adding<br>
>> features caused the performance to blow out, so I switched to Python. Since then<br>
>> more complexity has been added. The script is currently run once for each test,<br>
>> that can be avoided with waf.<br>
> <br>
> I don't want to use this script in the new build system. <br>
<br>
That is understandable. It is a purpose built script for the existing build<br>
system. I think the functionality it provided is the important part.<br>
<br>
> I want to map the<br>
> *.tcfg functionality to the build specification and use the means that are<br>
> already there as far as possible. I would like to avoid too many special cases.<br>
<br>
I am not sure I understand what you mean by special cases? Providing an exact fit<br>
does not give us space to grow, and we all change shape over time; it cannot be<br>
avoided. I am a living example of that.<br>
<br>
>>> I have to map this to the new build system.<br>
>><br>
>> Yes you will. The mapping of the tcfg files is more complex than it may appear.<br>
>> The complexity is in what we need to represent and not in the implementation. In<br>
>> a waf build system the configure phase could be a single pass where the tcfg<br>
>> files are loaded in a single pass and then held in memory while the build<br>
>> instance for each test is created.<br>
>><br>
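>> As a sketch of that single pass (not the real rtems-test-check.py, and the<br>
>> directive handling is simplified to the forms shown in this thread):<br>
>><br>
>> import os<br>
>> import re<br>
>><br>
>> def load_tcfg(path, states=None, patterns=None):<br>
>>     # Collect {test: state} plus the rexclude patterns in one pass; the<br>
>>     # patterns are matched against the full test list afterwards.<br>
>>     if states is None:<br>
>>         states, patterns = {}, []<br>
>>     base = os.path.dirname(path)<br>
>>     with open(path) as f:<br>
>>         for line in f:<br>
>>             line = line.split('#', 1)[0].strip()<br>
>>             if ':' not in line:<br>
>>                 continue<br>
>>             directive, arg = (p.strip() for p in line.split(':', 1))<br>
>>             if directive == 'include':<br>
>>                 # Path resolution is an assumption; the real files may be<br>
>>                 # relative to testsuites/ rather than the including file.<br>
>>                 load_tcfg(os.path.join(base, arg), states, patterns)<br>
>>             elif directive == 'rexclude':<br>
>>                 patterns.append(re.compile(arg))<br>
>>             else:<br>
>>                 states[arg] = directive  # exclude, expected-fail, user-input, ...<br>
>>     return states, patterns<br>
>><br>
>> Run once per BSP at configure time, the result can then drive each per-test<br>
>> build instance without reopening any files.<br>
>><br>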
>> A quick summary:<br>
>><br>
>> 1. includes<br>
>> 2. excludes and includes with regular expression filtering, ie exclude a bunch<br>
>> then include something specific<br>
>> 3. test controls<br>
>><br>
>> Include support allows a single definition of a test profile and reuse across a<br>
>> number of BSPs. The use of regex filters allows for subtle variations in how this<br>
>> works.<br>
> <br>
> In theory, we could support regular expressions in the "enabled-by" attribute.<br>
> Is this really necessary? It shouldn't be an issue to enumerate all tests that<br>
> belong to a certain group.<br>
> <br>
> $ grep --include='*.tcfg' rexclude -r .<br>
> ./testsuites/testdata/disable-libdl-tests.tcfg:rexclude: dl[0-9][0-9]<br>
<br>
The support was added because I needed to update the list of excluded libdl<br>
tests when a new test was added, and I always seemed to forget until, a while<br>
later, Joel pinged me about a breakage. A user adding a new libdl test would not<br>
know a change was needed.<br>
<br>
The functionality lets us name tests of a specific class or category and then<br>
manage that class.<br>
<br>
> It is only used for the dl* tests and it would be simple to just give a list of<br>
> tests.<br>
<br>
This is a case of something being present and its use growing as we find more<br>
cases of it being needed. It is about providing a framework and not a specific<br>
fit. I would not place too much on there being only one instance of its use.<br>
<br>
>>> One approach is to use the "enabled-by" attribute to disable test programs on<br>
>>> demand, e.g.:<br>
>>><br>
>>> $ cat spec/build/testsuites/samples/RTEMS-BUILD-TEST-SAMPLE-TICKER.yml<br>
>><br>
>> Hmm, I had not noticed .yml is being used. By default emacs does not know this<br>
>> is a YAML format file. Why has this been selected over .yaml? Having to teach<br>
>> editors about a new extension is painful.<br>
> <br>
> Doorstop uses this extension. We would have to adjust doorstop to use another<br>
> extension.<br>
<br>
That is a shame.<br>
<br>
>>> active: true<br>
>>> build-type: test-program<br>
>>> cflags: []<br>
>>> cppflags: []<br>
>>> cxxflags: []<br>
>>> derived: false<br>
>>> enabled-by:<br>
>>> - not: DISABLE_TEST_SAMPLE_TICKER<br>
>>> features: c cprogram<br>
>>> header: ''<br>
>>> includes: []<br>
>>> level: 1.15<br>
>>> links: []<br>
>>> normative: true<br>
>>> order: 0<br>
>>> ref: ''<br>
>>> reviewed: null<br>
>>> source:<br>
>>> - testsuites/samples/ticker/init.c<br>
>>> - testsuites/samples/ticker/tasks.c<br>
>>> stlib: []<br>
>>> target: testsuites/samples/ticker.exe<br>
>>> text: ''<br>
>>> type: build<br>
>>> use: []<br>
>>><br>
>>> A BSP can use a "build-type: option" file to disable tests, e.g.<br>
>>><br>
>>> actions:<br>
>>> - disable-tests:<br>
>>>   - SAMPLE/TICKER<br>
>>>   - LIB/FLASHDISK01<br>
>><br>
>> I would be careful before heading down this path. Let's assume 150 BSPs and 570<br>
>> tests, that is 85,500 tests and test states. Adding a modest number of test<br>
>> cases could affect the readability of these files. I think you will need to<br>
>> structure this rather than having it expressed as a flat list.<br>
> <br>
> You will have a flat list per BSP (with includes via item "links") just as you<br>
> have it now with the *.tcfg files.<br>
<br>
Perfect. There is no need to have the same implementation in the new build system.<br>
<br>
We need to make sure we capture the important pieces of information we have now<br>
that have taken a few years to build up ...<br>
<br>
$ grep 'small-memory-testsuite' `find . -name \*.tcfg` | wc -l<br>
27<br>
<br>
We have 27 small memory BSPs (remove the line count and you see the list). If a<br>
new test is added that cannot run on small memory BSPs we need a simple way in<br>
one location to update the list. This applies to each of the top level test<br>
controls we have.<br>
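<br>
With a shared profile that stays a one-line change in one place, e.g. for a<br>
hypothetical oversized newtest01:<br>
<br>
exclude: newtest01<br>
<br>
added to small-memory-testsuite.tcfg, or to whatever single spec item replaces it.<br>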
<br>
>> There are 77 tcfg files. There are top level definitions that are used by BSPs:<br>
>><br>
>> testsuites/testdata/disable-intrcritical-tests.tcfg<br>
>> testsuites/testdata/disable-libdl-tests.tcfg<br>
>> testsuites/testdata/disable-mrfs-tests.tcfg<br>
>> testsuites/testdata/rtems.tcfg<br>
>> testsuites/testdata/small-memory-testsuite.tcfg<br>
>> testsuites/testdata/require-tick-isr.tcfg<br>
>> testsuites/testdata/disable-jffs2-tests.tcfg<br>
>> testsuites/testdata/disable-iconv-tests.tcfg<br>
> <br>
> You can do includes through the "links" attribute. The number of test state<br>
> specification items will be roughly the same as the *.tcfg file count.<br>
<br>
Nice.<br>
<br>
>> The rtems.tcfg defines the state of tests common to all BSPs, for example<br>
>> `user-input` and `benchmark`. These are invariant. Others like<br>
>> small-memory-testsuite.tcfg are more complex, including other test profiles like<br>
>> disable-iconv-tests.tcfg etc.<br>
>><br>
>> I can see us having test profiles based on an architecture or BSP family before<br>
>> you reach the BSP. I have not checked to see if this is already happening.<br>
>><br>
>> If you could structure the dependency in a way that reflects what is there, it<br>
>> would help. For example tms570ls3137_hdk_with_loader-testsuite.tcfg includes<br>
>> small-memory-testsuite.tcfg plus it excludes linpack. If that BSP's spec was just<br>
>> `small-memory` and `exclude: linpack` it would be readable and maintainable.<br>
>><br>
>> I understand wanting to use the spec support however I do not know enough about<br>
>> it to offer a specific solution.<br>
> <br>
> The "links" attribute is a very important thing in the specification item<br>
> business. <br>
<br>
I agree.<br>
<br>
> I guess I have to add some dot images to the software requirements<br>
> engineering chapter.<br>
><br>
>>> An open issue is how and if we should set other states, e.g. expected to fail.<br>
>>> We could generate a "bsp/teststate.h" file for each BSP which optionally defines<br>
>>> for each test a state other than "pass", e.g.<br>
>>><br>
>>> #define TEST_SAMPLE_TICKER_STATE RTEMS_TEST_STATE_FAIL<br>
>><br>
>> Why not stay with defines for this specific case? I understand and agree with<br>
>> the issue of figuring out what can happen with a build because you need to<br>
>> understand the command line, but in this case the parameters are bounded and<br>
>> known. You need a build instance for each test a BSP has and you can add a<br>
>> specific set of defines to each build instance. Waf will track the flags in a<br>
>> build and any changes will rebuild the dependent targets.<br>
>><br>
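>> In waf terms that is just one task generator per test carrying its own<br>
>> defines, roughly (a sketch only, nothing here is the final naming):<br>
>><br>
>> def build_test(bld, name, sources, state_defines):<br>
>>     # state_defines is e.g. ['TEST_STATE_EXPECTED_FAIL=1'], or empty for a<br>
>>     # plain expected-pass build of the test.<br>
>>     bld(features='c cprogram',<br>
>>         target='testsuites/samples/%s.exe' % name,<br>
>>         source=sources,<br>
>>         defines=state_defines)<br>
>><br>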
>>> This could be also realized with a special option action:<br>
>>><br>
>>> actions:<br>
>>> - set-test-states:<br>
>>>   - SAMPLE/TICKER: disable<br>
>>>   - LIB/FLASHDISK01: xfail<br>
>><br>
>> I have avoided abbreviations so far and used `expected-fail`. I would like it to<br>
>> stay that way if that is OK.<br>
>><br>
>>> The "set-test-states" action can validate if the state exists. I can use the<br>
>>> "enable-by" for "disable" states and the "bsp/teststate.h" file for other<br>
>>> states.<br>
>><br>
>> My concerns with a single test state header are:<br>
>><br>
>> 1. size, we have a lot of tests<br>
> <br>
> It doesn't matter much where you place the information. With the CPPFLAGS<br>
> approach the information just moves to the configuration set environment.<br>
<br>
I am not sure yet. I suppose in time we will find out.<br>
<br>
>> 2. slipping into adding things which should not be there but could be, ie adding<br>
>> C code to control something<br>
>> 3. potential cross-talk between tests, ie I borrow a piece of code from another<br>
>> test and do not change a dependent define.<br>
> <br>
> I think 2. and 3. are theoretical issues since the content of the header file<br>
> would be determined by the build option item actions.<br>
<br>
Item 2 is about a policy for changes in this area and rejecting patches that<br>
step into adding C code type functionality.<br>
<br>
I can see 3. happening; it is only a matter of time.<br>
<br>
>> I also like the idea of a single place to inspect how tests are configured.<br>
> <br>
> Currently you have an optional *.tcfg per base BSP variant with an ability to<br>
> include files. You also have a magic rtems.tcfg which is added by default. This<br>
> is not a single place to inspect from my point of view. However, I don't think a<br>
> good alternative exists.<br>
<br>
Yes, this is a weakness with the tcfg files that I was going to solve by adding<br>
a command to the top level of the source tree to provide reports. I stopped this<br>
when you started on this build system task.<br>
<br>
I have been asking for months, hmmm maybe a year or more for the tcfg files to<br>
be updated with expected-fail states so rtems-test can determine regressions. I<br>
had started to wonder if core maintainers were not sure how to make these changes<br>
and that is reasonable because there was no documentation and no method to<br>
report the effect of the change without a full build and test run. I needed to<br>
make the process easier and simpler for it to be used.<br>
<br>
> I have now practically re-implemented the *.tcfg files and the existing mechanism<br>
> (modulo the regular expressions). You can find an example here:<br>
> <br>
> <a href="https://git.rtems.org/sebh/rtems.git/tree/spec/build/bsps/RTEMS-BUILD-BSP-TSTFIXME.yml?h=build" rel="noreferrer" target="_blank">https://git.rtems.org/sebh/rtems.git/tree/spec/build/bsps/RTEMS-BUILD-BSP-TSTFIXME.yml?h=build</a><br>
<br>
Is the test name always in caps?<br>
<br>
What about ...<br>
<br>
DL[0-9][0-9]: exclude<br>
<br>
? :)<br>
<br>
> This is more or less the old rtems.tcfg.<br>
> <br>
> It is pulled in by:<br>
> <br>
> <a href="https://git.rtems.org/sebh/rtems.git/tree/spec/build/bsps/RTEMS-BUILD-BSP-BSPOPTS.yml?h=build" rel="noreferrer" target="_blank">https://git.rtems.org/sebh/rtems.git/tree/spec/build/bsps/RTEMS-BUILD-BSP-BSPOPTS.yml?h=build</a><br>
<br>
How are links and their checksum maintained?<br>
<br>
> Which in turn is included by every BSP, e.g.<br>
> <br>
> <a href="https://git.rtems.org/sebh/rtems.git/tree/spec/build/bsps/sparc/erc32/RTEMS-BUILD-BSP-SPARC-ERC32-ERC32.yml?h=build" rel="noreferrer" target="_blank">https://git.rtems.org/sebh/rtems.git/tree/spec/build/bsps/sparc/erc32/RTEMS-BUILD-BSP-SPARC-ERC32-ERC32.yml?h=build</a><br>
<br>
Does this mean a change in RTEMS-BUILD-BSP-TSTFIXME.yml changes each file in<br>
the link tree as we work up to the root?<br>
<br>
> For the "set-test-state" action see:<br>
> <br>
> <a href="https://git.rtems.org/sebh/rtems.git/tree/wscript?h=build#n593" rel="noreferrer" target="_blank">https://git.rtems.org/sebh/rtems.git/tree/wscript?h=build#n593</a><br>
<br>
Nice.<br>
<br>
> Each test has a TEST_EXCLUDE_<NAME> enable and a "cppflags" attribute set to<br>
> "- ${TEST_CPPFLAGS_<NAME>}".<br>
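> <br>
> So a dl01 item, for example, would roughly carry (a sketch based on the above):<br>
> <br>
> enabled-by:<br>
> - not: TEST_EXCLUDE_DL01<br>
> cppflags:<br>
> - ${TEST_CPPFLAGS_DL01}<br>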
> <br>
> The enable and cppflags are set by the "set-test-state" action and are stored in<br>
> the environment:<br>
> <br>
> $ grep -r TEST_ build/c4che/sparc/erc32_cache.py<br>
> ENABLE = ['sparc', 'erc32', 'RTEMS_NETWORKING', 'RTEMS_NEWLIB',<br>
> 'RTEMS_POSIX_API', 'RTEMS_SMP', 'BUILD_BENCHMARKS', 'BUILD_FSTESTS',<br>
> 'BUILD_LIBTESTS', 'BUILD_MPTESTS', 'BUILD_PSXTESTS', 'BUILD_PSXTMTESTS',<br>
> 'BUILD_RHEALSTONE', 'BUILD_SAMPLES', 'BUILD_SMPTESTS', 'BUILD_SPTESTS',<br>
> 'BUILD_TMTESTS', 'TEST_EXCLUDE_DL04', 'TEST_EXCLUDE_DL05', 'TEST_EXCLUDE_DL06',<br>
> 'TEST_EXCLUDE_DL07', 'TEST_EXCLUDE_DL01', 'TEST_EXCLUDE_DL02',<br>
> 'TEST_EXCLUDE_DL03', 'TEST_EXCLUDE_TAR01', 'TEST_EXCLUDE_TAR02',<br>
> 'TEST_EXCLUDE_TAR03', 'TEST_EXCLUDE_DL08', 'TEST_EXCLUDE_DL09',<br>
> 'TEST_EXCLUDE_DL10', 'TEST_EXCLUDE_MGHTTPD01']<br>
> TEST_CPPFLAGS_CAPTURE = ['-DTEST_STATE_USER_INPUT=1']<br>
> TEST_CPPFLAGS_DHRYSTONE = ['-DTEST_STATE_BENCHMARK=1']<br>
> TEST_CPPFLAGS_FILEIO = ['-DTEST_STATE_USER_INPUT=1']<br>
> TEST_CPPFLAGS_LINPACK = ['-DTEST_STATE_BENCHMARK=1']<br>
> TEST_CPPFLAGS_MONITOR = ['-DTEST_STATE_USER_INPUT=1']<br>
> TEST_CPPFLAGS_PSXFENV01 = ['-DTEST_STATE_EXPECTED_FAIL=1']<br>
> TEST_CPPFLAGS_TERMIOS = ['-DTEST_STATE_USER_INPUT=1']<br>
> TEST_CPPFLAGS_TOP = ['-DTEST_STATE_USER_INPUT=1']<br>
> TEST_CPPFLAGS_WHETSTONE = ['-DTEST_STATE_BENCHMARK=1']<br>
<br>
Looks good.<br>
<br>
I think in time we will need a reporting tool for the spec data in the source<br>
tree, for example a way to get the list of tests that are not an expected-pass.<br>
Does Doorstop provide easy-to-run specialised reports?<br>
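<br>
Even a small stop-gap script over the spec files might do, something like this<br>
(assuming PyYAML and the set-test-state item layout sketched above):<br>
<br>
import glob<br>
import yaml<br>
<br>
# List every test whose state is overridden away from the default expected-pass.<br>
for path in sorted(glob.glob('spec/build/**/*.yml', recursive=True)):<br>
    with open(path) as f:<br>
        item = yaml.safe_load(f)<br>
    if not isinstance(item, dict):<br>
        continue<br>
    for action in item.get('actions', []):<br>
        for entry in action.get('set-test-state', []) or []:<br>
            for test, state in entry.items():<br>
                print('%s: %s (%s)' % (test, state, path))<br>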
<br>
Chris<br>
_______________________________________________<br>
devel mailing list<br>
<a href="mailto:devel@rtems.org" target="_blank">devel@rtems.org</a><br>
<a href="http://lists.rtems.org/mailman/listinfo/devel" rel="noreferrer" target="_blank">http://lists.rtems.org/mailman/listinfo/devel</a></blockquote></div></div>