[RTEMS Project] #3977: Add unexplained-failure as option for Test Configuration Files
RTEMS trac
trac at rtems.org
Wed May 13 01:32:55 UTC 2020
#3977: Add unexplained-failure as option for Test Configuration Files
---------------------------+---------------------
 Reporter:  Joel Sherrill  |       Owner:  joel@…
     Type:  enhancement    |      Status:  new
 Priority:  normal         |   Milestone:  6.1
Component:  test           |     Version:
 Severity:  normal         |  Resolution:
 Keywords:                 |  Blocked By:
 Blocking:                 |
---------------------------+---------------------
Comment (by Chris Johns):
For the record, the TCFG files were created to manage the states for the
tester; the change included the ability to exclude a test. The tester
command had the state checks and handling added in April 2017. The test
states were also added to `rtems.git` at that time, so all of the states
have been present and documented together in
[https://git.rtems.org/rtems/commit/testsuites/README.testdata?id=18f63c0004cc3348bc785a642da82d2a3d46db5d
README.testdata]. I also raised #2962 at the time to indicate we need to
manage this before we release.
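As a reminder of what is already there, a TCFG file is a plain list of
directives, one per line. The following is only a sketch from memory of
the style README.testdata describes; the test names are placeholders:

  #
  # Example BSP test configuration (names are placeholders).
  #
  include: testdata/default.tcfg
  exclude: placeholder-test-01
  expected-fail: placeholder-test-02
  user-input: placeholder-test-03
  indeterminate: placeholder-test-04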
From my point of view, and from `rtems-test`'s, nothing has changed. What
was added did not set a policy for managing failures or for their
relationship to releasing; any failure, known, expected or otherwise, is
just a failure. The tester either expects to see a failure or it does not,
and only for accounting purposes.
I have gone away and spent time looking at adding this state, and the
change feels wrong: this state, and any like it, is basically a logical
`OR` with `expected-fail`, and the tester does not care about the policy
that manages it or that arrives at the expected-fail state. The end goal
is to have `./waf test` run all the tests on a configured test harness,
with `waf` returning a non-zero exit code for any regressions from the
build's baseline. I doubt we have a single BSP that does this on any test
harness.
We should actively discourage detailed per-test analysis based on the test
results; they are simply a tool to account for a build's baseline. The
results' stats should be bounded and consistent, and should provide a
simple way for a machine to determine the state of a build.
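To make that end goal concrete, here is a minimal sketch of the kind of
machine check `./waf test` or a CI step could run. None of this is
existing tooling; the script, the file names and the baseline and failure
list formats are all assumptions. It compares the failures reported by a
run against the build's recorded baseline and exits non-zero on any new
failure:

  #!/usr/bin/env python3
  # Sketch only: compare a run's failing tests against a per-BSP baseline
  # and exit non-zero on any regression. Both inputs are assumed to be
  # plain text files with one test name per line.
  import sys

  def load_names(path):
      # Ignore blank lines and '#' comments.
      with open(path) as f:
          return {line.strip() for line in f
                  if line.strip() and not line.lstrip().startswith('#')}

  def main(baseline_path, failures_path):
      baseline = load_names(baseline_path)   # failures accounted for in the baseline
      failures = load_names(failures_path)   # failures reported by this run
      for name in sorted(baseline - failures):
          print('now passing (baseline may need an update): ' + name)
      regressions = sorted(failures - baseline)
      for name in regressions:
          print('regression: ' + name)
      return 1 if regressions else 0

  if __name__ == '__main__':
      sys.exit(main(sys.argv[1], sys.argv[2]))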
I have posted a
[https://lists.rtems.org/pipermail/devel/2020-May/059842.html detailed
alternative]. For these to work we will need a workable set of policies
for the management of test failures.
I think it is unrealistic to expect RTEMS to have a 100% pass rate on all
tests on all BSPs on all test platforms a BSP supports. There will be
failures. Little or no analysis of the existing failing tests has been
captured and accounted for, so I dispute the assertion that moving them to
expected failures will stop any analysis. I believe the solution to this
issue lies elsewhere.
I am fine with tests that have no analysis being left as unexpected
failures, but we need a way for buildbot, or any other CI tool, to run the
test suite on selected profiles and know whether a change has caused a
regression. Any test failure in a clean clone of `rtems.git` breaks the
ability to do this. This is the first thing we need to fix.
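For a buildbot step this would reduce to running the tester on the
selected profile, extracting the list of failing tests from its report
(that extraction is assumed here), and gating the build on the exit code
of a check like the sketch above, along the lines of:

  rtems-test --rtems-bsp=<bsp> <path-to-built-testsuites>
  check-baseline.py baselines/<bsp>.txt failures.txt

where `check-baseline.py`, the baseline directory and `failures.txt` are
placeholders, not existing files.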
Transitioning from the current state to one where the tester can detect
regressions requires some effort. It is unfair to instantly say ''"all
commits to rtems.git must have no unexpected results for BSP XXX on YYY"''
if that is the policy. I do think we need to select a few key BSPs and
test harnesses that are required to have no unexpected results before a
release can be made.
I would like to close this ticket and open a new one for the tester state
changes. I also suggest creating a ticket to document the
unexpected-results policy.
--
Ticket URL: <http://devel.rtems.org/ticket/3977#comment:1>
RTEMS Project <http://www.rtems.org/>