[RTEMS Project] #3977: Add unexplained-failure as option for Test Configuration Files
RTEMS trac
trac at rtems.org
Sun May 10 16:34:31 UTC 2020
#3977: Add unexplained-failure as option for Test Configuration Files
----------------------------+--------------------
  Reporter:  Joel Sherrill  |      Owner:  joel@…
      Type:  enhancement    |     Status:  new
  Priority:  normal         |  Milestone:  6.1
 Component:  test           |    Version:
  Severity:  normal         |   Keywords:
Blocked By:                 |   Blocking:
----------------------------+--------------------
RTEMS has many BSPs and it is a large amount of work to investigate each
test execution that has an unexpected result. The tcfg files were
initially created just to avoid trying to build tests whose executables
did not fit into target memory. That has since grown to incorporate
knowledge about which tests require human input and are not automated.
This ticket is to add a new test state for "unexplained failure".
If a test has been investigated and has an explanation, it is a known
failure. The explanation could be a ticket (e.g. dynamic loading or
fenv not supported on an architecture) or just a comment (e.g. the mips
jmr3904 simulator trapping invalid memory accesses before the trap
handler is invoked).
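For illustration, here is a minimal sketch of how a tcfg file could
record all of these states side by side. The "unexplained-failure"
directive spelling and the test names are placeholders, not taken from
any real tcfg file; the final syntax would be settled during
implementation:

    # executable does not fit into target memory
    exclude: big01

    # requires a human at the console, not automated
    user-input: capture

    # known failure: dynamic loading not supported on this
    # architecture (explained by a ticket or a comment like this)
    expected-fail: dl06

    # fails, but no one has investigated and explained it yet
    unexplained-failure: sp99

A one-line comment above each entry is enough to tell "explained"
apart from "needs investigation" when skimming a tcfg file.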
But many tests are currently marked as failures simply because they do
fail and no one has investigated and explained why. The addition of
"unexplained failure" is to capture those. This will make it clear to
anyone looking at the results that these tests are known to fail but
still need investigation. Marking them as "known failure" would
permanently put them in a category where they will not get looked at
again.
This has two side-effects. The first is that we can have test results
with no "unexpected failures": every failure is either known and
explained or known and in need of investigation. This helps anyone
looking at test results, since there should be no unexpected failures.
The second side-effect is that it hopefully captures the set of
repeatedly failing tests and encourages us to investigate them and file
tickets or add explanations.
The milestone for this is **NOT** 5.x because this moves the goalposts
for test results. Release 5.x is the first release with the body of
test results data we now have. The build@ mailing list has those
results, and any interested user can compare them. We also do not yet
have a way to machine check results, which would ease checking the test
results for deviations.
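As a rough idea of what machine checking could look like, below is a
small Python sketch that diffs two result summaries and reports
deviations. It assumes the results have first been reduced to a simple
"test: state" line format; that intermediate format is an assumption
for this sketch, not something our tooling produces today:

    #!/usr/bin/env python3
    # Sketch only: report deviations between two test-result
    # summaries given as "testname: state" lines (assumed format).
    import sys

    def load_results(path):
        results = {}
        with open(path) as f:
            for line in f:
                line = line.strip()
                if not line or line.startswith('#'):
                    continue
                name, _, state = line.partition(':')
                results[name.strip()] = state.strip()
        return results

    def main():
        if len(sys.argv) != 3:
            sys.exit('usage: check-results BASELINE CURRENT')
        baseline = load_results(sys.argv[1])
        current = load_results(sys.argv[2])
        deviations = 0
        # Flag any test whose state changed, appeared, or vanished.
        for name in sorted(set(baseline) | set(current)):
            old = baseline.get(name, '<missing>')
            new = current.get(name, '<missing>')
            if old != new:
                print('%s: %s -> %s' % (name, old, new))
                deviations += 1
        # Non-zero exit status lets scripts and CI flag deviations.
        sys.exit(1 if deviations else 0)

    if __name__ == '__main__':
        main()

With "unexplained failure" in place, a clean run would show zero
deviations from the baseline rather than a pile of failures that each
need a human to re-triage.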
Note: As an odd case, there are multiple BSPs whose executables can run
on hardware or on one or more simulators. Sometimes the results vary
across target environments even for the same executable. This is at
least something to consider, as we want a trustworthy and checkable set
of test results available.
--
Ticket URL: <http://devel.rtems.org/ticket/3977>
RTEMS Project <http://www.rtems.org/>