BSP Test Results
Joel Sherrill
joel at rtems.org
Tue Sep 8 22:50:34 UTC 2020
On Mon, Sep 7, 2020 at 12:44 AM Chris Johns <chrisj at rtems.org> wrote:
> On 7/9/20 2:16 pm, Joel Sherrill wrote:
> >
> >
> > On Sun, Sep 6, 2020, 9:55 PM Chris Johns <chrisj at rtems.org> wrote:
> >
> > Hello,
> >
> > I would like to discuss BSP Test results early in the release cycle
> > in the hope we avoid the last minute issues we encountered with
> > RTEMS 5 and the "expected" failure state ticket.
> >
> > I would like to update this section ...
> >
> > https://docs.rtems.org/branches/master/user/testing/tests.html#expected-test-states
> >
> > to state there is to be a ticket for each `expected-fail` test state.
> > I believe this was the outcome of the discussion that took place.
> > Please correct me if this is not correct.
> >
> > The purpose of the `expected-fail` is to aid the accounting of the
> > test results to let us know if there are any regressions. We need to
> > account for tests that fail so we can track if a recent commit
> > results in a new failure, i.e. a regression. To do this we need to
> > capture the state in a way `rtems-test` can indicate a regression.
> >
> > I think the `indeterminate` state may need further explanation as it
> > will help in the cases a simulator passes a test but the test fails
> > on some hardware. I am currently seeing this with spcache01 on the
> > PC BSP.
> >
> >
> > I don't mind this as long as we have a rigid format to ensure these
> > are easy to find.
>
> Do you mean find the location in the tcfg files?
>
No. I just mean sometimes there is a comment directly above the
exclude line, sometimes above a group, etc. I would want something
like:

  exclude: dl06 # #1234 dynamic loading broken on RISC-V

Maybe even not just as a free-form comment but as a ticket number, so it
could be checked that every exclude has a ticket number in the form
#NNNN[N] (four or five digits).
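
As a rough sketch of the kind of check I mean (this assumes the one-line
`exclude: <test> # #NNNN <reason>` convention above; the directory walked
and the file glob are only illustrative, not how rtems-test works today):

#!/usr/bin/env python3
# Sketch only: flag `exclude:` lines in *.tcfg files that do not carry a
# #NNNN[N] ticket number. The "exclude: test # #1234 reason" line format
# and the directory layout are assumptions, adjust to the real tree.
import re
import sys
from pathlib import Path

TICKET = re.compile(r'#\s*#?\d{4,5}\b')

def missing_tickets(root):
    problems = []
    for tcfg in Path(root).rglob('*.tcfg'):
        for lineno, line in enumerate(tcfg.read_text().splitlines(), 1):
            if line.strip().startswith('exclude:') and not TICKET.search(line):
                problems.append('%s:%d: %s' % (tcfg, lineno, line.strip()))
    return problems

if __name__ == '__main__':
    for problem in missing_tickets(sys.argv[1] if len(sys.argv) > 1 else '.'):
        print(problem)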
>
> > Perhaps info at the end of the line marking this state. In the past,
> > we have tended to put our comments before.
>
> If you mean the location in the tcfg files being printed out by the
> test, I have no idea how to do that. What about grep?
>
grep works if it is on the same line. Hence the above suggestion. If we
want to add excludes, let's document which test and ticket in a way that,
at least in theory, we could check if the ticket is open or closed.
There is no way to automate that it applies to this BSP's exclude, I think.
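
For the "open or closed" part, something like this would be enough,
assuming the tracker at devel.rtems.org keeps the stock Trac per-ticket
CSV export (the URL shape below is the generic Trac one and is only an
assumption):

# Sketch only: look up whether a ticket referenced by an exclude is
# still open. Assumes Trac's ticket/NNNN?format=csv export is available;
# adjust the URL if the tracker changes.
import csv
import io
import urllib.request

def ticket_status(number):
    url = 'https://devel.rtems.org/ticket/%d?format=csv' % number
    with urllib.request.urlopen(url) as response:
        text = response.read().decode('utf-8-sig')
    rows = list(csv.DictReader(io.StringIO(text)))
    return rows[0].get('status', 'unknown') if rows else 'unknown'

if __name__ == '__main__':
    print(ticket_status(1234))  # prints e.g. 'closed' or 'assigned'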
> > With the level of continuous building and testing we are currently
> > doing, being able to easily determine a regression will become
> > important. Check out the example below.
> >
> > I would like to avoid us sitting with failures that do not have
> > tickets and are not accounted for. I know there is a lump of work to
> > account for the failures and after that is done I think the effort
> > needed to maintain the failure states will drop.
> >
> > As a result I have been pondering how I can encourage this work be
> > done. I am considering updating the tier-1 status to requiring there
> > be 0 unaccounted for failures. That is, the `rtems-test` Failure
> > count is 0 for a hardware test run.
> >
> > Chris
> >
> > An example using Joel's recent test run (thanks Joel :)).
> >
> >
> > No problem. I do want to point out that you can't tell that the
> > tests ran on sis rather than Tsim or some specific hardware. For
> > leon3, you can add qemu to the list of potential "platforms". I've
> > mentioned this before.
>
> Maybe the `rtems-test` command line option name is misleading. It is
> `--rtems-bsp=` but it is more like `--bsp-target=`. Some BSPs have more
> than one way to be run, for example psim and psim-run. I see in one of
> the tests I linked to, the command line has `--rtems-bsp=leon2-sis`.
>
> > While typing this, I had the idea that maybe adding a note argument
> > that gets added to the report might be an easy solution. Whoever runs
> > the test could freeform annotate the board model, simulator and
> > version, etc. This would at least allow the logs to show which
> > platform when we need to look for an issue.
>
> If you think there should be a list of annotations a test run needs then
> please
> create a ticket with a list.
>
This would be enough to address my "HW or which simulator" question.
>
> > Also I am not sure but hopefully the test reports do accurately
> > reflect the host OS.
>
> There is a "Host" section at the top of the results log? It is just `uname
> -a`.
>
I think that's sufficient as long as it can distinguish Linux
distributions.
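
On Linux, `uname -a` alone does not name the distribution, so that Host
line may need a little help. A sketch of what I mean, reading the
standard /etc/os-release where it exists (purely illustrative, not what
`rtems-test` does today):

# Sketch only: build a host description that also names the Linux
# distribution. /etc/os-release is standard on systemd-based distros;
# elsewhere this falls back to the plain uname fields.
import platform
from pathlib import Path

def host_description():
    desc = ' '.join(platform.uname())
    os_release = Path('/etc/os-release')
    if os_release.exists():
        fields = dict(
            line.split('=', 1)
            for line in os_release.read_text().splitlines()
            if '=' in line
        )
        pretty = fields.get('PRETTY_NAME', '').strip('"')
        if pretty:
            desc += ' (%s)' % pretty
    return desc

if __name__ == '__main__':
    print(host_description())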
--joel
>
> Chris
>