Where are rtems-tester test results archived?
chrisj at rtems.org
Wed Sep 25 01:42:15 UTC 2019
On 25/9/19 1:08 am, Gedare Bloom wrote:
> On Sat, Sep 21, 2019 at 9:41 AM Joel Sherrill <joel at rtems.org> wrote:
>> On Sat, Sep 21, 2019, 10:18 AM <dufault at hda.com> wrote:
>>>> On Sep 21, 2019, at 11:03 , Joel Sherrill <joel at rtems.org> wrote:
>>>> On Sat, Sep 21, 2019, 9:55 AM Peter Dufault <dufault at hda.com> wrote:
>>>> I’ve searched but can’t find anywhere. I’d like to see the results of the tests on all architectures to compare to what I see on PowerPC-beatnik.
>>>> There is a build@ mailing list and the archives are at https://lists.rtems.org/pipermail/build/
>>>> There should be results from at least me for psim.
>>>> You are encouraged to subscribe to the list and post results. Many boards have no results.
>>> That doesn’t look like what I want. I’m looking for something like the following (a small snippet of my test run in progress) to see what failures are shared by what board support packages.
>>> [141/597] p:128 f:7 u:2 e:0 I:0 B:3 t:0 i:0 W:0 | powerpc/beatnik: telnetd01.exe
>>> [142/597] p:129 f:7 u:2 e:0 I:0 B:3 t:0 i:0 W:0 | powerpc/beatnik: termios.exe
>>> [143/597] p:129 f:7 u:3 e:0 I:0 B:3 t:0 i:0 W:0 | powerpc/beatnik: termios01.exe
>>> [144/597] p:129 f:8 u:3 e:0 I:0 B:3 t:0 i:0 W:0 | powerpc/beatnik: termios02.exe
>>> [145/597] p:129 f:9 u:3 e:0 I:0 B:3 t:0 i:0 W:0 | powerpc/beatnik: termios03.exe
>>> [146/597] p:130 f:9 u:3 e:0 I:0 B:3 t:0 i:0 W:0 | powerpc/beatnik: termios04.exe
>>> [147/597] p:131 f:9 u:3 e:0 I:0 B:3 t:0 i:0 W:0 | powerpc/beatnik: termios05.exe
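[For readers comparing runs like the snippet above: each progress line carries the running counters ahead of the BSP and test name. The counter letters are assumed here to abbreviate the tester's summary categories (p: passed, f: failed, u: user input, and so on; check your rtems-test version). A hypothetical Python sketch, not part of rtems-tester, that pulls the per-BSP result out of such a line:

```python
import re

# Hypothetical parser for rtems-test progress lines such as:
#   [141/597] p:128 f:7 u:2 e:0 I:0 B:3 t:0 i:0 W:0 | powerpc/beatnik: telnetd01.exe
# The single-letter counters are assumed to abbreviate the tester's
# summary categories; verify the meanings against your tester version.
LINE_RE = re.compile(
    r"\[(?P<index>\d+)/(?P<total>\d+)\]\s+"   # [141/597]
    r"(?P<counters>(?:\w:\d+\s+)+)\|\s+"      # p:128 f:7 ... |
    r"(?P<bsp>\S+):\s+(?P<test>\S+)"          # powerpc/beatnik: telnetd01.exe
)

def parse_progress_line(line):
    """Return (bsp, test, counters dict) or None if the line does not match."""
    m = LINE_RE.search(line)
    if m is None:
        return None
    counters = {}
    for field in m.group("counters").split():
        key, _, value = field.partition(":")
        counters[key] = int(value)
    return m.group("bsp"), m.group("test"), counters

bsp, test, counters = parse_progress_line(
    "[144/597] p:129 f:8 u:3 e:0 I:0 B:3 t:0 i:0 W:0 | powerpc/beatnik: termios02.exe"
)
print(bsp, test, counters["f"])  # -> powerpc/beatnik termios02.exe 8
```

A small tool built on this could diff the failure sets of two BSPs' logs, which is the comparison asked about above.]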
>> Most are for tools builds. I have a script I run periodically which includes tools and bsps. This is a psim results post: https://lists.rtems.org/pipermail/build/2019-August/002973.html
>> It shows the failures but still doesn't show the in process view. Does that help at all?
> Just to jump in, as I recall the "in process" view is not always
> useful, since it does not exactly tell you what has failed (when you
> run several tests in parallel, that is). Collecting the end results
> could be nice.
Sorry, I am not following what is being said here. The tester collects the
results for a single run as shown in ...
which is ...
User Input: 6
Expected Fail: 0
Wrong Version: 0
Wrong Build: 0
Wrong Tools: 0
Are you asking for something across separate runs of the tester?
> Automating this and creating a colored status matrix
> would be a nice little project for someone.
Where would this be run and the data held?
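[To make the "colored status matrix" idea concrete: assuming results from separate tester runs could be collected somewhere as (bsp, test, status) records, the matrix itself is a small transform. Everything below is a hypothetical illustration; none of these names exist in rtems-tester, and the open questions of where it runs and where the data lives remain.

```python
# Minimal sketch of a BSP-by-test status matrix, assuming results have
# already been gathered as (bsp, test, status) tuples from separate runs.
# All function names here are hypothetical, not rtems-tester APIs.

def build_matrix(results):
    """results: iterable of (bsp, test, status) -> {test: {bsp: status}}."""
    matrix = {}
    for bsp, test, status in results:
        matrix.setdefault(test, {})[bsp] = status
    return matrix

def render(matrix, bsps, width=20):
    """Render as plain text; a web front end could color the cells instead."""
    lines = ["test".ljust(width) + "".join(b.ljust(width) for b in bsps)]
    for test in sorted(matrix):
        row = matrix[test]
        lines.append(test.ljust(width) +
                     "".join(row.get(b, "-").ljust(width) for b in bsps))
    return "\n".join(lines)

results = [
    ("powerpc/beatnik", "termios02.exe", "fail"),
    ("sparc/erc32", "termios02.exe", "pass"),
    ("powerpc/beatnik", "telnetd01.exe", "pass"),
]
print(render(build_matrix(results), ["powerpc/beatnik", "sparc/erc32"]))
```

Cells with no result render as "-", which directly surfaces the "many boards have no results" gap mentioned earlier in the thread.]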