<div dir="auto"><div><br><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Tue, Sep 24, 2019, 7:37 PM Chris Johns <<a href="mailto:chris@contemporary.net.au" target="_blank" rel="noreferrer">chris@contemporary.net.au</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">On 25/9/19 1:08 am, Gedare Bloom wrote:<br>
> On Sat, Sep 21, 2019 at 9:41 AM Joel Sherrill <<a href="mailto:joel@rtems.org" rel="noreferrer noreferrer" target="_blank">joel@rtems.org</a>> wrote:<br>
>> On Sat, Sep 21, 2019, 10:18 AM <<a href="mailto:dufault@hda.com" rel="noreferrer noreferrer" target="_blank">dufault@hda.com</a>> wrote:<br>
>>>> On Sep 21, 2019, at 11:03 , Joel Sherrill <<a href="mailto:joel@rtems.org" rel="noreferrer noreferrer" target="_blank">joel@rtems.org</a>> wrote:<br>
>>>> On Sat, Sep 21, 2019, 9:55 AM Peter Dufault <<a href="mailto:dufault@hda.com" rel="noreferrer noreferrer" target="_blank">dufault@hda.com</a>> wrote:<br>
>>>> I’ve searched but can’t find it anywhere. I’d like to see the results of the tests on all architectures to compare to what I see on PowerPC-beatnik.<br>
>>>> There is a build@ mailing list and the archives are at <a href="https://lists.rtems.org/pipermail/build/" rel="noreferrer noreferrer noreferrer" target="_blank">https://lists.rtems.org/pipermail/build/</a><br>
>>>> There should be results from at least me for psim.<br>
>>>> You are encouraged to subscribe to the list and post results. Many boards have no results.<br>
>>> That doesn’t look like what I want. I’m looking for something like the following (a small snippet of my test run in progress) to see what failures are shared by what board support packages.<br>
>>><br>
>>> [141/597] p:128 f:7 u:2 e:0 I:0 B:3 t:0 i:0 W:0 | powerpc/beatnik: telnetd01.exe<br>
>>> [142/597] p:129 f:7 u:2 e:0 I:0 B:3 t:0 i:0 W:0 | powerpc/beatnik: termios.exe<br>
>>> [143/597] p:129 f:7 u:3 e:0 I:0 B:3 t:0 i:0 W:0 | powerpc/beatnik: termios01.exe<br>
>>> [144/597] p:129 f:8 u:3 e:0 I:0 B:3 t:0 i:0 W:0 | powerpc/beatnik: termios02.exe<br>
>>> [145/597] p:129 f:9 u:3 e:0 I:0 B:3 t:0 i:0 W:0 | powerpc/beatnik: termios03.exe<br>
>>> [146/597] p:130 f:9 u:3 e:0 I:0 B:3 t:0 i:0 W:0 | powerpc/beatnik: termios04.exe<br>
>>> [147/597] p:131 f:9 u:3 e:0 I:0 B:3 t:0 i:0 W:0 | powerpc/beatnik: termios05.exe<br>
>><br>
>> Most are for tools builds. I have a script I run periodically which includes tools and bsps. This is a psim results post: <a href="https://lists.rtems.org/pipermail/build/2019-August/002973.html" rel="noreferrer noreferrer noreferrer" target="_blank">https://lists.rtems.org/pipermail/build/2019-August/002973.html</a><br>
>><br>
>> It shows the failures but still doesn't show the in-process view. Does that help at all?<br>
>><br>
> Just to jump in, as I recall the "in process" view is not always<br>
> useful, since it does not exactly tell you what has failed (when you<br>
> run several tests in parallel, that is). Collecting the end results<br>
> could be nice. <br>
<br>
Sorry, I am not following what is being said here. The tester collects the<br>
results for a single run as shown in ...<br>
<br>
<a href="https://lists.rtems.org/pipermail/build/2019-August/002970.html" rel="noreferrer noreferrer noreferrer" target="_blank">https://lists.rtems.org/pipermail/build/2019-August/002970.html</a><br>
<br>
which is ...<br>
<br>
Summary<br>
=======<br>
<br>
Passed: 578<br>
Failed: 2<br>
User Input: 6<br>
Expected Fail: 0<br>
Indeterminate: 0<br>
Benchmark: 3<br>
Timeout: 0<br>
Invalid: 0<br>
Wrong Version: 0<br>
Wrong Build: 0<br>
Wrong Tools: 0<br>
------------------<br>
Total: 589<br>
<br>
Failures:<br>
dl06.exe<br>
dl09.exe<br>
User Input:<br>
dl10.exe<br>
monitor.exe<br>
termios.exe<br>
top.exe<br>
capture.exe<br>
fileio.exe<br>
Benchmark:<br>
dhrystone.exe<br>
linpack.exe<br>
whetstone.exe<br>
<br>
Are you asking for something across separate runs of the tester?<br><br>
> Automating this and creating a colored status matrix<br>
> would be a nice little project for someone.<br>
<br>
Where would this be run and the data held?<br>
<br>
Chris<br></blockquote></div></div><div dir="auto"><br></div><div dir="auto">I just meant that the view of the running tests is not useful for comparison; it is exactly this summary (the end result) that helps. If we had regular testing, parsing the results and producing a status matrix could help with understanding the tiers. I'm not saying I know how this would be accomplished, and it seems it would require coordination among community members who test on different BSPs.</div></div>
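As a rough illustration of the status-matrix idea, here is a sketch of what the parsing side could look like. This is a hypothetical Python script, not an existing RTEMS tool: the `parse_summary` and `build_matrix` names, and the assumption that each BSP's results arrive as the plain-text "Summary" report shown above, are mine.

```python
# Hypothetical sketch: parse rtems-test summary text (as posted to the
# build@ list) into per-BSP results, then combine several BSPs' results
# into a test-by-BSP status matrix. Assumes the report lists section
# headers like "Failures:" followed by one test name per line.
import re
from collections import defaultdict

def parse_summary(text):
    """Return {category: [test names]} from a 'Summary' report."""
    results = defaultdict(list)
    category = None
    for line in text.splitlines():
        line = line.strip()
        if not line:
            continue
        # Section headers such as "Failures:" or "User Input:".
        m = re.match(r'(Failures|User Input|Benchmark|Timeout|Invalid):$', line)
        if m:
            category = m.group(1)
        elif category and line.endswith('.exe'):
            results[category].append(line)
    return dict(results)

def build_matrix(runs):
    """runs: {bsp: {category: [tests]}} -> list of (test, {bsp: category})."""
    tests = sorted({t for r in runs.values() for ts in r.values() for t in ts})
    matrix = []
    for test in tests:
        row = {}
        for bsp, result in runs.items():
            for category, names in result.items():
                if test in names:
                    row[bsp] = category
        matrix.append((test, row))
    return matrix

if __name__ == '__main__':
    beatnik = parse_summary("Failures:\ndl06.exe\ndl09.exe\n")
    psim = parse_summary("Failures:\ndl09.exe\nUser Input:\ndl10.exe\n")
    for test, row in build_matrix({'beatnik': beatnik, 'psim': psim}):
        print(test, row)
```

Tests that pass everywhere are simply absent from every row, so the matrix only shows the interesting (non-passing) cells; coloring and HTML output would sit on top of this.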