<div dir="auto"><div><br><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Sat, Sep 21, 2019, 6:21 PM Chris Johns <<a href="mailto:chrisj@rtems.org">chrisj@rtems.org</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><br>
<br>
On 22/9/19 7:58 am, <a href="mailto:dufault@hda.com" target="_blank" rel="noreferrer">dufault@hda.com</a> wrote:<br>
> <br>
> <br>
>> On Sep 21, 2019, at 17:49 , <<a href="mailto:dufault@hda.com" target="_blank" rel="noreferrer">dufault@hda.com</a>> <<a href="mailto:dufault@hda.com" target="_blank" rel="noreferrer">dufault@hda.com</a>> wrote:<br>
>><br>
>><br>
>><br>
>>> On Sep 21, 2019, at 16:41 , Chris Johns <<a href="mailto:chrisj@rtems.org" target="_blank" rel="noreferrer">chrisj@rtems.org</a>> wrote:<br>
>>><br>
>>> On 22/9/19 1:18 am, <a href="mailto:dufault@hda.com" target="_blank" rel="noreferrer">dufault@hda.com</a> wrote:<br>
>>>>> On Sep 21, 2019, at 11:03 , Joel Sherrill <<a href="mailto:joel@rtems.org" target="_blank" rel="noreferrer">joel@rtems.org</a>> wrote:<br>
>>>>> On Sat, Sep 21, 2019, 9:55 AM Peter Dufault <<a href="mailto:dufault@hda.com" target="_blank" rel="noreferrer">dufault@hda.com</a>> wrote:<br>
>>>>> I’ve searched but can’t find anywhere. I’d like to see the results of the tests on all architectures to compare to what I see on PowerPC-beatnik.<br>
>>>>><br>
>>>>> There is a build@ mailing list and the archives are at <a href="https://lists.rtems.org/pipermail/build/" rel="noreferrer noreferrer" target="_blank">https://lists.rtems.org/pipermail/build/</a><br>
>>>>><br>
>>>>> There should be results from at least me for psim.<br>
>>>>><br>
>>>>> You are encouraged to subscribe to the list and post results. Many boards have no results.<br>
>>>>><br>
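For posting results, rtems-tools' rtems-test can mail a run's report if your
version has the mail support built in. A sketch only: the SMTP host and
build-tree path are placeholders, and the exact options should be confirmed
with rtems-test --help:

    rtems-test --rtems-bsp=psim --mail --smtp-host=<your-smtp-host> \
        <build-tree>/powerpc-rtems5/c/psim/testsuites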
> >>>> That doesn't look like what I want. I'm looking for something like the
> >>>> following (a small snippet of my test run in progress) to see which
> >>>> failures are shared across board support packages.
> >>>>
> >>>> [141/597] p:128 f:7 u:2 e:0 I:0 B:3 t:0 i:0 W:0 | powerpc/beatnik: telnetd01.exe
> >>>> [142/597] p:129 f:7 u:2 e:0 I:0 B:3 t:0 i:0 W:0 | powerpc/beatnik: termios.exe
> >>>> [143/597] p:129 f:7 u:3 e:0 I:0 B:3 t:0 i:0 W:0 | powerpc/beatnik: termios01.exe
> >>>> [144/597] p:129 f:8 u:3 e:0 I:0 B:3 t:0 i:0 W:0 | powerpc/beatnik: termios02.exe
> >>>> [145/597] p:129 f:9 u:3 e:0 I:0 B:3 t:0 i:0 W:0 | powerpc/beatnik: termios03.exe
> >>>> [146/597] p:130 f:9 u:3 e:0 I:0 B:3 t:0 i:0 W:0 | powerpc/beatnik: termios04.exe
> >>>> [147/597] p:131 f:9 u:3 e:0 I:0 B:3 t:0 i:0 W:0 | powerpc/beatnik: termios05.exe
> >>>
> >>> The --log option should place all the test results in a log. Is that
> >>> what you want?
> >>>
> >>> There are other options that let you log the console data as well. This
> >>> is disabled by default to make the log more compact.
> >>>
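For reference, a run that captures results to a log might look like this. A
sketch: the build-tree path is a placeholder for your own tree, and the
options for capturing console data are listed by rtems-test --help:

    rtems-test --rtems-bsp=beatnik --log=beatnik.log \
        <build-tree>/powerpc-rtems5/c/beatnik/testsuites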
> >>
> >> I want a collection of results for all tested BSPs so that I can compare
> >> what I'm seeing with "beatnik" against the others.
> >>
> >> I thought there was a collection of systems that went through automated
> >> tests, an RTEMS testing lab of sorts. Am I confused? I was ready to try
> >> to get an MVME5500, and whatever else was needed, to the lab.
> >>
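Absent a central results lab, two runs can be compared locally. Each progress
line carries running counters (p passed, f failed, u user-input, e
expected-fail, I indeterminate, B benchmark, t timeout, i invalid, W wrong
version/build/tools, matching the summary labels below), so the counter that
ticks up identifies that test's state. A hypothetical helper, assuming the
console output was saved to files in the format shown above:

    #!/usr/bin/env python3
    # Hypothetical helper: label each test in saved rtems-test console
    # output by the counter it bumped, then diff two runs. Assumes the
    # progress-line format shown above; labels are only reliable when
    # each log starts at test [1/N].
    import re
    import sys

    LINE = re.compile(r'\[\s*\d+/\d+\]\s+((?:[pfueIBtiW]:\d+\s*)+)\|'
                      r'\s*(\S+):\s*(\S+)')

    def results(path):
        """Map test name -> the counter key ('p', 'f', ...) it bumped."""
        out, prev = {}, {}
        for line in open(path):
            m = LINE.search(line)
            if not m:
                continue
            counts = {k: int(v) for k, v in
                      re.findall(r'([pfueIBtiW]):(\d+)', m.group(1))}
            for key, value in counts.items():
                if value > prev.get(key, 0):
                    out[m.group(3)] = key
            prev = counts
        return out

    a, b = results(sys.argv[1]), results(sys.argv[2])
    for test in sorted(set(a) | set(b)):
        if a.get(test, '-') != b.get(test, '-'):
            print('%-32s %s -> %s' % (test, a.get(test, '-'),
                                      b.get(test, '-')))

Run it as, say, python3 diff-results.py beatnik-a.log beatnik-b.log, or
against a log someone has posted to build@.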
> > Beatnik finished:
> > [595/597] p:563 f:19 u:6 e:0 I:0 B:3 t:1 i:2 W:0 | powerpc/beatnik: tmonetoone.exe
> > [596/597] p:564 f:19 u:6 e:0 I:0 B:3 t:1 i:2 W:0 | powerpc/beatnik: tmoverhd.exe
> > [597/597] p:565 f:19 u:6 e:0 I:0 B:3 t:1 i:2 W:0 | powerpc/beatnik: tmtimer01.exe
> > Passed:        565
> > Failed:         20
> > User Input:      6
> > Expected Fail:   0
> > Indeterminate:   0
> > Benchmark:       3
> > Timeout:         1
> > Invalid:         2
> > Wrong Version:   0
> > Wrong Build:     0
> > Wrong Tools:     0
> > ------------------
> > Total:         597
> > Average test time: 0:00:20.652608
> > Testing time     : 3:25:29.607257
>
> The results are relative to the BSP. It would be nice to say we can expect
> all tests to pass on all BSPs; however, the likelihood of that happening is
> low. To manage this, so a user of a specific BSP can see whether there are
> regressions, tests can be tagged with the expected result for that BSP. If a
> test is known to fail, it can be set to expected-fail for the BSP, and you
> should then see no failures, invalids, or timeouts. I raised a ticket for
> this ...
>
> https://devel.rtems.org/ticket/2962
>
> ... and while I can do some, I cannot do them all. I am wondering if the
> lack of feedback is related to the process needed to tag a specific test for
> a BSP. I am considering a tool to make the process simpler. It also takes
> time to examine the results and to determine if expected-fail is suitable.

Some of the failures I see on mips are very clearly a lack of support for
something on that architecture. This will be common if we get results on
anything that doesn't have TLS, dynamic loading, or debugging support.

Addressing this type of issue at a level higher than individual BSPs would be
helpful.
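For concreteness, per-BSP tagging is done through the test configuration
(.tcfg) files covered by the test-controls documentation linked below. A
sketch of what entries might look like; the file location and the test names
here are illustrative only, and real entries have to come from examining
actual results:

    #
    # Hypothetical test states for one BSP (e.g. a <bsp>-testsuite.tcfg
    # in the BSP's configuration area). Test names are placeholders.
    #
    # The BSP has no dynamic loading support, so the test is known to fail:
    expected-fail: dl01
    #
    # The test cannot run on this target at all:
    exclude: spcache01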
>
> The test controls are documented here ...
>
> https://docs.rtems.org/branches/master/user/testing/tests.html#test-controls
>
> Chris