Tiering, Expected Failures, and BSPs which Run on Multiple Sims/HW
Chris Johns
chrisj at rtems.org
Sat Oct 26 20:55:16 UTC 2019
On 26/10/19 6:57 pm, Thomas Doerfler wrote:
> Joel,
>
> it all comes down to "where was it tested". Knowing a certain test works
> on ONE architecture, on ONE member of a BSP family never means it works
> on all. Certainly if it fails on one member it makes the other members'
> support suspect, but the result is always for the hardware/simulator
> it ran on.
>
> If a BSP covers various variants and boards/simulators, you must test it
> on all variants to find out whether they work. OTOH, if a BSP at least
> succeeds on one supported HW, it is worth keeping it actively in the tree.
>
> So I still think that we should track test results per
> arch.bsp.variant.HW, not only per arch.bsp.
>
> And we should apply the tiering scheme to the HW, not only the BSP.
A BSP should be specific to a piece of hardware. A BSP that is for simulation
only will never reach tier 1.
A BSP variant like the xilinx_zynq_zc702 (or the zc706) can be affected by
BSPOPTS, localized clock settings, and other factors; however, we currently
do not have any way to capture those options in the build for a test to
print. The key to these BSPs is that the base SoC is the same.
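One possible workaround on the host side: the options do end up in the
generated bspopts.h header, so the tester could scrape them from the build
tree and attach them to the result record. A rough sketch of that idea in
Python; the record shape and the path are assumptions, not anything
rtems-test does today:

    # Sketch only: pull the BSPOPTS out of a generated bspopts.h so a
    # test run record can carry them.
    import re

    _DEFINE = re.compile(r'^#define\s+(\w+)\s*(.*)$')

    def read_bspopts(path):
        # Returns {option: value} for every #define in bspopts.h.
        opts = {}
        with open(path) as header:
            for line in header:
                match = _DEFINE.match(line.strip())
                if match:
                    opts[match.group(1)] = match.group(2).strip() or '1'
        return opts

    # Hypothetical use: attach the options to a run record.
    # record = {'bsp': 'xilinx_zynq_zc706',
    #           'options': read_bspopts('build/include/bspopts.h')}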
The RISCV is a harder problem. I suppose we can assume the tests are being run on
a suitably validated implementation.
> But, yes, it's hard.
>
> Thomas.
>
>>
>> The tester makes a distinction (e.g. leon3-sis vs leon3-tsim or
>> leon3-qemu) but they
>> all test the same binaries with different results. The set of expected
>> failures on each
>> of those could be different.
>>
>> I don't have a good answer. We currently think of three levels:
>>
>> + architecture
>> + BSP Family
>> + BSP Variant
>>
>> and now we are adding the "what did we run that test on" variant for
>> record-keeping purposes.
>>
>> AFAIK the current .tcfg files control only the set of tests that would
>> be considered permanent
>> failures across all "runner" variants.
Currently only a single expected-fail record can exist. I am not sure if you
could add some form of tagging to a specific instance of an expected-fail
record. If you did decide to add a tag of some form, I suspect you would need
to develop a consistent way to describe the test platform and then add the
support to the rtems-test infrastructure.
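To make that concrete, a sketch of one possible scheme in Python; the
arch/family/variant/hw tag format, the wildcard, and the test names are all
assumptions, not existing .tcfg or rtems-test syntax:

    # Sketch only: an expected-fail record optionally tagged with the
    # platform it applies to, written arch/family/variant/hw, where '*'
    # matches anything at that level.
    def platform_matches(tag, platform):
        return all(part in ('*', actual)
                   for part, actual in zip(tag.split('/'), platform))

    records = [
        ('some-test-a', 'sparc/leon3/leon3/qemu'),  # hypothetical
        ('some-test-b', 'riscv/*/*/*'),             # hypothetical
    ]

    platform = ('sparc', 'leon3', 'leon3', 'qemu')
    expected_fails = {test for test, tag in records
                      if platform_matches(tag, platform)}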
Also, the ranking of a BSP against the test platform would need to be
evaluated. I suggest a rank based on the number of failures, preferring the
platform with the lowest number of expected failures that have actually
failed.
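Roughly, as a sketch (the result fields here are invented for illustration):

    # Sketch only: order the platforms a BSP ran on so the fewest
    # failures, and within that the fewest expected failures that did
    # fail, ranks best.
    def rank_key(result):
        return (result['failures'], result['expected_fails_failed'])

    results = [
        {'hw': 'board-a', 'failures': 5, 'expected_fails_failed': 4},
        {'hw': 'board-b', 'failures': 3, 'expected_fails_failed': 1},
    ]
    best_first = sorted(results, key=rank_key)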
Chris
>>
>> This is just hard. :(
>>
>> --joel
>>
>> Kind regards,
>>
>> Thomas.