New validation test suites
Chris Johns
chrisj at rtems.org
Thu Dec 16 03:51:14 UTC 2021
On 16/12/21 3:27 am, Sebastian Huber wrote:
> On 15/12/2021 06:46, Chris Johns wrote:
>> On 14/12/21 6:24 pm, Sebastian Huber wrote:
>>> Hello Chris,
>>>
>>> On 13/12/2021 22:01, Chris Johns wrote:
>>>> On 14/12/21 1:53 am, Sebastian Huber wrote:
> [...]
>>>>> We finished the specification of the pre-qualified RTEMS feature set. The
>>>>> specification is available in an RTEMS Project repository:
>>>>>
>>>>> https://git.rtems.org/rtems-central/tree/spec
>>>>
>>>> I had a quick look. Is there a more user-friendly view of this data?
>>>>
>>>> I think the term "specification" is a little misleading because the data
>>>> files are not easily read by a person. I understand this is the specification
>>>> data set; however, it is not what I am traditionally used to seeing.
>>>
>>> You can use the "./specview.py" script to get views of the specification. For
>>> example, this command displays the transition map for the rtems_signal_send()
>>> directive:
>>
>> Is specview.py part of rtems.git?
>
> No, this script is in rtems-central. This is also the location of the
> specification items.
I am not sure linking to a script in that repo like this is helpful.
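For readers who have not met the directive whose transition map is being
discussed, here is a minimal sketch of rtems_signal_send() in use, assuming
the usual Classic API ASR setup (an illustration, not part of the original
mail or the generated tests):

  #include <rtems.h>

  /* ASR invoked in the context of the receiving task. */
  static rtems_asr signal_handler( rtems_signal_set signals )
  {
    (void) signals;
  }

  /* Run by the receiving task to establish the ASR. */
  static void receiver_setup( void )
  {
    (void) rtems_signal_catch( signal_handler, RTEMS_DEFAULT_MODES );
  }

  /* Run by any task to send a signal set to the receiver. */
  static void notify( rtems_id receiver )
  {
    rtems_status_code sc;

    sc = rtems_signal_send( receiver, RTEMS_SIGNAL_0 );
    /* The transition map enumerates the possible outcomes, for example
       RTEMS_SUCCESSFUL, RTEMS_INVALID_ID, or RTEMS_NOT_DEFINED when no
       ASR is established. */
    (void) sc;
  }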
>> If not part of rtems.git, how much data is there for all the output? That is,
>> is it generated and held in the repo with the tests?
>
> In rtems.git, there are only the generated sources.
>
> [...]
There should be no reaching back to the upstream specs, scripts, etc., and for
good reason. The information you posted is nice and useful, and I do not wish to
release-manage rtems-central to accommodate these tests in a release.
Would capturing that information with the tests be something worth doing?
>>>>> The validation tests are generated from the specification using the
>>>>> "./spec2modules.py" script and end up in the RTEMS sources of a Git
>>>>> submodule. I think the specification and the generation tool are now
>>>>> sufficiently stable that the validation test code can be integrated into
>>>>> the RTEMS master branch. The patch set is too big for the mailing list,
>>>>> so you can review it here:
>>>>>
>>>>> https://git.rtems.org/sebh/rtems.git/log/?h=validation
>>>>
>>>> The link failed.
>>>
>>> Yes, viewing my personal repository no longer works. I am not sure if this is a
>>> temporary issue. This is why I added the github link.
>>
>> It seems to have been temporary. It is back again.
>>
>>>
>>>>
>>>>> https://github.com/sebhub/rtems/tree/validation
>>>>
>>>> The header in a test says the regeneration instructions are in the
>>>> engineering manual, but I could not quickly find them.
>>>
>>> https://docs.rtems.org/branches/master/eng/req/howto.html#generate-content-after-changes
>>>
>>>
>>>
>>> In an earlier version of the header, we had a link which you didn't like:
>>
>> If I need to look at the formatting rules, the heading "Software Development
>> Management" is easy to see, and then a click on "Coding Standards" gives me
>> what I am looking for.
>>
>> To generate these headers, I click on "Software Requirements Engineering" and
>> then do I just guess until I find it in the "How To" section? I am actually
>> asking that this be sorted out so it is not left hanging and we are not left
>> guessing what to do. If it can be rearranged into something meaningful it
>> would help. :)
>
> Well, if you read the text in the header:
>
> * For information on updating and regenerating please refer to the How-To
> * section in the Software Requirements Engineering chapter of the
> * RTEMS Software Engineering manual. The manual is provided as a part of
> * a release. For development sources please refer to the online
> * documentation at:
> *
> * https://docs.rtems.org
>
> Shouldn't you then read the How-To section?
Yes, I should have, and thanks for pointing this out, but I did not see it and
the manual as it stands did not help. I think it should change. That can happen
after this patch set lands, but I think the documentation would read better if
changed.
>>>> What hardware have the validation tests been run on? Any tier 1 archs?
>>>
>>> I tested with the sparc/leon3 BSPs and the arm/realview_pbx_a9_qemu.
>>
>> Is the leon3 tested on hardware or simulation?
>>
>>> You need a
>>> full implementation of the new Interrupt Manager directives and a working Cache
>>> Manager implementation.
>>
>> Is this documented?
>>
>> I am sorry, I do not know the list of archs and BSPs that support the new
>> Interrupt Manager directives. Maybe it would be good to list them?
>
> All BSPs have at least a stub implementation of the new directives. The
> directives are tested in a dedicated test suite. You will notice failures in
> this test suite if the directives are not implemented.
Are these expected failures?
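To make concrete what a "full implementation" of the new Interrupt Manager
directives involves, here is a minimal sketch of the kind of sequence the
dedicated test suite exercises, assuming the RTEMS 6 directives declared in
<rtems/irq-extension.h> (an illustration, not code from the patch set):

  #include <rtems/irq-extension.h>

  static void test_handler( void *arg )
  {
    (void) arg;
  }

  static rtems_interrupt_entry test_entry =
    RTEMS_INTERRUPT_ENTRY_INITIALIZER( test_handler, NULL, "validation" );

  static void exercise_vector( rtems_vector_number vector )
  {
    rtems_status_code sc;

    sc = rtems_interrupt_entry_install(
      vector,
      RTEMS_INTERRUPT_UNIQUE,
      &test_entry
    );

    if ( sc == RTEMS_SUCCESSFUL ) {
      /* A BSP with only stub implementations cannot get this far; the
         suite reports the resulting error statuses as failures. */
      (void) rtems_interrupt_raise( vector );
      (void) rtems_interrupt_entry_remove( vector, &test_entry );
    }
  }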
>>> I noticed an issue with the thread restart on aarch64/a53_lp64_qemu.
>>>
>>> On powerpc/psim there is an issue in one test case, due to:
>>>
>>> #define CPU_ALL_TASKS_ARE_FP CPU_HARDWARE_FP
>>
>> Sorry, I am not following what the issue is. Does this affect all PPC BSPs?
>
> Not all; the newer BSPs have no separate floating-point context.
Which ones have the issue, the newer BSPs or the older ones?
> This is something which needs to be fixed in the specification.
Of?
> From my point of view this is just a minor issue.
As in fixing these tests?
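As background on why the define matters, a hedged sketch of its effect,
assuming the usual score semantics: with CPU_ALL_TASKS_ARE_FP set to
CPU_HARDWARE_FP, i.e. TRUE on these ports, every task receives a
floating-point context even when it does not ask for one.

  #include <rtems.h>

  static void create_plain_task( void )
  {
    rtems_id          id;
    rtems_status_code sc;

    sc = rtems_task_create(
      rtems_build_name( 'P', 'L', 'A', 'I' ),
      1,
      RTEMS_MINIMUM_STACK_SIZE,
      RTEMS_DEFAULT_MODES,
      RTEMS_DEFAULT_ATTRIBUTES, /* RTEMS_FLOATING_POINT not requested */
      &id
    );
    (void) sc;

    /* On a port with CPU_ALL_TASKS_ARE_FP == TRUE this task still gets
       a floating-point context, so a test case that derives its
       expected behaviour from the RTEMS_FLOATING_POINT attribute sees
       a different result than on other ports. */
  }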
>>> Another issue is that the tm27 interrupt must be independent of the clock driver
>>> interrupt. This is not the case for powerpc/psim.
>>>
>>> There is definitely some work left to cover all edge cases. Some tests are quite
>>> complicated.
>>
>> Sure. I would like to understand the effect this has.
>
> Maybe I can rearrange the test cases so that the tm27 support is only used if no
> clock driver is needed. The tm27 support is used to run handlers in interrupt
> context.
OK.
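For context, a minimal sketch of how the tm27 support is typically used,
assuming the long-standing rtems_isr-style tm27.h interface from the timing
tests (the macros are BSP-provided, so treat this as an illustration):

  #include <tm27.h>

  /* Handler run in interrupt context once the tm27 interrupt fires. */
  static rtems_isr tm27_handler( rtems_vector_number vector )
  {
    (void) vector;
    Clear_tm27_intr();
  }

  static void run_in_interrupt_context( void )
  {
    Install_tm27_vector( tm27_handler );
    Cause_tm27_interrupt();
    /* If a BSP wires the tm27 interrupt to the clock driver interrupt,
       as on powerpc/psim, test cases that also need the clock driver
       collide here. */
  }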
>>>> Is there anything that interprets the new test output format? It looks like
>>>> lots of great info but is a little difficult to read.
>>>
>>> EDISOFT worked on a test report generator; however, it is not yet in a
>>> reviewable state.
>>
>> OK. I think something that handles this data would be good to have.
>
> Yes, maybe we could let a student work on this. In theory, this is not
> difficult. Read the report.yaml generated by the RTEMS Tester and convert it
> into a Python object representation. Then use this high-level representation to
> generate a report in format X.
Sounds good.
And we need to get all the BSPs baselined with 0 failures so we know where we
stand as changes are being made.
Chris