New validation test suites

Chris Johns chrisj at rtems.org
Fri Dec 17 03:34:58 UTC 2021


On 16/12/21 6:36 pm, Sebastian Huber wrote:
> On 16/12/2021 04:51, Chris Johns wrote:
>> On 16/12/21 3:27 am, Sebastian Huber wrote:
>>> On 15/12/2021 06:46, Chris Johns wrote:
>>>> On 14/12/21 6:24 pm, Sebastian Huber wrote:
>>>>> Hello Chris,
>>>>>
>>>>> On 13/12/2021 22:01, Chris Johns wrote:
>>>>>> On 14/12/21 1:53 am, Sebastian Huber wrote:
>>> [...]
>>>>>>> We finished the specification of the pre-qualified RTEMS feature set. The
>>>>>>> specification is available in an RTEMS Project repository:
>>>>>>>
>>>>>>> https://git.rtems.org/rtems-central/tree/spec
>>>>>>
>>>>>> I had a quick look. Is there a more user friendly view of this data?
>>>>>>
>>>>>> I think the term "specification" is a little bit misleading because the data
>>>>>> files are not easily read by a person. I understand this is the specification
>>>>>> data set, however it is not what I am traditionally used to seeing.
>>>>>
>>>>> You can use the "./specview.py" script to get views of the specification.  For
>>>>> example, this command displays the transition map for the rtems_signal_send()
>>>>> directive:
>>>>
>>>> Is specview.py part of rtems.git?
>>>
>>> No, this script is in rtems-central.  This is also the location of the
>>> specification items.
>>
>> I am not sure linking a script from that repo like this is helpful.
>>
>>>> If not part of rtems.git how much data is there for all the output? That is it
>>>> is generated and held in the repo with the tests?
>>>
>>> In rtems.git, there are only the generated sources.
>>>
>>> [...]
>>
>> There should be no reach back to the upstream specs, scripts etc and for good
>> reasons. The information you posted is nice and useful and I do not wish to
>> release manage rtems-central to accommodate these tests in a release.
>>
>> Would capturing that information with the tests be something worth doing?
> 
> I don't think it would be useful. If you want to modify the tests you should
> work with the specification items and the corresponding scripts.

This is not about modifying the tests. As I previously stated, the tests provide
little detail on the verification matrix being solved.

In relation to where it is best to make changes, rtems-central may be the best
place; however, we will accept patches to the tests as they are in rtems.git. How
that is pushed back to rtems-central is not a focus here.

> Adding the
> tables as comments would blow up the sources considerably. Some tests have about
> 50000 table entries and the table entries depend on C preprocessor defines.

Ah yes, I agree, and this is what I was wanting to understand.

I think we need to understand what a release will contain because the specs used
are not captured with a release. I appreciate the efforts to make this available
as a workflow for development and for the pre-qual process, but I am now
critically examining what this means for a release. For example, let's say years
after a release someone questions a test; they will only have the test source
code in the release package, and I think this is a shortcoming. Adding the
rtems-central repo as a package to the release's sources may be a solution,
however this creates further issues. How do I know that the master of
rtems-central and the committed sources match when creating the release, given
that separate pieces of rtems.git may have been updated at different commit
points in rtems-central?
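One possible mitigation, sketched below, is to record the rtems-central commit in
every generated source and check that they all agree when cutting a release. The
"Generated from spec commit" header convention here is purely an assumption for
illustration, not the actual format of the RTEMS generated-file headers:

```python
# Hypothetical consistency check: every generated source in rtems.git is
# assumed to carry a header recording the rtems-central commit it was
# generated from; a release script could then verify they all agree.
import re

# Illustrative header convention, NOT the real RTEMS header format.
COMMIT_RE = re.compile(r"Generated from spec commit ([0-9a-f]{7,40})")

def recorded_commits(headers):
    """Map each generated file name to the spec commit found in its header."""
    commits = {}
    for filename, text in headers.items():
        match = COMMIT_RE.search(text)
        commits[filename] = match.group(1) if match else None
    return commits

def check_consistency(headers):
    """Return the set of distinct spec commits; one entry means consistent."""
    return set(recorded_commits(headers).values())

# Example with made-up file names and hashes:
headers = {
    "tc-signal-send.c": "/* Generated from spec commit 1a2b3c4 */",
    "tc-task-restart.c": "/* Generated from spec commit 1a2b3c4 */",
}
assert check_consistency(headers) == {"1a2b3c4"}
```

If the check yields more than one commit, the generated sources were regenerated
at different rtems-central points, which is exactly the release-time mismatch
described above.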

I suggest we get together when we can. Contact me off-line and let's see if we
can arrange a time :)

> 
> [...]
>>>>> In an earlier version of the header, we had a link which you didn't like:
>>>>
>>>> If I need to look at the formatting rules the heading "Software Development
>>>> Management" is easy to see and then a click on "Coding Standards" gives me what
>>>> I am looking for.
>>>>
>>>> To generate these headers I click on "Software Requirements Engineering" and
>>>> then do I just guess until I find it in the "How To" section? I am actually
>>>> asking that this be sorted out so it is not left hanging and we are not left
>>>> guessing what to do. If it can be rearranged into something meaningful it
>>>> would help. :)
>>>
>>> Well, if you read the text in the header:
>>>
>>>   * For information on updating and regenerating please refer to the How-To
>>>   * section in the Software Requirements Engineering chapter of the
>>>   * RTEMS Software Engineering manual.  The manual is provided as a part of
>>>   * a release.  For development sources please refer to the online
>>>   * documentation at:
>>>   *
>>>   * https://docs.rtems.org
>>>
>>> You should read the How-to section or not?
>>
>> Yes I should have and thanks for pointing this out but I did not see this and
>> the manual as it stands did not help. I think it should change. It can be
>> performed post this patch set but I think the documentation would read better if
>> changed.
> 
> Could you please make a suggestion for how the text should be changed?
> 

I am not sure. I have not looked at rtems-central, the scripts or commands it
provides. I think at a minimum a section for each set of generated sources in
rtems.git needs to be covered.

>>>>>> What hardware have the validation tests been run on? Any tier 1 archs?
>>>>>
>>>>> I tested with the sparc/leon3 BSPs and the arm/realview_pbx_a9_qemu.
>>>>
>>>> Is the leon3 tested on hardware or simulation?
>>>>
>>>>> You need a
>>>>> full implementation of the new Interrupt Manager directives and a working
>>>>> Cache
>>>>> Manager implementation.
>>>>
>>>> Is this documented?
>>>>
>>>> I am sorry I do not know the list of archs and bsps that support the new
>>>> interrupt manager directives. Maybe it would be good to list them?
>>>
>>> All BSPs have at least a stub implementation of the new directives. The
>>> directives are tested in a dedicated test suite. You will notice failures in
>>> this test suite if the directives are not implemented.
>>
>> Are these expected failures?
> 
> Yes, they would be expected failures. I can add the test information. For which
> BSPs should I do this?

Any BSPs that we know the tests will not run on?
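For illustration, a per-BSP expected-failure list could separate known stub-BSP
failures from real regressions. The BSP and test suite names below are made up,
and this is a sketch of the idea, not the RTEMS Tester's actual mechanism:

```python
# Sketch: classify tester results against a per-BSP expected-failure list,
# so BSPs with only stub Interrupt Manager directives do not hide new bugs.
# All names here are illustrative assumptions.
EXPECTED_FAILURES = {
    "some/stub_bsp": {"ts-validation-intr.exe"},
}

def classify(bsp, results):
    """Split failing tests into unexpected failures and tolerated ones.

    results maps a test executable name to True (passed) / False (failed).
    """
    expected = EXPECTED_FAILURES.get(bsp, set())
    unexpected = [t for t, ok in results.items() if not ok and t not in expected]
    tolerated = [t for t, ok in results.items() if not ok and t in expected]
    return unexpected, tolerated
```

Only the "unexpected" list would then gate a run, which matches the idea of
recording expected failures for BSPs without a full interrupt manager
implementation.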

>>>>> I noticed an issue with the thread restart on aarch64/a53_lp64_qemu.
>>>>>
>>>>> On powerpc/psim there is an issue in one test case, due to:
>>>>>
>>>>> #define CPU_ALL_TASKS_ARE_FP CPU_HARDWARE_FP
>>>>
>>>> Sorry, I am not following what the issue is. Does this affect all PPC BSPs?
>>>
>>> Not all, the newer BSPs have no separate floating-point context.
>>
>> Which ones have the issue, the newer BSPs or the older ones?
> 
> The older ones.
> 
>>
>>> This is something which needs to be fixed in the specification.
>>
>> Of?
>>
>>> From my point of view this is just a minor issue.
>>
>> As in fixing these tests?
>>
>>>>> Another issue is that the tm27 interrupt must be independent of the clock
>>>>> driver
>>>>> interrupt.  This is not the case for powerpc/psim.
>>>>>
>>>>> There is definitely some work left to cover all edge cases. Some tests are
>>>>> quite
>>>>> complicated.
>>>>
>>>> Sure. I would like to understand the effect this has.
>>>
>>> Maybe I can rearrange the test cases so that the tm27 support is only used if no
>>> clock driver is needed. The tm27 support is used to run handlers in interrupt
>>> context.
>>
>> OK.
> 
> I will try to fix these issues, but this will delay the integration for a couple
> of weeks.

I am fine with tickets that track the work to be done.

>>>>>> Is there anything that interprets the new test output format? It looks like
>>>>>> lots
>>>>>> of great info but a little difficult to read.
>>>>>
>>>>> EDISOFT worked on a test report generator, however, it is not yet in a
>>>>> reviewable state.
>>>>
>>>> OK. I think something that handles this data would be good to have.
>>>
>>> Yes, maybe we could let a student work on this. In theory, this is not
>>> difficult. Read the report.yaml generated by the RTEMS Tester and convert it
>>> to a Python object representation. Then use this high-level representation to
>>> generate a report in format X.
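As a rough sketch of that pipeline, assuming the YAML has already been parsed
into a dict (the field names below are hypothetical, not the real report.yaml
schema):

```python
# Sketch of the suggested report pipeline: lift raw report data into
# Python objects, then render a report from the high-level representation.
# The "reports"/"name"/"state" keys are assumptions for illustration.
from dataclasses import dataclass

@dataclass
class TestResult:
    name: str
    state: str  # e.g. "passed" or "failed"

def load_results(report):
    """Convert the raw dict representation into TestResult objects."""
    return [TestResult(r["name"], r["state"]) for r in report.get("reports", [])]

def render_text(results):
    """Render the high-level representation as a plain-text summary."""
    failed = [r.name for r in results if r.state != "passed"]
    lines = [f"{r.name}: {r.state}" for r in results]
    lines.append(f"total: {len(results)}, failed: {len(failed)}")
    return "\n".join(lines)
```

A generator for "format X" would just be another renderer over the same
TestResult objects, which is why the object layer in the middle is useful.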
>>
>> Sounds good.
>>
>> And we need to get all the BSPs baselined with 0 failures so we know where we
>> stand as changes are being made.
> 
> All BSPs is a bit too much.
> 

Yes sorry I meant all tier 1 BSPs.
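Baselining the tier 1 BSPs could then amount to diffing each new run against a
stored known-good result set. This is only a sketch of the idea; the storage
format and test names are illustrative:

```python
# Sketch: compare a stored known-good baseline against a current run so
# regressions stand out as changes are made. Both maps go from a test
# executable name to True (passed) / False (failed).
def regressions(baseline, current):
    """Tests that passed in the baseline but fail now."""
    return sorted(t for t in baseline if baseline[t] and not current.get(t, False))

def progressions(baseline, current):
    """Tests that failed in the baseline but pass now."""
    return sorted(t for t in baseline if not baseline[t] and current.get(t, False))
```

With a zero-failure baseline per tier 1 BSP, any non-empty regression list is an
immediate signal, which is the point of getting those BSPs baselined first.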

Chris

