Placement of new unit and validation tests?
Chris Johns
chrisj at rtems.org
Fri Dec 13 12:47:30 UTC 2019
On 13/12/19 5:49 pm, Sebastian Huber wrote:
> On 12/12/2019 17:10, Chris Johns wrote:
>> On 13/12/19 12:56 am, Sebastian Huber wrote:
>>> I would like to write new test cases and programs using the RTEMS Test
>>> Framework:
>>>
>>> https://docs.rtems.org/branches/master/eng/test-framework.html
>>
>> I have no specific issue with the test framework as a way forward, however I
>> am not yet comfortable with how these new types of tests integrate into the
>> eco-system, nor about what happens to the existing tests.
>
> I think we should keep the existing tests as is. They are a big asset. You
> never know if a new test is really identical to an existing one. Maybe the
> unit-test-like tests should move to "testsuites/unit" if someone has enough
> spare time.
Yes, this is a good approach. It seems we need to maintain support in the
eco-system for both, which is what I was wanting to understand.
>> Currently the `rtems-test` command is the only supported way we have to run
>> the tests on simulators and hardware, and I am busy promoting and encouraging
>> users in our community to integrate with this tool and to use it to validate
>> their systems. I feel it is important we maintain this command and what it
>> does.
>
> Yes, this is also my point of view.
Great.
>> I am concerned that I am expected to integrate these new tests into the
>> `rtems-test` command. It would be nice if you could explain what needs to be
>> done in the eco-system to integrate these types of tests and how you think
>> this work will be done.
>
> The new tests still produce the begin/end of test markers, so for now nothing
> needs to change.
Perfect.
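(For reference: each test executable brackets its console output with a
begin/end marker pair roughly of this form, with the exact test name varying
per executable:

  *** BEGIN OF TEST EXAMPLE ***
  ... test case output ...
  *** END OF TEST EXAMPLE ***

The rtems-test command scans the console output for this pair to decide
whether an executable ran to completion.)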
> In the long run it would be good to enhance the test output
> parser so it can generate detailed test reports from that output.
OK
> For a proof of concept see:
>
> https://ftp.rtems.org/pub/rtems/people/sebh/test-report.pdf
This is really nice.
> This has to develop over time. We also need solutions for the test plans and
> the traceability to requirements. Ideally, we should be able to get a
> documentation set for a particular target in HTML which provides
> cross-references throughout the documentation, e.g. from requirement to test
> plan to test code to test report and vice versa.
This is fantastic and I look forward to us having this feature.
>>> There will be two sets of tests. One set is unit tests. The other set is
>>> validation tests. The validation tests check that the implementation meets
>>> its requirements. The unit tests don't need a link to requirements. They can
>>> be used to test internal APIs like the chains, red-black trees, etc.
>>> independently of requirements.
>>>
>>> I suggest to add "testsuites/unit" for unit tests and "testsuites/validation"
>>> for validation tests.
>>
>> I am not sure about these names but I cannot think of anything better. I see
>> below you will provide another layer of directories in these paths. That is good.
>>
>>> For validation tests we need a link from test cases to requirements. This needs
>>> to be discussed separately.
>>
>> Sure. It would be good if the eco-system aspects are discussed as well.
>>
>>> With the RTEMS Test Framework, test cases can be written independently of
>>> the test runner program defining the test suite. This could be used, for
>>> example, to run a test case in different contexts. Another option is to
>>> group test cases depending on the available RAM in the target system, e.g.
>>> bigger systems can execute more test cases at once. This gives rise to a
>>> different organization compared to the existing test programs.
>>
>> Can you please explain how rtems-test is to handle integrated tests?
>
> The rtems-test command can just grab the *.exe files and run them.
I was wondering whether there is a single BEGIN/END pair for all tests or one
per test case in the integrated executable? I am wondering how rtems-test
tracks the integrated tests.
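As an aside, a minimal sketch of the test case / test runner split, going by
the test framework documentation; the file split, the "example" names, and the
exact T_config field values are my assumptions:

  #include <rtems/test.h>

  /* tc-example-basic.c: a test case; it knows nothing about the runner and
   * registers itself via a linker set when linked into a test executable. */
  T_TEST_CASE(example_basic)
  {
    T_true(true, "a real test case would exercise the API under test");
    T_eq_int(1 + 1, 2);
  }

  /* ts-example.c: the runner defining the test suite. */
  static char buffer[512];

  static const T_config config = {
    .name = "example",
    .buf = buffer,
    .buf_size = sizeof(buffer),
    .putchar = T_putchar_default,
    .verbosity = T_VERBOSE,
    .now = T_now_clock
  };

  void run_test_suite(void)
  {
    T_main(&config); /* runs every registered test case */
  }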
>> How do you manage possible cross-talk between tests?
>
> You can plug in checkers which ensure that a particular piece of system state
> did not change between the start and end of a test case:
>
> https://docs.rtems.org/branches/master/eng/test-framework.html#test-case-resource-accounting
OK.
> https://git.rtems.org/rtems/tree/testsuites/libtests/ttest01/init.c#n253
>
> This helps to ensure that test cases have no persistent effects influencing test
> cases which are run after them.
>
> There are test fixtures which help you to setup and tear down a test case:
>
> https://docs.rtems.org/branches/master/eng/test-framework.html#test-fixture
>
> It is currently not implemented, but it would be possible to randomly reorder
> the test cases and run the test suite again. You can do this multiple times.
> If the test suite results change, then some test cases have persistent
> effects.
Nice.
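To make the fixture part concrete, here is a sketch along the lines of the
framework documentation; the integer context is a stand-in for real per-test
state, and the names are hypothetical:

  #include <rtems/test.h>

  static int fixture_state;

  static void setup(void *ctx)
  {
    int *state = ctx;
    *state = 42; /* establish the state the test case expects */
  }

  static void teardown(void *ctx)
  {
    int *state = ctx;
    *state = 0; /* undo persistent effects before the next test case runs */
  }

  static const T_fixture fixture = {
    .setup = setup,
    .stop = NULL,
    .teardown = teardown,
    .initial_context = &fixture_state
  };

  T_TEST_CASE_FIXTURE(example_fixture, &fixture)
  {
    int *state = T_fixture_context();
    T_eq_int(*state, 42);
  }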
>> I think the approach is sound, as the time to load a test is often much
>> longer than the time the test takes to run.
>>
>>> I suggest adding topic directories to "testsuites/unit" and
>>> "testsuites/validation", e.g. "testsuites/unit/chains". Test case files use a
>>> naming scheme of "tc-${topic}-${name}.c" and are placed in the corresponding
>>> topic directory. Test suite files use a naming scheme of "ts-${name}.c" and are
>>> placed in "testsuites/unit" or "testsuites/validation".
>>
>> Is ${name} the name of a C file selected by the developer? Can there be more
>> than one ${name}?
>
> Yes, the ${name} is selected by the developer. You can have as many names as you
> want.
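To illustrate the scheme with hypothetical names, a chains unit test area
could then look like this:

  testsuites/unit/ts-default.c               <- test suite, ts-${name}.c
  testsuites/unit/chains/tc-chains-append.c  <- test case, tc-${topic}-${name}.c
  testsuites/unit/chains/tc-chains-insert.c  <- test case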
Thanks
Chris