[PATCH 2/3] Add the T Test Framework

Chris Johns chrisj at rtems.org
Wed Mar 20 00:44:59 UTC 2019


On 19/3/19 7:10 pm, Sebastian Huber wrote:
> Hello Chris,
> 
> On 19/03/2019 06:17, Chris Johns wrote:
>> Hi,
>>
>> I am not sure how this fits in with what we have so before I can support any
>> changes of this scale and direction I need to understand:
>>
>> 1. Are all existing tests to be converted to this new framework?
> 
> the main purpose for the new framework is to write unit, integration and
> validation tests for the RTEMS SMP pre-qualification activity. 

I do not get the "RTEMS SMP" naming and it is confusing to me. There is RTEMS,
and there is a feature called SMP that you can enable along with other features
of RTEMS. Your project is a qualification or pre-qualification effort for RTEMS
with a focus on SMP. I would like to see us stop using "RTEMS SMP", because its
use would mean a qualification effort for a non-SMP kernel would not align with
the results your efforts are building. I have also mentioned this to Edsoft in
another thread, which I think has wandered off list for some reason. I am
concerned that the excessive use of "RTEMS SMP" to mean this project will taint
the search engines in a way we will not want. I would prefer we all start to
deal with this topic as RTEMS Pre-Qualification, or some shortened version, and
if you want you can add "for SMP" when discussing the specific focus for ESA.
Can we please start to do this?

> I am not sure what we do with the existing test suite. 

Then I am not sure what to do with this patch and the proposed path. The two
are linked and always have been.

> One option is to pick up existing tests and convert them. 

Have you evaluated what is required to update and clean up the existing tests?

Is there an understanding of which existing tests could be updated to meet your
project goals?

Is there an understanding of which existing tests may cover the requirements
you develop? What happens to the existing tests that are not covered by
requirements because they do not overlap with the profile you are tasked to
complete?

How will coverage be handled? Currently there are coverage report levels that
need to be maintained or accounted for. I cannot see how one set of tests for
coverage analysis and another to match requirements can be made to work.

What happens if there is a conflict in results for the parts that are
duplicated?

I am confused by what I see as a pretty basic conflict: either you need to
accept some of the existing tests, or you will need to repeat what is in some
of those tests. If you are required to accept the existing tests as is, then I
am unsure what is being offered here; and if you need to repeat pieces or
fragments of tests, then I would be concerned, as the testsuite would become
fragmented with the repeated pieces.

What I am starting to see here is over 600 existing tests that may not be
visible to your "integration and validation tests" artifact generation
process, and I think this is an issue that needs to be resolved before we can
move forward. I have no technical issue with what is being offered here; I am,
however, concerned about the long term project issues that arise. I cannot let
this change in and then face the possibility of tests appearing where the
project needs to review each one to determine what overlaps and conflicts with
the existing testsuite.

> How this is organized should be discussed in a separate thread

I am not driving this. The parts are linked as I stated above. I cannot accept
these changes in pieces without understanding and accepting the whole concept.

>>
>> 2. How does this affect the existing eco-system support such as the
>> `rtems-test` command and the documentation around that command?
> 
> The rtems-test command just looks at the begin-of-test and end-of-test
> messages. You can still use it with the new framework; see the test output of
> the example test:
> 
> *** BEGIN OF TEST TTEST 1 ***
> *** TEST VERSION: 5.0.0.286e9354e008b08983e6390a68f8ecc5071de069
> *** TEST STATE: EXPECTED-PASS
> *** TEST BUILD: RTEMS_DEBUG RTEMS_NETWORKING RTEMS_POSIX_API RTEMS_SMP
> *** TEST TOOLS: 7.4.0 20181206 (RTEMS 5, RSB
> e0aec65182449a4e22b820e773087636edaf5b32, Newlib 1d35a003f)
> A:ttest01
> S:Platform:RTEMS
> S:Compiler:7.4.0 20181206 (RTEMS 5, RSB
> e0aec65182449a4e22b820e773087636edaf5b32, Newlib 1d35a003f)
> S:Version:5.0.0.286e9354e008b08983e6390a68f8ecc5071de069
> S:BSP:erc32
> S:RTEMS_DEBUG:1
> S:RTEMS_MULTIPROCESSING:0
> S:RTEMS_POSIX_API:1
> S:RTEMS_PROFILING:0
> S:RTEMS_SMP:1
> B:example
> P:0:0:UI1:test-example.c:5
> F:1:0:UI1:test-example.c:6:test fails
> F:*:0:UI1:test-example.c:8:quiet test fails
> P:2:0:UI1:test-example.c:9
> F:3:0:UI1:test-example.c:10:step test fails
> F:4:0:UI1:test-example.c:11:this is a format string
> E:example:N:5:F:4:D:0.001000
> Z:ttest01:C:1:N:5:F:4:D:0.003000
> 
> *** END OF TEST TTEST 1 ***
> 
> What you get in addition is a structured output in the middle. This can be
> used to generate more detailed reports. This is another topic. The test
> framework should just enable you to easily parse the test output and do
> something with it.

This is great and I welcome it. I was thinking we needed a way to capture the
per-test output as a sort of screen-capture update for the existing tests.
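
As a point of reference, the test source that produces the B:/P:/F:/E: records
above should look roughly like the following. This is my reconstruction from
the output, with comments added, so the line numbers in the records will not
match; the actual test-example.c in the patch may differ:

#include <t.h>

T_TEST_CASE(example)
{
  /* Each check emits a P: (pass) or F: (fail) record carrying what looks
     like the step number, processor, active task, and source location. */
  T_true(true, "test passes");
  T_true(false, "test fails");

  /* Quiet checks log nothing on success and '*' as the step on failure. */
  T_quiet_true(true, "quiet test passes");
  T_quiet_true(false, "quiet test fails");

  /* Step checks also verify the planned step number. */
  T_step_true(2, true, "step test passes");
  T_step_true(3, false, "step test fails");

  /* Messages are printf() style format strings. */
  T_assert_false(true, "this is a %s", "format string");
}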

>> 3. What does 'T' in THE_T_TEST_FRAMEWORK_H stand for? I prefer we prefix RTEMS_
>> where it makes sense.
> 
> The 'T' is just a random name which associates with testing. I searched a bit
> for <t.h> and a T_ prefix and didn't find an existing project. So, there
> should be no name conflicts. It is short, so this is good for typing. 

A number of pieces I have recently added can be standalone, and I have added
RTEMS as a prefix to help get the project's name out there.

> The T Test
> Framework is portable. It also runs on Linux, FreeBSD, and MSYS2. I will
> probably also add it as a stand-alone project on GitHub.

Which would be the master implementation?

>> 4. I see in another email you posted a Sphinx generated report. What are
>> those tests, what is used to capture and create that report, and will this
>> in time include all existing tests?
> 
> I wrote a very simple and stupid Python script to extract this information from
> the test output just to evaluate if the output format makes sense. The output is
> from some example tests I used to test the framework. 

I think the idea of generating ReST format to create documents is really nice.
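
The colon-separated records should make that straightforward to do well. As a
rough sketch, assuming only the record layout shown in the example output
above (none of this is from the patch), a few lines of C can recover the
per-test-case totals from the E: line:

#include <stdio.h>

/* Sketch: parse a test case end record such as
   "E:example:N:5:F:4:D:0.001000". */
static int parse_case_end(const char *line, char name[64], unsigned *steps,
  unsigned *failures, double *duration)
{
  return sscanf(line, "E:%63[^:]:N:%u:F:%u:D:%lf", name, steps, failures,
    duration) == 4;
}

int main(void)
{
  char name[64];
  unsigned steps;
  unsigned failures;
  double duration;

  if (parse_case_end("E:example:N:5:F:4:D:0.001000", name, &steps,
      &failures, &duration)) {
    printf("%s: %u steps, %u failures, %fs\n", name, steps, failures,
      duration);
  }

  return 0;
}

The same line-oriented pass over a captured log would handle the P: and F:
records, so a report generator should not need anything more elaborate.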

> For the pre-qualification
> we need test plans, tests, test reports, traceability to requirements and test
> verification. 

Yes, this is understood, and it is why I am including the existing testsuite
tests in this discussion. There are complications around this area that need
to be resolved.

> The framework enables you to efficiently write test code and to generate
> easy-to-parse output.

I wish it could be used on all tests.
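
For what it is worth, the simple conversions look mechanical. A hypothetical
sketch, not taken from the patch or from any existing test, of what a classic
rtems_test_assert() style check could become:

#include <rtems.h>
#include <t.h>

T_TEST_CASE(task_create)
{
  rtems_status_code status;
  rtems_id id;

  status = rtems_task_create(rtems_build_name('T', 'E', 'S', 'T'), 1,
    RTEMS_MINIMUM_STACK_SIZE, RTEMS_DEFAULT_MODES, RTEMS_DEFAULT_ATTRIBUTES,
    &id);

  /* Previously rtems_test_assert(status == RTEMS_SUCCESSFUL); the T_
     version also records an F: line with the file, line, and message. */
  T_assert_true(status == RTEMS_SUCCESSFUL, "task create: %i", (int) status);

  T_true(rtems_task_delete(id) == RTEMS_SUCCESSFUL, "task delete");
}

Conversions like this still leave the coverage and traceability questions
above open, but the mechanics do not look like the hard part.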

Chris

