[PATCH 2/3] Add the T Test Framework

Chris Johns chrisj at rtems.org
Wed Mar 20 22:18:00 UTC 2019


On 20/3/19 7:50 pm, Sebastian Huber wrote:
> On 20/03/2019 01:44, Chris Johns wrote:
>> On 19/3/19 7:10 pm, Sebastian Huber wrote:
>>> Hello Chris,
>>>
>>> On 19/03/2019 06:17, Chris Johns wrote:
>>>> Hi,
>>>>
>>>> I am not sure how this fits in with what we have, so before I can support any
>>>> changes of this scale and direction I need to understand:
>>>>
>>>> 1. Are all existing tests to be converted to this new framework?
>>> The main purpose of the new framework is to write unit, integration, and
>>> validation tests for the RTEMS SMP pre-qualification activity.
>> I do not get the "RTEMS SMP" thing and it is confusing to me. There is RTEMS and
>> there is a feature called SMP that you can enable along with other features of
>> RTEMS. Your project is a qualification or pre-qualification effort for RTEMS
>> with a focus on SMP. I would like to see us stop using "RTEMS SMP", as its use
>> would mean a qual effort for a non-SMP kernel would not align with the results
>> your efforts are building. I have also mentioned this to Edsoft in another
>> thread, which I think has wandered off list for some reason. I have a concern
>> that the excessive use of "RTEMS SMP" to mean this project will taint the
>> search engines in a way we will not want. I would prefer we all start to deal
>> with this topic as RTEMS Pre-Qualification or some shortened version, and if
>> you want you can add "for SMP" when discussing the specific focus for ESA. Can
>> we please start to do this?
> 
> Sorry, this is just a habit developed over two years or so. ESA has some RTEMS
> variants and we needed a name for the new activity. I will try to omit the SMP
> in the future.

Thank you.

>>> I am not sure what we do with the existing test suite.
>> Then I am not sure what to do with this patch and the proposed path. Both are
>> linked and have always been so.
> 
> I wrote a couple of tests in the past and I think this new test framework would
> have made this job easier. Actually, I regret that I didn't invest the time to
> write it 10 years ago, since I am pretty sure it would have paid off quite
> quickly.

Yes, I agree.

> Even without this pre-qualification activity it makes sense to think
> about how test code could be written more efficiently with better output.

Agreed, but I also understand a project like this has limits, and it is these
areas I am exploring to see what is left for the project to manage.

>>> One option is to pick up existing tests and convert them.
>> Have you evaluated what is required to update and clean up the existing tests?
>>
>> Is there an understanding of which existing tests could be updated to meet your
>> project goals?
> 
> There are a lot of open questions. The feature set we want to pre-qualify (RTEMS
> SMP space profile) is not yet fully defined and depends a bit on the survey
> outcome. 

OK.

> We don't have a budget to pre-qualify everything which is in the RTEMS
> code base. Definitely out of scope are file systems and the network stack.

That makes sense.

>> Is there an understanding of what existing tests may cover requirements you
>> develop? What happens to the existing tests that are not covered by requirements
>> because they do not overlap with the profile you are tasked to complete?
>>
>> How will coverage be handled? Currently there are coverage report levels that
>> need to be maintained or accounted for. I cannot see how one set of tests for
>> coverage analysis and another to match requirements can be made to work.
> 
> These are all important questions, but I think they are unrelated to the test
> framework itself.

That depends on how it is used.

> We should do this incrementally. At some point we have to write tests, e.g. for
> the Classic semaphores. We could then have a look at the existing tests and
> step by step convert and refactor them to use the new framework along with test
> plans. For each patch in the tests the impact on the coverage needs to be
> evaluated. We definitely should not make things worse in terms of coverage.

Fantastic to hear and thank you. This makes sense to me.

> For code coverage in ECSS you can use all sorts of tests (unit, integration,
> validation, whatever).

OK.

>> What happens if there is conflict in results for parts that are duplicated?
>>
>> I am confused by what I see as a pretty basic conflict: either you need to
>> accept some of the existing tests, or you will need to repeat what is in some
>> of those tests. If you are required to accept the existing tests as is then I
>> am unsure what is being offered here, and if you need to repeat pieces or
>> fragments of tests then I would be concerned, as the testsuite would become
>> fragmented with the repeated pieces.
> 
> I don't want test suite fragmentation resulting from copy and paste with
> modifications. However, we should be careful not to reduce test coverage
> through a conversion.

Agreed.

> After the activity there will still be a mix of tests written with the new
> framework and existing tests, e.g. we will likely not touch the file system
> tests.

I am sure. What matters is reducing what is left and managing it as we go.

>> What I am starting to see here is over 600 existing tests that may not be
>> visible to your "integration and validation tests" artifact generation
>> process, and I think this is an issue that needs to be resolved to move
>> forward. I have no technical issue with what is being offered here; I am
>> however concerned about the long term project issues that arise. I cannot
>> allow this change in and then face the possibility of tests appearing where
>> the project needs to review each one to determine what overlaps and conflicts
>> with the existing testsuite.
> 
> Yes, reviewing every test conversion patch at the detail level would be
> infeasible from my point of view. A typical patch will probably remove some
> test code in the existing test suite and add new test code somewhere else.
> There will probably be more than a hundred of these patches. We should make
> sure that the patches are self-contained and as easy to review as possible.
> Quality goals should be defined, e.g. code coverage must not get worse through
> a test code patch.

Thank you for this. It is sensible and practical.

>>> How this is organized should be discussed in a separate thread.
>> I am not driving this. The parts are linked as I stated above. I cannot accept
>> these changes in pieces without understanding and accepting the whole concept.
> 
> I think it is good to integrate things as early as possible. A mega patch after
> several months which covers test framework, test conversions, code changes,
> documentation changes, requirements, etc. is probably not helpful.

I agree. This last post has given me a view of what is coming as a result of
this change. This has been important.

>>>> 2. How does this affect the existing ecosystem support, such as the
>>>> `rtems-test` command and the documentation around that command?
>>> The rtems-test command just looks for the begin and end of test messages. You
>>> can still use it with the new framework; see the test output of the example
>>> test:
>>>
>>> *** BEGIN OF TEST TTEST 1 ***
>>> *** TEST VERSION: 5.0.0.286e9354e008b08983e6390a68f8ecc5071de069
>>> *** TEST STATE: EXPECTED-PASS
>>> *** TEST BUILD: RTEMS_DEBUG RTEMS_NETWORKING RTEMS_POSIX_API RTEMS_SMP
>>> *** TEST TOOLS: 7.4.0 20181206 (RTEMS 5, RSB
>>> e0aec65182449a4e22b820e773087636edaf5b32, Newlib 1d35a003f)
>>> A:ttest01
>>> S:Platform:RTEMS
>>> S:Compiler:7.4.0 20181206 (RTEMS 5, RSB
>>> e0aec65182449a4e22b820e773087636edaf5b32, Newlib 1d35a003f)
>>> S:Version:5.0.0.286e9354e008b08983e6390a68f8ecc5071de069
>>> S:BSP:erc32
>>> S:RTEMS_DEBUG:1
>>> S:RTEMS_MULTIPROCESSING:0
>>> S:RTEMS_POSIX_API:1
>>> S:RTEMS_PROFILING:0
>>> S:RTEMS_SMP:1
>>> B:example
>>> P:0:0:UI1:test-example.c:5
>>> F:1:0:UI1:test-example.c:6:test fails
>>> F:*:0:UI1:test-example.c:8:quiet test fails
>>> P:2:0:UI1:test-example.c:9
>>> F:3:0:UI1:test-example.c:10:step test fails
>>> F:4:0:UI1:test-example.c:11:this is a format string
>>> E:example:N:5:F:4:D:0.001000
>>> Z:ttest01:C:1:N:5:F:4:D:0.003000
>>>
>>> *** END OF TEST TTEST 1 ***
>>>
>>> What you get in addition is a structured output in the middle. This can be used
>>> to generate more detailed reports. This is another topic. The test framework
>>> should just enable you to easily parse the test output and do something with it.
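
As an aside, the structured block above decodes quite naturally. A
test-example.c along the following lines would produce it; this is my
reconstruction from the output, so the exact check macros and messages are
assumptions:

  #include <t.h>

  T_TEST_CASE(example)
  {
      T_true(true, "test passes");
      T_true(false, "test fails");
      T_quiet_true(true, "quiet test passes");
      T_quiet_true(false, "quiet test fails");
      T_step_true(2, true, "step test passes");
      T_step_true(3, false, "step test fails");
      T_true(false, "this is a format %s", "string");
  }

If the file starts at the include, the source lines even match the
test-example.c:5 to test-example.c:11 references in the output, and the quiet
check is the one reported as F:* without a check number.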
>> This is great and I welcome it. I was thinking we needed a way to capture the
>> per-test output as a sort of screen capture update for the existing tests.
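
To show how little is needed to consume those records, something along these
lines is enough to pick apart a summary line (a throwaway sketch; the
E:<name>:N:<checks>:F:<failures>:D:<seconds> layout is assumed from the
example output):

  #include <stdio.h>

  /* Decode one test case summary record, e.g. "E:example:N:5:F:4:D:0.001000".
   * The field layout is an assumption based on the example output above. */
  int main(void)
  {
      const char *line = "E:example:N:5:F:4:D:0.001000";
      char name[32];
      unsigned checks;
      unsigned failures;
      double seconds;

      if (sscanf(line, "E:%31[^:]:N:%u:F:%u:D:%lf",
              name, &checks, &failures, &seconds) == 4) {
          printf("%s: %u checks, %u failures, %.6fs\n",
              name, checks, failures, seconds);
      }

      return 0;
  }

A real report generator would read the whole log and key off the leading
record letter, but the format is clearly trivial to machine read.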
>>
>>>> 3. What does 'T' in THE_T_TEST_FRAMEWORK_H stand for? I prefer we prefix RTEMS_
>>>> where it makes sense.
>>> The 'T' is just a random name which associates with testing. I searched a bit
>>> for <t.h> and a T_ prefix and didn't find an existing project. So, there should
>>> be no name conflicts. It is short, so this is good for typing.
>> A number of pieces I have recently added can be standalone and I have added
>> RTEMS as a prefix to help get the project's name out there.
> 
> Writing test code is easier if the function names are short and have a specific
> prefix (this helps auto-completion in code editors; you just have to type
> T_... and not rtems_...). 

Sure. Can we call this the RTEMS T Test Framework and drop the initial "The"?
At least the docs will have some reference. :) :)

> A specific prefix makes it easier to grasp what is
> test framework code and what are functions under test, e.g.
> 
> T_TEST_CASE(timer)
> {
>     rtems_status_code sc;
>     rtems_id id;
>     rtems_id task;
>     rtems_event_set events;
> 
>     T_plan(8);
>     T_step_true(0, T_is_runner(), "test body is not runner");
> 
>     sc = rtems_timer_create(rtems_build_name('T', 'E', 'S', 'T'), &id);
>     T_step_rsc_success(1, sc);
> 
>     task = rtems_task_self();
>     sc = rtems_timer_fire_after(id, 1, wakeup, &task);
>     T_step_rsc_success(2, sc);
> 
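>     /* steps 3 and 4 presumably run in wakeup (not shown) */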
>     events = 0;
>     sc = rtems_event_receive(RTEMS_EVENT_0, RTEMS_WAIT | RTEMS_EVENT_ALL,
>         RTEMS_NO_TIMEOUT, &events);
>     T_step_rsc_success(5, sc);
>     T_step_eq_u32(6, events, RTEMS_EVENT_0);
> 
>     sc = rtems_timer_delete(id);
>     T_step_rsc_success(7, sc);
> }
> 
> vs.
> 
> T_TEST_CASE(timer)
> {
>     rtems_status_code sc;
>     rtems_id id;
>     rtems_id task;
>     rtems_event_set events;
> 
>     rtems_plan(8);
>     rtems_step_true(0, rtems_is_runner(), "test body is not runner");
> 
>     sc = rtems_timer_create(rtems_build_name('T', 'E', 'S', 'T'), &id);
>     rtems_step_rsc_success(1, sc);
> 
>     task = rtems_task_self();
>     sc = rtems_timer_fire_after(id, 1, wakeup, &task);
>     rtems_step_rsc_success(2, sc);
> 
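>     /* steps 3 and 4 presumably run in wakeup (not shown) */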
>     events = 0;
>     sc = rtems_event_receive(RTEMS_EVENT_0, RTEMS_WAIT | RTEMS_EVENT_ALL,
>         RTEMS_NO_TIMEOUT, &events);
>     rtems_step_rsc_success(5, sc);
>     rtems_step_eq_u32(6, events, RTEMS_EVENT_0);
> 
>     sc = rtems_timer_delete(id);
>     rtems_step_rsc_success(7, sc);
> }
> 
> I think promoting the name RTEMS is not as important.
> 
>>
>>> The T Test
>>> Framework is portable. It also runs on Linux, FreeBSD, and MSYS2. I will
>>> probably also add it as a standalone project on GitHub.
>> Which would be the master implementation?
> 
> The core code is portable. It needs a bit of work to keep two repositories in
> synchronization.

OK.

> 
>>
>>>> 4. I see in another email you posted a Sphinx generated report. What are
>>>> those tests, what is used to capture and create that report, and will this
>>>> in time include all existing tests?
>>> I wrote a very simple and stupid Python script to extract this information from
>>> the test output just to evaluate if the output format makes sense. The output is
>>> from some example tests I used to test the framework.
>> I think the idea of generating ReST format to create documents is really nice.
>>
>>> For the pre-qualification
>>> we need test plans, tests, test reports, traceability to requirements and test
>>> verification.
>> Yes, this is understood and is why I am including the existing testsuite
>> tests. There are complications around this area that need to be resolved.
>>
>>> The framework enables you to efficiently write test code and
>>> generate easy-to-parse output.
>> I wish it could be used on all tests.
> 
> At some point you have to get started.
> 

Yes, this is true.

Chris

