SMP related question

Chris Johns chrisj at rtems.org
Thu Aug 23 06:49:10 UTC 2018


On 23/08/2018 15:26, Sebastian Huber wrote:
> On 23/08/18 03:05, Chris Johns wrote:
>> On 22/08/2018 21:07, Sebastian Huber wrote:
>>> I think a long term solution is to use a new test framework which produces
>>> machine readable output.
>>>
>>> https://devel.rtems.org/ticket/3199
>>>
>>> The only machine readable output is currently the begin/end of test message.
>> I am not sure this is a high priority task. At one point, a decade or two ago, I
>> felt it was really important we check the output so it matched the screen dumps
>> and considered annotating the screen dumps to indicate the types of data that
>> can vary. When implementing 'rtems-test' it became clear to me that the only
>> thing that matters is the test's final result and matching that to the
>> expected result.
> 
> From my point of view the existing test suite with the begin/end of test
> messages is fine.
> 
>>
>> Machine readable output complicates working with tests. I am also concerned such
>> a change may increase the size of all tests.
>>
>> A test that returns a PASS when there is a failure is a bug in that test. Any
>> internally generated test output is a "don't care". This of course excludes the
>> test markers that surround each test.
>>
>> I see the ability to analyse a test's result to determine if it is working as
>> a separate problem. We have tests that are too verbose and tests that print
>> nothing. Neither situation overly bothers me.
>>
>> Joel has said in the past that what is more important is creating a complete
>> list of what each test is testing and maintaining that data. I agree. I would
>> add that an up to date 'expected fail' list for each arch would be good to have.
> 
> A machine readable test output helps to prove that a test case did actually run.
> Otherwise you need some code coverage information to show this.

We have coverage analysis tools. They are a work in progress, but we have them,
and the last GSoC plus my DWARF effort have moved things along.

> For example, let's assume you have a table of 3 test cases
> 
> test_case tests[] = { 1, 2, 3 };
> 
> and then a loop
> 
> for (i = 0; i < 3; ++i)
> 
> to execute the test cases. Someone adds test case 4 to the table and thinks he
> is done. He forgot to change the loop statement. You still get the end of test
> message, but test case 4 was not executed. With machine readable test output,
> and someone who knows which test cases should execute, this would get noticed.

Is this just a bug in the test? I assume the machine readable output needs to
be verified against something else, so are we adding new places where bugs can
appear and more places where things have to be right?

I feel this is about the quality of the tests more than any framework. Adding a
new framework to repackage the same invalid data does not change anything.

> For the "someone who knows which tests should execute" we need a test plan. The
> test plan could be added to the test sources as special comments. We could use
> these comments to generate a test plan document and other things.
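
For what it is worth, such a comment could look something like this (the tag
names are made up purely to illustrate the idea, not an agreed format):

  /*
   * @test-plan
   * @brief Verify the scheduler selects the highest priority ready task.
   * @test-case test_case_1
   * @test-case test_case_2
   */

A tool could scan the test sources for these blocks and generate the test plan
document from them.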

Sure, but this is heading off topic. I would prefer this discussion move to
somewhere that handles certification or verification. I am more than happy to
hear about a complete solution, and by this I mean a proposal that is complete,
open and end to end. The use of machine readable output is a valid part of that
discussion.

Chris
