Odd Qualification Related Documentation
Sebastian Huber
sebastian.huber at embedded-brains.de
Thu Aug 20 14:41:14 UTC 2020
Hello Joel,
On 20/08/2020 15:12, Joel Sherrill wrote:
> Hi
>
> I am in the tail end of a customer's Level C (headed to B) project and
> we are tidying up documents that I don't recall having had a lot of
> discussion about. With all the work on the Software Engineering Guide and
> automated generation, most of this is likely already addressed to some
> extent, but we may want to review what we have or have planned versus
> expectations.
>
> Is there a plan, with all the generation of code and documentation, to
> generate a Software Test Plan? I would suspect that a traditional
> Software Test Plan fits nicely as a chapter inside the Engineering Guide.
>
> What about a Software Test Description? Usually a Software Test
> Description is more painful as it focuses on what's in each test
> executable and, sometimes, each specific unit test.
I am not sure what a Software Test Plan and a Software Test Description are in
your case, but I would like to specify the validation tests using the
following specification item types:
https://docs.rtems.org/branches/master/eng/req/items.html#spectypetestcaseitemtype
https://docs.rtems.org/branches/master/eng/req/items.html#spectypeactionrequirementitemtype
The validation tests are organized in test suites:
https://docs.rtems.org/branches/master/eng/req/items.html#spectypetestsuiteitemtype
From these items, test code with Doxygen markup is generated, for example:
https://git.rtems.org/sebh/rtems.git/tree/testsuites/validation/tr-event-constant.c?h=qual-next#n54
Here is an example of the test code generated for an action requirement:
https://git.rtems.org/sebh/rtems.git/tree/testsuites/validation/tr-event-send-receive.c?h=qual-next
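To make the shape of this generated code more tangible, here is a heavily
simplified, hand-written sketch; the test case name, the Doxygen group, and
the assumption that the framework header is <rtems/test.h> are mine, the real
generated files are behind the links above:

#include <rtems.h>
#include <rtems/test.h>

/**
 * @defgroup RTEMSTestCaseExampleEventValConstant spec:/example/event/val/constant
 *
 * @brief Hypothetical test case which validates an event constant.
 */

/**
 * @brief Check that RTEMS_EVENT_0 has the expected value.
 */
T_TEST_CASE( ExampleEventValConstant )
{
  /* One planned test step follows. */
  T_plan( 1 );

  /* Step 0: the event constant shall have the power of two value 0x00000001. */
  T_step_eq_u32( 0, RTEMS_EVENT_0, 0x00000001 );
}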
We also want to use this information to generate ECSS-specific documents.
>
> What about Software Test Reports? The rtems-tester output is the
> starting point but I don't know how that compares against the
> qualification expectations for this report.
At least for ECSS, the report generated by rtems-tester is not detailed
enough. To address this issue, I implemented the RTEMS Test Framework:
https://docs.rtems.org/branches/master/eng/test-framework.html#the-rtems-test-framework
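For readers who do not know the framework yet, a minimal test case looks
roughly like the following sketch (assuming the header is installed as
<rtems/test.h>; the manual linked above has the authoritative API):

#include <rtems/test.h>

T_TEST_CASE( ExampleChecks )
{
  int a = 0;
  int b = 1;

  /* Non-fatal check: the test case continues even if it fails. */
  T_eq_int( a, 0 );

  /* Fatal check: the test case stops here if it fails. */
  T_assert_eq_int( b, 1 );
}

The framework reports the test steps together with the source file and line
which produced them, which is what makes the detailed output verification
described below possible.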
The goal is to verify that the test output matches the test specification. For
example, in the test specification you have something like this:
- action: |
    options = 0;
    options |= RTEMS_EVENT_ANY;
    options |= RTEMS_NO_WAIT;
  checks:
  - check: |
      T_step_eq_int( ${step}, PopCount( options ), 2 );
    description: |
      Check that the count of set bits in the calculated value is equal to
      the count of non-default option constants. Since each non-default
      option constant is a power of two, this proves that each constant has
      a unique value.
    links:
    - role: validation
      uid: ../unique
    - role: validation
      uid: /if/rtems/options/event-any
    - role: validation
      uid: /if/rtems/options/no-wait
  description: |
    Calculate the bitwise or of all non-default option constants.
  links: []
Each check can have links to the requirements it validates. The test plan
steps are automatically calculated by the code generator via the ${step}
substitution. The program which verifies the test output against the
specification can check that the reported test steps match the specified
steps. It can also check that each step was produced by the expected source
file and line.
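As an illustration of the ${step} substitution, the code generated for the
check above could end up looking roughly like this sketch; the PopCount()
helper body and the surrounding function are hypothetical stand-ins for the
real generated and support code:

#include <rtems.h>
#include <rtems/test.h>

/* Hypothetical helper which counts the bits set in an option set. */
static int PopCount( rtems_option options )
{
  int count;

  count = 0;

  while ( options != 0 ) {
    count += (int) ( options & 1U );
    options >>= 1;
  }

  return count;
}

static void ExampleActionWithChecks( void )
{
  rtems_option options;

  /* Action: calculate the bitwise or of all non-default option constants. */
  options = 0;
  options |= RTEMS_EVENT_ANY;
  options |= RTEMS_NO_WAIT;

  /*
   * Check: the code generator resolved ${step} to the literal step index 0.
   * A verification tool can therefore match this step in the test output
   * against the specification and against the source file and line which
   * produced it.
   */
  T_step_eq_int( 0, PopCount( options ), 2 );
}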
>
> Then there is the Software Version Description that goes along with a
> release. We would need to compare that also.
Yes, we need this also for ECSS. I have no detailed plan for this yet.
>
> Just curious, since these are things we do but may not have even laid out
> plans for what the goal should be. If we need to seek outside
> guidance again, it would be good to know that and prepare a
> plan/checklist for what is missing.
It would be good if you could review the work. We should make sure that
it is good enough to also produce the documents required by NASA or
whatever.