Using model checking to do test generation for RTEMS

Andrew Butterfield Andrew.Butterfield at scss.tcd.ie
Mon Sep 14 15:18:41 UTC 2020


Dear Chris,


> On 7 Sep 2020, at 04:21, Chris Johns <chrisj at rtems.org> wrote:
> 
>  ... stuff deleted ...

>>> How is the log from the test output used to validate the model?
>> 
>> The test output shows what the test did, so it can be used by someone
>> familiar with the requirements to judge whether the test's expectations of
>> correct behaviour are themselves correct. So, yes, it can help in that regard.
> 
> Just so I understand, if the model and the API do not match, the test will report
> a failure?

Yes - I'd expect early test failures to expose hidden assumptions in our models
rather than identify true bugs. I would expect some developer feedback to assist
in fixing these. Hopefully we will converge on the truth fairly quickly!

> 
>>> I would like to see this work performed on a tier-1 architecture [1]. At this
>>> point in time SPARC (and LEON) is not tier-1 because there are no test results
>>> from hardware posted to build@rtems.org. This leaves the ESA pre-qual project
>>> with some choices. You could select a suitable target such as a BeagleBone Black
>>> and post hardware test results, or encourage other members of the pre-qual project
>>> to run the testsuite on a LEON and post the results. We could then move SPARC and
>>> LEON to tier-1, something I personally would love to see.
>> 
>> In one sense the ESA project has no choice - we have to work with
>> leon3/gr712rc/gr740.
> 
> That is fine. We understand that, in flight software development, few people ever
> see the real hardware. We are relaxed about how an architecture can reach tier-1;
> we need only one device from a family to run the tests, with the results being
> published. My desire is to automatically monitor the test results and age out
> older results.
> 
>> I should point out that a test generated by us has been run (standalone) on
>> gr740 hardware at ESTEC.
>> Would it help if the results of that were posted?
> 
> The test output needs to be posted using the public rtems-tools.git `rtems-test`
> command. For public test results to be meaningful, the tools used to generate the
> results need to be public.

That is certainly the plan. 
> 
>> My understanding is that as our project rolls out we will be running hardware tests.
> 
> This is understandable and what I would expect. The role of tiers in RTEMS is to
> bring the results out into the public so all users can view where each
> architecture and device family sits. I personally see value for hardware vendors
> in having devices in tier-1, and I see value for large organisations like ESA
> expecting to see their selected hardware in tier-1. You and I will not make this
> happen.
> 
>> However, I built the test executables using waf configured for the three BSPs
>> above, so it should be possible to do the same for another tier-1 architecture.
> 
> If the tests end up in the testsuite, they will be required to pass on all
> supported architectures. We consider the range of architectures we support a real
> feature because the code needs to be clean.

I am currently looking at the Event Manager API top-down, to produce models and tests.
I see no reason why these would be biased towards any particular BSP, and I would hope
to run the generated tests on a wide range of BSPs at some stage.
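To give a flavour of the sort of behaviour such a test would check, here is a
minimal, purely illustrative sketch against the classic Event Manager directives.
This is not one of our generated tests; the names and structure are only an
assumption of how a hand-written check of the same behaviour might look:

/*
 * Illustrative sketch only (not a generated test): a task sends an event to
 * itself and checks the Event Manager's non-blocking receive behaviour,
 * first when the event is pending, then when nothing is pending.
 */
#include <rtems.h>
#include <assert.h>
#include <stdlib.h>

rtems_task Init( rtems_task_argument arg )
{
  rtems_status_code sc;
  rtems_event_set   out = 0;

  (void) arg;

  /* Post RTEMS_EVENT_0 to the calling task itself. */
  sc = rtems_event_send( rtems_task_self(), RTEMS_EVENT_0 );
  assert( sc == RTEMS_SUCCESSFUL );

  /* The event is pending, so a non-blocking receive is satisfied. */
  sc = rtems_event_receive( RTEMS_EVENT_0,
                            RTEMS_EVENT_ALL | RTEMS_NO_WAIT,
                            RTEMS_NO_TIMEOUT, &out );
  assert( sc == RTEMS_SUCCESSFUL && out == RTEMS_EVENT_0 );

  /* Nothing is pending now, so the same receive is unsatisfied. */
  sc = rtems_event_receive( RTEMS_EVENT_0,
                            RTEMS_EVENT_ALL | RTEMS_NO_WAIT,
                            RTEMS_NO_TIMEOUT, &out );
  assert( sc == RTEMS_UNSATISFIED );

  exit( 0 );
}

#define CONFIGURE_APPLICATION_NEEDS_CLOCK_DRIVER
#define CONFIGURE_APPLICATION_NEEDS_CONSOLE_DRIVER
#define CONFIGURE_MAXIMUM_TASKS 1
#define CONFIGURE_RTEMS_INIT_TASKS_TABLE
#define CONFIGURE_INIT
#include <rtems/confdefs.h>

The idea is that each scenario the model explores (blocking versus non-blocking
receive, ANY versus ALL conditions, timeouts, and so on) would become a concrete
check along these lines.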


Andrew

--------------------------------------------------------------------
Andrew Butterfield     Tel: +353-1-896-2517     Fax: +353-1-677-2204
Lero at TCD, Head of Software Foundations & Verification Research Group
School of Computer Science and Statistics,
Room G.39, O'Reilly Institute, Trinity College, University of Dublin
                         http://www.scss.tcd.ie/Andrew.Butterfield/
--------------------------------------------------------------------
