Integrating the Formal Methods part of Qualification
Andrew.Butterfield at scss.tcd.ie
Fri Jul 1 15:04:46 UTC 2022
On 1 Jul 2022, at 00:59, Chris Johns <chrisj at rtems.org> wrote:
On 28/6/2022 11:09 pm, Andrew.Butterfield at scss.tcd.ie wrote:
Dear RTEMS Developers,
While the validation tests from the RTEMS pre-qualification activity are
now merged into the RTEMS master, the work done in investigating and
deploying formal methods techniques is not yet merged.
The activity had two main phases: a planning phase (Nov 2018-Oct 2019)
that explored various formal techniques, followed by an execution phase
(Oct 2019-Nov 2021) that then applied selected techniques to selected
parts of RTEMS. Some discussions occurred with the RTEMS community
via the mailing lists over this time. A short third phase (Nov 2021 - Dec 2021)
then reported on the outcomes. Each phase resulted in a technical report.
The key decision made was to use Promela/SPIN for test generation, and
to apply it to the Chains API, the Event Manager, and the SMP scheduler
thread queues. This involved developing models in the Promela language
and using the SPIN model checker both to check their correctness
and to generate execution scenarios from which tests could be derived.
Tools were developed using Python 3.8 to support this. Initial documentation
about tools and how to use them was put into the 2nd phase report.
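To give a flavour of what such a model looks like, here is a small, purely
hypothetical Promela sketch (it is not one of the project's models, and the
names in it are made up): it models an imaginary one-bit event being sent and
received, and asserts an outcome that SPIN then checks over every possible
interleaving.

    /* Hypothetical sketch only -- not one of the project's models.
     * It models the specified behaviour of an imaginary one-bit event:
     * a sender posts it, a receiver blocks until it is pending, and
     * SPIN checks the final assertion over all interleavings.
     */
    bit event_pending = 0;   /* has the event been sent? */
    bit event_received = 0;  /* has a receiver consumed it? */

    proctype Sender() {
      atomic { event_pending = 1 }          /* models an "event send" call */
    }

    proctype Receiver() {
      atomic {
        event_pending == 1 ->               /* blocks until the event is pending */
        event_pending = 0;
        event_received = 1
      }
    }

    init {
      run Sender();
      run Receiver();
      (_nr_pr == 1);                        /* wait for both processes to finish */
      assert(event_received == 1)           /* the property SPIN verifies */
    }

The real models are of course larger, but the idea is the same: capture the
specified behaviour of the API in Promela and let SPIN explore every interleaving.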
Congratulations on the work and results you and others have managed to achieve.
It is exciting to see this happening with RTEMS and being made public.
We now come to the part where we explore the best way to integrate this
into RTEMS. I am proposing to do this under the guidance of Sebastian Huber.
The first suggested step is adding in new documentation to the RTEMS
Software Engineering manual, as a new Formal Methods chapter. This would
provide some motivation, as well as a "howto".
I assume that I would initially describe the proposed changes using the patch
review process described in the section on Preparing and Sending Patches in
the User Manual.
How do you feel I should best proceed?
It is hard for me to answer until I understand what is being submitted and who
maintains it. I am sure you understand this due to the specialised nature of the
work.
Indeed, I quite agree. I have some short answers below, with suggestions.
What will be submitted, i.e. SPIN files, scripts, etc.?
Promela/SPIN model files (ASCII text, C-like syntax)
C template files (.h, .c) to assist test code generation
YAML files to provide a mapping from model concepts to RTEMS C test code (see the sketch after this list)
Python scripts to automate the test generation
Documentation - largely RTEMS-compliant Sphinx sources (.rst)
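As a hypothetical flavour of how the model files and the YAML mapping relate
(not the project's actual conventions), a model action might be logged from
Promela using a printf-based annotation style; the idea is that each logged line
can then be looked up in a mapping file and expanded into the corresponding
RTEMS C test code:

    /* Hypothetical annotation style only; the real conventions are
     * defined by the project's mapping/refinement files.
     */
    proctype Worker(byte id) {
      printf("@@@ %d CALL event_send %d\n", _pid, id);  /* abstract API call */
      /* ... the specified effect of the call is modelled here ... */
      printf("@@@ %d RETURN event_send\n", _pid)
    }

When SPIN replays a scenario, the printed lines form a trace, and the mapping
files describe how each kind of line is rendered as C code in the generated test.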
Where are you looking to add these pieces?
Everything except the documentation could live in a top-level folder ('formal') in rtems-central.
In fact there is no particular constraint, from my perspective, on where they could go.
The plan would be to add the pertinent parts of our project documentation into new chapters
in the RTEMS software engineering manual. So that would be eng/ in the rtems-docs repo.
How is the verification run against the code? Do we manage regression testing
and is that even possible?
The Python scripts essentially run SPIN in such a way that it generates scenarios that
model correct behaviour; these scenarios are then turned into standard RTEMS test
programs, which all get added to a new testsuite in the rtems repo (testsuites/models, say).
They are properly integrated into the RTEMS test system, so they are built and run by waf.
This is similar to how the tests generated by Sebastian in testsuites/validation are handled.
From the perspective of a user who works out of git.rtems.org/rtems,
there will be no obvious impact - just some extra tests among the ones that already exist.
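To make the "scenarios become tests" step a little more concrete, here is a
hypothetical sketch of a common SPIN idiom for enumerating scenarios (the
project's scripts may differ in detail): every complete run of the scenario
process ends in a deliberately failing assertion, so SPIN reports each run as a
counterexample, and each counterexample trail can be replayed and refined into
one concrete C test.

    /* Hypothetical sketch of a common test-generation idiom; not
     * necessarily what the project's scripts do in detail.
     */
    byte chosen = 0;

    proctype Scenario() {
      if                  /* nondeterministic choice: each branch is a scenario */
      :: chosen = 1       /* e.g. send an event, then receive it */
      :: chosen = 2       /* e.g. receive with timeout, no sender */
      fi;
      printf("@@@ scenario %d\n", chosen);
      assert(false)       /* force SPIN to report a trail for every complete run */
    }

    init { run Scenario() }

With standard SPIN usage the generated verifier can be told to keep going past
the first "error" and to write a numbered trail file for each one; scripts can
then replay each trail and emit the corresponding test sources.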
I hope my simple questions highlight a lack of understanding of how this works and
how we maintain it and use it once integrated.
I intend to continue to work on and maintain this for the foreseeable future. I would hope, as this beds in, that other Promela/SPIN users out there will also get more involved over time.
It is expected that Promela models will be as static as the corresponding APIs. They capture the specified behaviour of API calls, not their detailed implementation.
The Python scripts should also be fairly stable, although I can see some tweaking for a while to address workflow issues that might arise.
There are some extra Python libraries required over and above what is currently specified in rtems-central/requirements.txt.
Thanks
Chris
Thanks,
Andrew