[PATCH 0/8] Add test suite to validate performance requirements

Chris Johns chrisj at rtems.org
Sun Nov 15 23:53:13 UTC 2020


On 14/11/20 11:13 pm, Sebastian Huber wrote:
> On 13/11/2020 20:21, Gedare Bloom wrote:
> 
>> I didn't really raise this in your other threads related to
>> performance, but how are we (RTEMS Project) defining performance
>> requirements? Are these simply the performance values we get by
>> running the tests on our release? Do we aim to hit certain targets and
>> reject patches if they don't? Or do we have to debate when a patch
>> does change a performance value? Are we going to need to run these
>> performance tests as part of regular testing before committing?
>>
>> I'd like to get some clarity on the idea of performance requirements
>> as a community burden before we bake them into RTEMS.
> 
> Right now we have no systematic indication of whether there are performance
> regressions or improvements. What influence compiler updates have, we don't
> know. I think for an operating system that is supposed to be real-time, this is
> not really good, and we should improve the situation step by step. One way to do
> this is to add performance requirements stating that the runtime of a certain
> operation, in a particular environment and system condition, should be less
> than or equal to a certain limit. This limit is obviously hardware dependent.
> For a start, we can run the performance tests and record the data in the
> specification data. In a later test run, we can check the values and notice if
> there are changes in the wrong direction. The new test framework was
> specifically written to get this data easily. See for example (1.5 Test Suite -
> ClockTick):
> 
> https://ftp.rtems.org/pub/rtems/people/sebh/test-report.pdf
> 
> Not every target is suitable for getting stable performance data (Qemu is
> really bad at this). I think SIS (or other GDB simulators) would be a good
> candidate for doing this on a regular basis.
> 
> What can we do with the data? It can give some aid in judging changes.
> 

Yes, but we need to be cautious. Knowing what changes is good; how we use that
data is hard. For example, compiler backends vary and evolve at different rates,
so does a performance regression on one architecture affect updating the
compiler if other architectures benefit? There are many questions like this.
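
For reference, the kind of limit check described above could look roughly like
the following minimal C sketch. The limit value, the measured operation, and the
helper names are hypothetical placeholders, not the actual RTEMS test framework
API:

#include <stdio.h>
#include <stdint.h>
#include <inttypes.h>
#include <time.h>

/* Hypothetical limit recorded in the specification data for this
 * target and system condition (nanoseconds). */
#define OPERATION_LIMIT_NS 5000ULL

static uint64_t now_ns(void)
{
  struct timespec ts;

  clock_gettime(CLOCK_MONOTONIC, &ts);
  return (uint64_t) ts.tv_sec * 1000000000ULL + (uint64_t) ts.tv_nsec;
}

/* Stand-in for the operation being measured, e.g. a clock tick. */
static void operation_under_test(void)
{
}

int main(void)
{
  uint64_t begin;
  uint64_t duration;

  begin = now_ns();
  operation_under_test();
  duration = now_ns() - begin;

  /* Compare the measured runtime against the recorded limit and
   * report a regression if the limit is exceeded. */
  if (duration > OPERATION_LIMIT_NS) {
    printf("regression: %" PRIu64 " ns > limit %" PRIu64 " ns\n",
           duration, (uint64_t) OPERATION_LIMIT_NS);
    return 1;
  }

  printf("ok: %" PRIu64 " ns <= limit %" PRIu64 " ns\n",
         duration, (uint64_t) OPERATION_LIMIT_NS);
  return 0;
}

In practice the limit would come from the performance data recorded per target,
and the check would run inside the test framework rather than as a standalone
program.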

Chris

