Proposal for hardware configuration dependent performance limits

Chris Johns chrisj at rtems.org
Wed Nov 25 20:37:43 UTC 2020


On 23/11/20 8:14 pm, Sebastian Huber wrote:
> On 22/11/2020 22:45, Chris Johns wrote:
> 
>>>>>>> My point is that we need a key reported by the BSP and then some performance
>>>>>>> limits which can be found by arch/bsp/key to check if there are performance
>>>>>>> regressions.
>>>>>> I am missing the place where the performance limits are held. Do the tests
>>>>>> report timing values, and do the checks against the limits happen on a host?
>>>>> Yes, this is what I proposed.
>>>> Thanks and sorry for not picking up on this before now. It makes sense to do it
>>>> this way.
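
A minimal illustration of that host-side lookup, in Python; the arch, BSP,
hash, and measurement names below are invented placeholders, not part of the
proposal:

    # Hypothetical host-side store: limits indexed by arch/bsp/key, where the
    # key is the configuration hash reported by the BSP in the test output.
    PERFORMANCE_LIMITS = {
        ("sparc", "leon3", "XrY7u+Ae7tCTyyK7j1rNww=="): {
            "rtems-timer-fire": {"max-upper-bound": 0.000002},  # seconds
        },
    }

    def limits_for(arch, bsp, key):
        # Return the limits for one hardware configuration, or None if the
        # configuration is not covered.
        return PERFORMANCE_LIMITS.get((arch, bsp, key))
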
>>>>
>>> I chimed in on the idea of not using a hash, because of the opaqueness
>>> of the specification and the difficulty of deriving what reasonable
>>> performance should be after small configuration changes from a
>>> standard set. In that case, we do punt some responsibility to the end
>>> user to start from a configuration with a known hash and performance
>>> bounds before defining their own. Otherwise, the best they can do is
>>> what we do: run it, record the measurements, and use those as the
>>> bounds moving forward.
>>>
>>> When a user sends us a report saying "my configuration
>>> a/lwjeVQ:H#TIHFOAH doesn't match the performance of
>>> z./hleg.khEHWIWEHFWHFE", then we can have this conversation again. :)
>> If the user is basing their figures on a set of results we publish, would
>> providing a description in the YAML be sufficient? This moves the maintenance
>> burden from inside the RTEMS Project to the outside. And I am fine if there
>> are mandatory informational fields.
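
For illustration only, such a published entry could carry the opaque hash plus
the mandatory informational fields that give the reverse mapping back to the
hardware configuration. A rough Python sketch with made-up field and
measurement names, not the actual specification format:

    # Hypothetical published limits entry; the informational fields document
    # the hardware configuration behind the otherwise opaque hash.
    published_entry = {
        "hash": "XrY7u+Ae7tCTyyK7j1rNww==",
        "board": "X",
        "revision": "Y",
        "clock-mhz": 123,
        "smp": True,
        "limits": {
            "rtems-timer-fire": {"max-upper-bound": 0.000002},  # seconds
        },
    }
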
> 
> In the proposal the performance limits are optional in the specification. The
> specification only enables you to check limits in test runs. We should probably
> not integrate limits for custom hardware into the RTEMS Project. We could add
> performance limits for some evaluation boards or simulators which we use for
> regular test runs, in order to notice performance regressions. The configuration
> hashes included in the RTEMS Project need to be documented (in the BSP build
> item) to have a reverse mapping, e.g. XrY7u+Ae7tCTyyK7j1rNww== is board X
> revision Y running at 123MHz with SMP enabled, etc. Users who want to monitor
> performance requirements also have to use this approach:
> 
> 1. You select a board to use for long term performance tests.
> 
> 2. You define a set of configurations you want to test.
> 
> 3. You do an initial run of the test suite for each configuration. The RTEMS
> Tester provides you with machine-readable output (test data) of the test run,
> with the raw test output per test executable and some meta information (TODO).
> 
> 4. A tool reads the test data and the RTEMS specification and updates the
> specification with the performance limits obtained from the test run (maybe with
> some simple transformation, for example increase the maximum by 10% and round to
> microseconds; see the sketch after this list).
> 
> 5. You review the performance limits and then commit them to your private branch.
> 
> 6. Later you run the tests with a new RTEMS commit, get the performance values,
> compare them against the limits stored in the specification, and generate a report.
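
To make steps 4 and 6 concrete, here is a rough Python sketch of what such a
host-side tool could do; the test data layout, the measurement names, and the
10% margin are assumptions for illustration, not the actual RTEMS Tester
output format:

    import math

    def derive_limits(test_data, margin=0.10):
        # Step 4: turn the measured values from an initial run into limits,
        # e.g. increase the maximum by 10% and round up to whole microseconds.
        limits = {}
        for name, values in test_data["measurements"].items():
            padded = max(values) * (1.0 + margin)  # seconds
            limits[name] = {"max-upper-bound": math.ceil(padded * 1e6) / 1e6}
        return limits

    def check_run(test_data, limits, override_hash=None):
        # Step 6: compare a later run against the stored limits and build a
        # simple report. The optional override_hash anticipates the tester
        # option discussed below.
        config_hash = override_hash or test_data["hash"]
        report = []
        for name, values in test_data["measurements"].items():
            bound = limits[name]["max-upper-bound"]
            worst = max(values)
            status = "ok" if worst <= bound else "regression"
            report.append((config_hash, name, worst, bound, status))
        return report

    # Example: derive limits from an initial run (step 4), review and commit
    # them, then later check a new run against them (step 6).
    initial_run = {
        "hash": "XrY7u+Ae7tCTyyK7j1rNww==",
        "measurements": {
            "rtems-timer-fire": [0.0000017, 0.0000019, 0.0000018],  # seconds
        },
    }
    limits = derive_limits(initial_run)
    report = check_run(initial_run, limits)
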
> 
> Maybe a configuration option for the RTEMS Tester should be added that allows
> you to set the performance hash and ignore the hash provided by the test output.
> This could be used to compare a custom board with values obtained from an
> evaluation board.
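
In terms of the sketch above, that option would amount to pinning the hash used
for the lookup, e.g. when measuring a custom board against evaluation board
limits (again only an illustration):

    # A run captured on a custom board, same layout as initial_run above.
    custom_board_run = {
        "hash": "some-other-hash",
        "measurements": {
            "rtems-timer-fire": [0.0000021, 0.0000022],  # seconds
        },
    }
    report = check_run(custom_board_run, limits,
                       override_hash="XrY7u+Ae7tCTyyK7j1rNww==")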

Why not let the tester take an alternative set of values for the same hash to
override the "standard" set?

Chris

ps: sorry I forgot to send this.

