[PATCH] membench: Add memory benchmark programs
Gedare Bloom
gedare at rtems.org
Tue Jul 25 02:16:42 UTC 2023
On Mon, Jul 24, 2023 at 2:01 AM Sebastian Huber
<sebastian.huber at embedded-brains.de> wrote:
>
> On 21.07.23 09:43, Chris Johns wrote:
> > On 21/7/2023 3:28 pm, Sebastian Huber wrote:
> >> On 21.07.23 03:27, Chris Johns wrote:
> >>> On 21/7/2023 3:51 am, Sebastian Huber wrote:
> >>>> On 20.07.23 18:58, Gedare Bloom wrote:
> >>>>> On Thu, Jul 20, 2023 at 7:42 AM Sebastian Huber
> >>>>> <sebastian.huber at embedded-brains.de> wrote:
> >>>>>> These memory benchmark programs are not supposed to run. Instead, they
> >>>>>> can be analysed on the host system to measure the memory usage of
> >>>>>> features. See the membench module of rtems-central.
> >>>>>>
> >>>>> This needs some kind of documentation and probably a README inside of
> >>>>> membench with that information.
> >>>>
> >>>> Ok, I can add a README.md.
> >>>>
> >>>>>
> >>>>> This appears to be about benchmarking the program size (static memory
> >>>>> usage) only? If so, make that clear in the README / log note. I think
> >>>>> it's in the doxygen already so that's helpful.
> >>>>
> >>>> Yes, it measures only the static memory size required for certain operating
> >>>> system services. See 4.7 Memory Usage Benchmarks in:
> >>>>
> >>>> https://ftp.rtems.org/pub/rtems/people/sebh/rtems-6-sparc-gr740-uni-6-scf.pdf
> >>>
> >>> Should `static` be part of naming?
> >>
> >> Yes, good idea.
> >>
> >>>
> >>>>> What happens when the membench gets built, and then someone runs
> >>>>> $> rtems-test build/${ARCH}/${BSP}/testsuites
> >>>>>
> >>>>> Because I don't see anything that is filtering these executables.
> >>>>
> >>>> They are filtered out due to the *.norun.* pattern:
> >>>> target: testsuites/membench/mem-scheduler-add-cpu.norun.exe
> >>>>
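Just to make the filtering concrete for the README: any tooling can
reproduce it with a simple pattern match. A minimal sketch in Python
(illustrative only, not the actual rtems-test code):

    import fnmatch

    def runnable_tests(executables):
        """Drop executables tagged as not runnable (*.norun.*)."""
        return [exe for exe in executables
                if not fnmatch.fnmatch(exe, "*.norun.*")]

    # runnable_tests(["ticker.exe", "mem-scheduler-add-cpu.norun.exe"])
    # -> ["ticker.exe"]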
> >>>
> >>> Currently, tests tagged `norun` rely on the build failing if there is an issue
> >>> with the test. This is why we allow these tests and tag them `norun`.
> >>
> >> We already have a couple of norun tests in libtests. This filtering is simple
> >> and works fine; why would you want to change it?
> >
> > I am not asking for that to change. After a build we will have a set of norun
> > tests, and in that set are some that are used for other purposes, e.g. memory
> > analysis, but that information is not available in the project. The norun could
> > be extended to .norun.memstatic.exe so that the executables that form a
> > specific subset can be found and analysed.
>
> The modules used to analyze static memory benchmarks know how to find
> them. The pattern is currently f"{path}/mem-{module}-{name}.norun.exe".
> We could of course also use a different pattern or no pattern at all,
> since the specification knows exactly which executable is associated
> with which item.
>
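Right, and that logic does not have to live in rtems-central. A rough
sketch of how a host-side tool could locate and group the benchmarks,
assuming only the pattern you give above (the helper name is made up):

    import glob
    import os
    import re

    # Matches mem-<module>-<name>.norun.exe, per the pattern above.
    _MEMBENCH_RE = re.compile(r"mem-(?P<module>[^-]+)-(?P<name>.+)\.norun\.exe$")

    def find_membench_executables(build_dir):
        """Return {(module, name): path} for all static memory benchmarks."""
        benchmarks = {}
        pattern = os.path.join(build_dir, "**", "*.norun.exe")
        for path in glob.glob(pattern, recursive=True):
            match = _MEMBENCH_RE.search(os.path.basename(path))
            if match:
                benchmarks[(match.group("module"), match.group("name"))] = path
        return benchmarks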
> >
> > The tests have been self-contained for a long time and I would like that to
> > continue. ELF notes have been discussed in the past; however, we do not yet
> > support them, so we need to find other ways to handle things.
>
> The static memory benchmarks are not useful individually. You have to
> know the relationship between them to get the results. For example, what
> is the cost of using API call X in terms of static memory usage?
>
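That relationship is the part worth documenting. As I understand it,
the interesting number is a difference between two linked executables,
e.g. one that uses the feature and a baseline that does not. A rough
sketch of that comparison using pyelftools (not what membench.py
actually does, just to illustrate the idea):

    from elftools.elf.constants import SH_FLAGS
    from elftools.elf.elffile import ELFFile

    def allocated_section_sizes(path):
        """Sizes of the SHF_ALLOC sections, i.e. the static memory footprint."""
        sizes = {}
        with open(path, "rb") as stream:
            for section in ELFFile(stream).iter_sections():
                if section["sh_flags"] & SH_FLAGS.SHF_ALLOC:
                    sizes[section.name] = section["sh_size"]
        return sizes

    def feature_cost(feature_exe, baseline_exe):
        """Per-section static memory cost of a feature relative to a baseline."""
        feature = allocated_section_sizes(feature_exe)
        baseline = allocated_section_sizes(baseline_exe)
        return {name: feature.get(name, 0) - baseline.get(name, 0)
                for name in set(feature) | set(baseline)}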
> >
> >>> Are they supposed to be checked or are they informational? Is something going to
> >>> be added to the project, for example in rtems-tools.git, to allow these tests to
> >>> be checked?
> >>
> >> Currently, they are just informational,
> >
> > I do not understand. What information, for what purpose and for whom?
>
> I would have a look at section 4.7, Memory Usage Benchmarks, to get
> an idea:
>
> https://ftp.rtems.org/pub/rtems/people/sebh/rtems-6-sparc-gr740-uni-6-scf.pdf
>
> This is just one application. You could also use the static memory
> benchmarks in a CI runner to catch memory usage regressions.
>
We should have a use or plan for them. Adding the output as part of
build tests is a good start. Maybe we can tie them into the build
visualization project in the future.
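For the CI runner case, even a checked-in baseline with a tolerance
would be enough to catch regressions. A sketch (the baseline file
format and threshold are made up for illustration):

    import json

    def check_regressions(measured, baseline_file, tolerance=0.05):
        """Report benchmarks that grew by more than the tolerance."""
        with open(baseline_file) as handle:
            baseline = json.load(handle)  # e.g. {"scheduler-add-cpu": 12345}
        failures = []
        for name, size in measured.items():
            old = baseline.get(name)
            if old is not None and size > old * (1.0 + tolerance):
                failures.append(f"{name}: {old} -> {size} bytes")
        return failures

    # In a CI job:
    # failures = check_regressions(measured_sizes, "membench-baseline.json")
    # if failures:
    #     print("\n".join(failures))
    #     raise SystemExit(1)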
> >
> >> All the stuff to analyze this and work with
> >> the specification is in rtems-central.git. If you think this needs to be
> >> changed, then I am happy to discuss this.
> >
There is a philosophical discussion to be had here, and a debate of
sorts. IIRC, the original intent of rtems-central was to be a "repo of
repos" providing a centralized workflow for developers and
pre-qualification. Now, it contains a lot of core support code itself.
The current design creates a dependency loop among repositories, which
IMO should be avoided. I'm happy to hear other perspectives.
I would suggest promoting (maturing) some of the code out of
rtems-central into existing or new support repositories where
appropriate. This would be a good opportunity to start pulling out and
modularizing pieces of rtems-central.git. If it makes more sense to add
a new top-level repo to support some of the content generation, then
so be it. I would find this easier to comprehend. I do see that this
may be a bit involved at present. The sphinxcontent.py generator
scripts should be modularized out in parallel.
Would it be hard to pull out the analysis logic from
https://git.rtems.org/rtems-central/tree/membench.py and migrate
support to rtems-tools.git (or other modular repositories)?
Gedare
> > Let's first understand the role these tests have. Adding them slows the build
> > down but that is OK if there is value in them for everyone.
>
> They are controlled by the build option BUILD_MEMBENCH, which is set to
> false by default. So, by default, they don't slow down the build.
>
> >
> >> My preference is still to get rid of
> >> all the separate repositories and move everything back to rtems.git.
> >
> > The qual work was separated for specific reasons. Those reasons are still valid.
> Maybe this needs to be reevaluated. The toolbox in rtems-central is
> quite capable. It could also be used, for example, to create an RTEMS release.
>
> >
> >> What is the plan for the CI flows?
> >
> > I believe it is with Joel and Amar. I am waiting like everyone else.
>
>