RFC: Value of New Section on Tools Build Time Expectations

Joel Sherrill joel at rtems.org
Mon Oct 22 14:18:27 UTC 2018


On Sun, Oct 21, 2018 at 6:59 PM Chris Johns <chrisj at rtems.org> wrote:

> On 22/10/2018 09:11, Joel Sherrill wrote:
> > On Sun, Oct 21, 2018 at 2:16 PM Christian Mauderer <list at c-mauderer.de
> > <mailto:list at c-mauderer.de>> wrote:
> >     Am 21.10.18 um 19:07 schrieb Joel Sherrill:
> >     > Hi
> >     >
> >     > I am in the middle of reconfiguring my old (~2013 i7) laptop as an email
> >     > and remote workstation for home. Between helping students and doing two
> >     > Kick Starts in the past 6 weeks, VM configuration, disk space
> >     > expectations, and time required to build the tools seem to be a topic
> >     > that needs to be addressed. Under-configured VMs don't finish building
> >     > or take a LONG time.
>
> My only concern with VMs is promoting the idea of needing a VM for RTEMS
> development. A lot of work has gone into making the tools native across a wide
> number of hosts, and native tools are the best solution for a user. I hope we
> encourage and teach using native tools before a VM.
>

I am sorry if it seemed the emphasis was on VMs.  My intent was to include
build times for sparc-rtems5 with the source pre-downloaded on a variety
of host environments and computer levels. The time varies a lot.

Yes, there would be some VM advice, but it would be secondary to the idea
that if you build on a Pi or an i3 with a 5200 RPM laptop drive, expect it to
take a long time.  Plus, even on a fast machine, all I can say is that Cygwin is
slow. I can't give an estimate.
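
For context, the numbers I have in mind are for a plain sparc tool set build
with the RSB, roughly along these lines (the prefix and log file name are just
the usual examples, not a requirement):

    cd rtems-source-builder/rtems
    ../source-builder/sb-set-builder --log=sparc-build.log \
        --prefix=$HOME/development/rtems/5 5/rtems-sparc

timed with the source already downloaded so network speed does not skew the
comparison.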


> I have used native RTEMS Windows and Mac tools for development. It can mean
> learning some different work flows but in the end it is all rather boringly
> similar. The most difficult bit on Windows is debugging and the solution tends
> to be remote TCP to a GDB server elsewhere. Who else has used Windows or Mac
> for development?
>

I did include times for MSYS2 and Cygwin in my list. I don't have a Mac to
add that.

>
> >     > I am proposing that I gather performance and configuration notes for
> >     > building SPARC tools after downloading source on a few configurations:
>
> What about a link to the builds mailing list archive with something about the
> values to look for? Could the host memory and CPU type be added to the email
> reports?
>

That won't help the people with problems because anyone who posts to that
list (1) has a fast machine and (2) has tuned it.  I used 8 and 12 core machines
with SSDs to report those. I doubt they are representative of what a
GCI student uses.

>
> >     >
> >     > + Laptop 2013 i7: Centos on VM
> >     > + Laptop 2013 i7: MSYS2
> >     > + Laptop 2013 i7: Cygwin
>
> Are there any Cygwin build results posted to builds?
>

I can include that in my next build sweep. Part of my plan was just to build
across a lot of configurations and gather the same information from each, then
report it so people could look at a table with some advice and know what to
expect.

I mention my 2013 i7 because even though it is old, it really is not that
different in performance from a new low-to-mid range laptop.
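
To keep the numbers comparable across hosts, I was planning to record the same
host details next to each timing. On the Linux hosts that is just something
like the following (Cygwin and MSYS2 need slightly different commands):

    grep "model name" /proc/cpuinfo | head -1
    nproc
    free -h
    df -h $HOME

so the table can show CPU, cores, RAM, and free disk alongside the build time.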

>
> >     > + Laptop 2017 i7: Same three
> >     > + 8 core Xeon Centos native
> >     > + 12 core i7 Fedora native
>
> I would prefer we did not gather and publish in the documentation information
> that is hard to keep consistent, could be misleading, and is often out of date
> as soon as it is published. I remember removing some stuff like this when I
> moved the docs to Sphinx as the data was over ten years old. I cannot find it
> now in the old doco.
>

OK. I will see if I can generalize and maybe make a blog post.


>
> I am fine with the amount of disk space needed to build a tool set and then a
> more general comment that the more cores, memory, and faster disks you use, the
> faster the build will be. My Windows builds are on striped disks.
>

My point exactly. The core developers build on machines that are not
representative of what the average user has.

>
> For Windows you can document that the POSIX layer for the shell etc. adds
> overhead, plus virus checking slows the build down, so the build directory
> should be tagged as a directory not to check. I have no idea how Cygwin and
> Windows Defender interact but I suspect it will slow things down by a lot and
> excluding it would help.
>
> On Windows, does the virus scanner scan the VM disk file in real time, or are
> the VMs smart enough to have that file excluded?
>

I don't know. I have seen us have to disable virus scanners on some computers
here at OAR, but it tends to be on the Windows VMs themselves.
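
If excluding just the build tree is enough (rather than disabling the scanner
outright), Defender can be told to skip it from an elevated prompt; the path
below is only a placeholder for wherever the RSB tree actually lives:

    powershell.exe -Command "Add-MpPreference -ExclusionPath 'C:\msys64\home\user\rtems'"

I have not measured how much that buys under Cygwin, so treat it as a guess
rather than a recommendation.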

>
> >     > On the 2017 7th generation i7, differences in VM configuration can
> >     > result in the build time for sparc tools almost tripling.
> >     >
> >     > Does this sound useful or confusing? I know it is potentially somewhat
> >     > volatile information. But my old 3rd generation i7 CPU benchmark results
> >     > are comparable to an i5 that is much newer. Plus my old i7 has an SSD
> >     > which many newer i3/i5 laptops do not.
> >     >
> >     > Feedback appreciated.
> >     >
> >     > --joel
> >     >
> >
> >     Hello Joel,
> >
> >     in my experience, the biggest difference in the build time is the number
> >     of cores (in a VM or on a real machine). The processor generation didn't
> >     seem to have that much influence. But I never measured exact numbers.
> >
> > I only mention the processor generation because we don't tend to have i3 or i5
> > CPUs available but students and users do. My 6-year old i7 benchmarks like a
> > newer i5. But often i5's don't come with SSDs so they can suffer even more.
> >
> > Number of cores and RAM are the two big factors. As is making sure you have
> > enough disk space allocated to avoid turning the entire thing into an
> > exercise in frustration.
>
> Agreed, the RSB has been updated recently to report usage.
>

And this helps. I just want to give advice based on that before someone creates
a VM or partitions a disk.


>
> >     It might be a good idea to add some rough numbers somewhere (maybe
> >     in the RSB manual) so that a new user knows what to expect, for
>
> Hmm, User manual instead?
>

I think it should be there. I would like the RSB manual to have "internals" or
"developer" in the title. Instructions for using it should not be in it.

>
> I am happy for general guidelines or instructions on improving the experience
> for new users with new systems. For example, with the latest patch I pushed
> over the weekend, running a set builder command with `--dry-run` will let you
> know if the Python libraries are installed before anything is built. I am not
> convinced about the cost/benefit for any specific detail such as build times
> and host processor types.
>

You haven't spent a day helping a room full of people try to build the tools
and found out that many fail due to lack of disk space or take forever due
to under-configured VMs. I am not saying we should recommend VMs, just
that we should admit people use them and give advice.

Sometimes I have a 50+% failure rate and end up with people resizing disks
or reloading VMs.
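
A --dry-run check like that is exactly the sort of thing I want to put in front
of people before they commit to a long build. A pre-flight sequence along these
lines would catch most of the missing-package and flaky-download failures
(sparc is just the usual Getting Started example, the prefix is a placeholder,
and I am assuming --source-only-download is still the "just fetch the source"
option):

    cd rtems-source-builder/rtems
    # fetch all the source up front so network problems fail fast
    ../source-builder/sb-set-builder --source-only-download 5/rtems-sparc
    # check host packages, including the Python bits, without building
    ../source-builder/sb-set-builder --dry-run \
        --prefix=$HOME/development/rtems/5 5/rtems-sparc

The disk space and VM sizing problems are the ones that would still be left.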


>
> Let's not forget building the tools should be done once for a project and not
> something you do each week.
>

I agree. But it is the first thing you do and that's the first impression.

As the old saying goes, first impressions count.


>
> >     example roughly 4 to 6 hours on a single core or about 1/2 to 3/4 of an
> >     hour on an 8 core Linux system. It might be interesting to have
> >     some rough numbers for other commonly used systems too (like MSYS2,
> >     Cygwin, FreeBSD or macOS).
>
> The builds mailing list has real values ...
>
>  https://lists.rtems.org/pipermail/build/
>
> For Windows I have ...
>
>  https://lists.rtems.org/pipermail/build/2018-October/001164.html
>
> I stopped the build because I am fixing some more places where long paths are
> being used, which is why the arm build broke ...
>
>   https://lists.rtems.org/pipermail/build/2018-October/001177.html
>
> > My 7th generation i7 (Dell from last fall) is ~35 minutes for SPARC as
> > I tune my VM.
>
> A fast i7 MacBook Pro (PCIe SSD, 32G RAM, APFS) and native tools is around
> 5min for a bfin. The Mac posts to builds are from a Mac Mini with less
> performance ...
>
>   https://lists.rtems.org/pipermail/build/2018-October/001168.html
>
> and the bfin is 17mins. I use the bfin to test the RSB because it is fast to
> build.
>

Good for smoke tests. Yet all our Getting Started is for sparc and that's a
bit more. Looks like an hour from the same build run:

https://lists.rtems.org/pipermail/build/2018-October/001123.html

That's ~2x a CentOS VM on my laptop. So it varies a lot.

We also had someone on the gci@ mailing list who seemed to take days for
the build to complete.


> > A student in a Kick Start with the same laptop turned that into
> > a 90 minute build by having 1 core and less RAM. So even with a fast machine,
> > the guidance on the VM is important.
>
> How about "Give it everything you have!" and "Don't try and play video
> games!" :)
>

+1

>
> > Ignoring those who pick tiny virtual HD sizes and then can't even complete the
> > build no matter how long they wait.
>
> We should document this. I have had real problems with VirtualBox and sharing
> host disks. Last time I tried, about 12 months ago, it did not work.
>

AFAIK you can't successfully build on a mounted shared drive. That's why it is
critical to allocate enough disk space to the native filesystem for the virtual OS.
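
A simple pre-check is to look at the filesystem that will hold the build tree
before starting (here assumed to be under $HOME; the RSB's own usage report
gives the real space requirement):

    df -hT $HOME

The type column also makes it obvious when someone is accidentally on a
VirtualBox shared folder (vboxsf) rather than the VM's native disk.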


>
> >     I don't think that it is useful to compare processor generations. That
> >     would be information that would have to be updated on a regular basis
> >     to have any use. I would only add some (few) examples.
> >
> > The CPU generation wasn't the point. Just that the older one is slower. Newer
> > generation ones are often only 20% faster than the old one. Just a
> > reference point.
> >
> >     If you find any big influences besides the number of cores (you
> >     mentioned VM settings), it might be worth adding a general section with
> >     tips for speeding up the build process.
> >
> > Not failing is the biggest one. I recommend downloading all source first since that
> > sometimes fails, double-checking host packages, and Chris and I moved gdb before
>
> It is what happens when we spend a couple of days commuting from Oakland to
> Hayward in the Bay area. :)
>
> > gcc/newlib in the tools bset so users would fail on that early rather than last.
>
> And the latest patch has code to find Python.h and libpython<M><m>.* ..
>
>
> https://git.rtems.org/rtems-source-builder/tree/source-builder/config/gdb-common-1.cfg#n70
>
> > That's about it beyond VM tuning and expectations. If you have an i3 with a
> > 5200RPM slow laptop drive, it is going to take a while. We say 30-45 minutes
> > and we all have nice computers with tuned VMs.
> >
> > And Windows builds are WAY slower and I can't even give you an estimate at
> > this point. I just walk away.
>
> Building the tools is slower but you can get the overhead to be just the POSIX
> Cygwin/MSYS overhead and not much more. A build of libbsd with native Windows
> tools should be fast and in my experience it is. To me this is more important
> than the tools build time and we should not lose sight of this. My hope is users
> spend more time building applications than tools.
>

Me too but they have to finish the tools. :)


>
> > So this was just "here's what we know about what to expect and what you can
> > do to help".
>
> Seems like a good idea.
>

That's all I was trying to capture. Build times seem to vary by a factor of
10 between core developers and users, especially students with lower end
computers. We want folks to succeed.

>
> Chris
>