RFC: Value of New Section on Tools Build Time Expectations
chrisj at rtems.org
Sun Oct 21 23:59:27 UTC 2018
On 22/10/2018 09:11, Joel Sherrill wrote:
> On Sun, Oct 21, 2018 at 2:16 PM Christian Mauderer <list at c-mauderer.de
> <mailto:list at c-mauderer.de>> wrote:
> Am 21.10.18 um 19:07 schrieb Joel Sherrill:
> > Hi
> > I am in the middle of reconfiguring my old (~2013 i7) laptop as an email
> > and remote workstation for home. Between helping students and doing two
> > Kick Starts in the past 6 weeks, VM configuration, disk space
> > expectations, and time required to build the tools seem to be a topic
> > that needs to be addressed. Under-configured VMs don't finish building
> > or take a LONG time.
My only concern with VMs is promoting the idea of needing a VM for RTEMS
development. A lot of work has gone into making the tools native across a wide
range of hosts, and native tools are the best solution for a user. I hope we
encourage and teach using native tools before a VM.
I have used native RTEMS Windows and Mac tools for development. It can mean
learning some different work flows but in the end it is all rather boringly
similar. The most difficult bit on Windows is debugging, and the solution tends
to be a remote TCP connection to a GDB server elsewhere. Who else has used Windows or Mac
tools for development?
> > I am proposing that I gather performance and configuration notes for
> > building SPARC tools after downloading source on a few configurations:
What about a link to the builds mailing list archive with something about the
values to look for? Could the host memory and CPU type be added to the email reports?
> > + Laptop 2013 i7: Centos on VM
> > + Laptop 2013 i7: MSYS2
> > + Laptop 2013 i7: Cygwin
Are there any cygwin build results posted to builds?
> > + Laptop 2017 i7: Same three
> > + 8 core Xeon Centos native
> > + 12 core i7 Fedora native
I would prefer we did not gather and publish in the documentation information
that is hard to keep consistent, could be misleading, and is often out of date
as soon as it is published. I remember removing some material like this when I
moved the docs to Sphinx because the data was over ten years old. I cannot find
it now in the old doco.
I am fine with the amount of disk space needed to build a tool set and then a
more general comment that the more cores, memory and fast disks you use the
faster the build will be. My Windows builds are on striped disks.
For Windows you can document that the POSIX layer for the shell etc. adds overhead,
and that virus checking slows the build down, so the build directory should be marked
as a directory not to scan. I have no idea how Cygwin and Windows Defender
interact but I suspect it will slow things down by a lot and excluding it would help.
On Windows, does the virus scanner scan the VM disk file in real time, or are the
VMs smart enough to have that file excluded?
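If we do document the exclusion, a concrete command helps. For Windows Defender it can
be added from an elevated PowerShell prompt; the path below is illustrative, not a
documented RTEMS location:

```shell
# Exclude the RSB build directory from Windows Defender real-time scanning.
# Run from an elevated (administrator) PowerShell prompt; adjust the path
# to wherever your build tree actually lives.
Add-MpPreference -ExclusionPath "C:\opt\rtems\build"
```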
> > On the 2017 7th generation i7, differences in VM configuration can
> > result in the build time for sparc tools almost tripling.
> > Does this sound useful or confusing? I know it is potentially somewhat
> > volatile information. But my old 3rd generation i7 CPU benchmark results
> > are comparable to an i5 that is much newer. Plus my old i7 has an SSD
> > which many newer i3/i5 laptops do not.
> > Feedback appreciated.
> > --joel
> Hello Joel,
> in my experience, the biggest difference in the build time is the number
> of cores (in a VM or on a real machine). The processor generation didn't
> seem to have that much influence. But I never measured exact numbers.
> I only mention the processor generation because we don't tend to have i3 or i5
> CPUs available but students and users do. My 6-year old i7 benchmarks like a
> newer i5. But often i5's don't come with SSDs so they can suffer even more.
> Number of cores and RAM are the two big factors. As is making sure you have
> enough disk space allocated to avoid turning the entire thing into an exercise in frustration.
Agreed, the RSB has been updated recently to report usage.
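A simple pre-flight check could also go in the docs. This sketch warns when free
space looks low; the 25 GiB threshold is illustrative, not a number from the RSB:

```shell
# Warn if free space in the current filesystem looks too low for a tools
# build. df -Pk prints sizes in KiB in POSIX format; field 4 of line 2 is
# the available space.
free_kib=$(df -Pk . | awk 'NR==2 {print $4}')
if [ "$free_kib" -lt $((25 * 1024 * 1024)) ]; then
  echo "warning: less than 25 GiB free, the tools build may run out of space"
fi
```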
> It might be a good idea to add some rough numbers somewhere (maybe
> in the RSB manual) so that a new user knows what to expect, for
Hmm, User manual instead?
I am happy for general guidelines or instructions on improving the experience
for new users with new systems, for example with the latest patch I pushed over
the weekend running a set builder command with `--dry-run` will let you know if
the Python libraries are installed before anything is built. I am not convinced
about the cost/benefit for any specific detail such as build times and host hardware.
Lets not forget building the tools should be once for a project and not
something you do each week.
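For reference, a set builder dry run looks roughly like this; the prefix and the
build set name are illustrative, not prescriptive:

```shell
# From the rtems/ directory of a checked-out rtems-source-builder tree.
# --dry-run walks the build set and runs the pre-build checks (including
# the new Python library check) without actually building anything.
../source-builder/sb-set-builder --prefix=/opt/rtems/5 --dry-run 5/rtems-sparc
```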
> example roughly 4 to 6 hours on a single core or about 1/2 to 3/4 of an
> hour on an 8 core Linux system. It might be interesting to have
> some rough numbers for other commonly used systems too (like MSYS2,
> Cygwin, FreeBSD or MacOS).
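Those numbers are roughly what you would expect from Amdahl's law: the build
parallelises well, but configure steps and the like stay serial. A toy sketch,
where the serial fraction is a made-up illustrative value, not a measurement:

```python
def build_time(single_core_hours, cores, serial_fraction=0.02):
    """Estimate parallel build time via Amdahl's law (illustrative only)."""
    return single_core_hours * (serial_fraction + (1 - serial_fraction) / cores)

# A hypothetical 5-hour single-core build on 8 cores:
print(round(build_time(5.0, 8), 2))  # about 0.71 hours, i.e. ~3/4 of an hour
```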
The builds mailing list has real values ...
For Windows I have ...
I stopped the build because I am fixing some more places where long paths are
being used which is why the arm build broke ...
> My 7th generation i7 (Dell from last fall) is ~35 minutes for SPARC as
> I tune my VM.
A fast i7 macbook pro (PCIe SSD, 32G RAM, APFS) and native tools is around 5min
for a bfin. The Mac posts to builds are from a Mac Mini with less performance ...
and the bfin is 17mins. I use the bfin to test the RSB because it is fast to build.
> A student in a Kick Start with the same laptop turned that into
> a 90 minute build by having 1 core and less RAM. So even with a fast machine,
> the guidance on the VM is important.
How about "Give it everything you have!" and "Don't try to play video games!" :)
> Ignoring those who pick tiny virtual HD sizes and then can't even complete the
> build no matter how long they wait.
We should document this. I have had real problems with VirtualBox and sharing
host disks. Last time I tried about 12 months ago it did not work.
> I don't think that it is useful to compare processor generations. That
> would be an information that would have to be updated on a regular basis
> to have any use. I would only add some (few) examples.
> The CPU generation wasn't the point. Just that the older one is slower. Newer
> generation ones are often only 20% faster than the old one. Just a reference point.
> If you find any big influences beneath number of cores (you mentioned VM
> settings), it might be worth adding a general section with tips
> for speeding up the build process.
> Not failing is the biggest one. I recommend downloading all source first since that
> sometimes fails, double-checking host packages, and Chris and I moved gdb before
It is what happens when we spend a couple of days commuting from Oakland to
Hayward in the Bay Area. :)
> gcc/newlib in the tools bset so users would fail on that early rather than last.
And the latest patch has code to find Python.h and libpython<M><m>.* ..
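The idea behind that check can be sketched with the standard `sysconfig` module;
this is an approximation of what such a check does, not the RSB's actual code:

```python
# Locate where Python.h should live for this interpreter. The header only
# exists if the Python development package is installed, which is exactly
# what a pre-build check wants to know before gdb is configured.
import os
import sysconfig

include_dir = sysconfig.get_paths()["include"]
header = os.path.join(include_dir, "Python.h")
print(header, "exists" if os.path.exists(header) else "missing")
```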
> That's about it beyond VM tuning and expectations. If you have an i3 with a
> 5200RPM slow laptop drive, it is going to take a while. We say 30-45 minutes
> and we all have nice computers with tuned VMs.
> And Windows builds are WAY slower and I can't even give you an estimate at
> this point. I just walk away.
Building the tools is slower but you can get the overhead to be just the POSIX
Cygwin/MSYS overhead and not much more. A build of libbsd with native Windows
tools should be fast and in my experience it is. To me this is more important than
the tools build time and we should not lose sight of this. My hope is users
spend more time building applications than tools.
> So this was just "here's what we know about what to expect and what you can
> do to help".
Seems like a good idea.