minGW-get, gcc-testing, dpkg-testing, 32-bit vs. 64-bit hosts, and autotools versioning

Rempel, Cynthia cynt6007 at vandals.uidaho.edu
Fri Mar 22 19:50:01 UTC 2013


I'm not really clear on what the argument here is about.  If we want prepackaged software for MSYS/MinGW, it would seem reasonable to explore the MinGW packaging software.

One MinGW packaging tool I came across was mingw-get, which fetches and installs packages; further research is needed to determine what packaging software would be used to build packages for mingw-get.

The directions for testing the gcc toolchain can be found at http://gcc.gnu.org/simtest-howto.html in the testing section. (Of course, you wouldn't check out the sources, but would instead follow the directions for building the toolchain.)  For targets, I would suggest starting with popular ones, such as ARM, and then testing a more challenging target, such as m68k, to identify errors. After that, build the package and run your equivalent of:

dpkg -i package.deb 

On a newly installed OS in a VM -- it's a really easy smoke test.
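A minimal version of that smoke test can be scripted. The package name and contents below are placeholders, built on the spot so the commands are self-contained; the real test would of course install the actual toolchain .deb on the fresh VM:

```shell
# Build a throwaway .deb so the smoke test is reproducible; in practice
# you would use the real toolchain package (all names here are made up).
mkdir -p pkg/DEBIAN pkg/usr/bin
printf 'Package: rtems-demo\nVersion: 1.0\nArchitecture: all\nMaintainer: nobody\nDescription: smoke-test demo\n' > pkg/DEBIAN/control
printf '#!/bin/sh\necho ok\n' > pkg/usr/bin/rtems-demo
chmod +x pkg/usr/bin/rtems-demo
dpkg-deb --build pkg rtems-demo.deb

# Inspect what would be installed; on the VM you would instead run
# "dpkg -i rtems-demo.deb" and then check the tools actually execute.
dpkg-deb --contents rtems-demo.deb
```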

32-bit Windows binaries run on 64-bit Windows systems, so if there is only enough personnel to support one host, I'd suggest the 32-bit Windows host is worth considering.

Also, when I replaced all instances of Autoconf 2.69 with Autoconf 2.62 and all instances of Automake 1.12.6 with Automake 1.11.2, RTEMS bootstrapped okay on Ubuntu 12.10 without special autotools.  Perhaps making that change in rtems.git would be worth exploring?
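That version swap can be done mechanically with sed. The file below is a stand-in for whichever files in the tree carry the version requirements; the actual file names and macro arguments in rtems.git may differ:

```shell
# Illustrative stand-in for a file carrying the autotools version
# requirements; the real files in rtems.git may look different.
printf 'AC_PREREQ([2.69])\nAM_INIT_AUTOMAKE([1.12.6])\n' > configure.ac

# Downgrade the required versions as described above (GNU sed syntax).
sed -i -e 's/2\.69/2.62/' -e 's/1\.12\.6/1.11.2/' configure.ac
cat configure.ac   # now requires Autoconf 2.62 and Automake 1.11.2
```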

Perhaps someone could explore the XML used to package the old autotools and buildtools for mingw-get, see whether it's too much trouble to update the versions and targets, and then write directions so someone in our user community could maintain the packages as well?

Hope this helps!
Cynthia Rempel
From: rtems-devel-bounces at rtems.org [rtems-devel-bounces at rtems.org] on behalf of Ralf Corsepius [ralf.corsepius at rtems.org]
Sent: Thursday, March 21, 2013 8:35 AM
To: Gedare Bloom; rtems-devel at rtems.org
Subject: Re: Autoconf 2.69 and mingw

On 03/21/2013 03:36 PM, Gedare Bloom wrote:
> On Thu, Mar 21, 2013 at 10:13 AM, Ralf Corsepius
> <ralf.corsepius at rtems.org> wrote:
>> On 03/21/2013 02:43 PM, Thomas Dörfler wrote:
>>> Hi,
>>> I don't know about mingw auto* packaging, but for me this is another
>>> indication that we should in the future leave the path of basing our
>>> tools on the host packaging system.
>> This is impossible for scripted languages/tools, such as the autotools when
>> canadian cross building.
> Why is it impossible?

You are guessing at a target host's features by running a script on the build host.

In this case: you are running scripts on Linux and interpreting the values
they return as being valid on Windows.

This works as long as host and build host are similar enough. As this is
not always the case, it requires some amount of wild-guessing.

>>> Looking at this auto* version issue, the discussion about libraries not
>>> being present on a certain host etc, we should move to provide our tools
>>> as packaging-independent tarballs.
>> The problem with mingw is there not being any viable upstream distribution,
>> which can be used as a basis for building.
>> That said, there are two competing mingw systems: mingw32 and mingw32-w64.
>> The former is pretty much dead, while the latter is actively maintained and
>> moving forward. Also, mingw32-w64-i686 is supposed to be compatible with
>> mingw32.
>> That said, the origin of mingw issues is typically the user, who is not
>> able to set up the infrastructure underneath.
>> What is lacking for mingw is an rtems toolchain installer, nothing more and
>> nothing less.
> What would such an installer require?

Basically, somebody would have to write and test one.

The primary underlying problem is integrating "non-supplied" binaries. As
there are no real mingw distros, in the worst case one would end up building
the whole infrastructure from scratch, i.e., implementing a mingw distro.

>>> This means:
>>> - building GCC and tools with statically linked libraries (no more
>>> dependency on DLLs except libc)
>> This is a very silly and naive idea, one I am used to being confronted
>> with by inexperienced newcomers. Makes me wonder why it comes from you.
> I don't understand---what is wrong with a static-linked toolchain?

Statically linked binaries are a nightmare, a security risk, and a cause of
bloat. In some cases, it's not even possible to link statically.

In short, there are reasons why all major Linux distros have banned them
and are fighting them.

>>> What I would expect from this would be:
>>> - have ONE tarball fit all Linux flavours (and for each major host family)
>> Glibc alone is moving at a pace that makes this hardly possible.
> What does that have to do with anything?

A lot, ...

> We should have a
> fixed/working version of the library that is tested and used by
> many... Right now we rely on the upstream distro to provide it through
> package dependencies in RPMs, and users who do not rely on the RPM
> infrastructure will be stuck with whatever host tool they use and do
> not know what everyone else does. Would it not be better for us to
> specify a version of e.g. glibc, gcc, binutils, for the host, and
> newlib, gcc, binutils for the target?

No. This is not applicable. Native tools and libc are none of RTEMS's
business -- this is what system integration is about and what packaging
tools take care of.

This "link everything statically" is naive Windows-newbie school.

>>> - have repeatable results when compiling even on different hosts
>> Well, yes, all bugs would be hard wired.
> This is much better than bugs showing up only under random host/build
> combinations!
Are you aware of any?

rtems-devel mailing list
rtems-devel at rtems.org
