Autoconf 2.69 and mingw

Ralf Corsepius ralf.corsepius at
Thu Mar 21 15:35:03 UTC 2013

On 03/21/2013 03:36 PM, Gedare Bloom wrote:
> On Thu, Mar 21, 2013 at 10:13 AM, Ralf Corsepius
> <ralf.corsepius at> wrote:
>> On 03/21/2013 02:43 PM, Thomas Dörfler wrote:
>>> Hi,
>>> I don't know about mingw auto* packaging, but for me this is another
>>> indication that we should in the future leave the path of basing our
>>> tools on the host packaging system.
>> This is impossible for scripted languages/tools, such as the autotools when
>> canadian cross building.
> Why is it impossible?

You are guessing at a target host's features by running a script on the build host.

In this case: you are running scripts on Linux and interpreting the values they return as being valid on Windows.

This works as long as host and build host are similar enough. As this is 
not always the case, it requires some amount of wild-guessing and 
manual overrides.
>>> Looking at this auto* version issue, the discussion about libraries not
>>> being present on a certain host etc, we should move to provide our tools
>>> as packaging-independent tarballs.
>> The problem with mingw is that there is no viable upstream distribution
>> which can be used as a basis for building.
>> That said, there are two competing mingw systems: mingw32 and mingw32-w64.
>> The former is pretty much dead, while the latter is actively maintained and
>> moving forward. Also, mingw32-w64-i686 is supposed to be compatible with
>> mingw32.
>> That said, the origin of mingw issues typically is the user, who is not
>> able to set up the infrastructure underneath.
>> What mingw lacks is an RTEMS toolchain installer, nothing more and
>> nothing less.
> What would such an installer require?

Basically, somebody would have to write and test one.

The primary underlying problem is integrating "non-supplied" binaries. 
As there are no real mingw distros, in the worst case one would end up 
building the whole infrastructure from scratch, i.e. implementing a 
mingw distro.

>>> This means:
>>> - building GCC and tools with statically linked libraries (no more
>>> dependency on DLLs except libc)
>> This is a very silly and naive idea, one I am used to being confronted
>> with by inexperienced newcomers. It makes me wonder why it comes from you.
> I don't understand---what is wrong with a static-linked toolchain?

Statically linked binaries are a nightmare, a security risk, and a cause 
of bloat. In some cases, it is not even possible to link statically.

In short, there are reasons why all major Linux distros have banned them 
and are fighting them.
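The security argument can be seen with standard tools; a minimal sketch, using /bin/sh purely as a convenient example binary:

```shell
# A dynamically linked binary records its shared-library dependencies,
# so a distro security update to libc.so fixes all such binaries in
# place.  A statically linked binary has that code frozen in at build
# time and must be rebuilt and reshipped for every library fix.
# 'ldd' lists the recorded dependencies; on a static binary it would
# instead report "not a dynamic executable".
ldd /bin/sh
```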

>>> What I would expect from this would be:
>>> - have ONE tarball fit all Linux flavours (and for each major host family)
>> Glibc alone is moving at a pace that makes this hardly possible.
> What does that have to do with anything?

A lot, ...

> We should have a
> fixed/working version of the library that is tested and used by
> many... Right now we rely on the upstream distro to provide it through
> package dependencies in RPMs, and users who do not rely on the RPM
> infrastructure will be stuck with whatever host tool they use and do
> not know what everyone else does. Would it not be better for us to
> specify a version of e.g. glibc, gcc, binutils, for the host, and
> newlib, gcc, binutils for the target?

No. This is not applicable. Native tools and libc are none of RTEMS's 
business - this is what system integration is about and what packaging 
tools take care of.

This "link everything statically" idea is naive Windows-newbie school.

>>> - have repeatable results when compiling even on different hosts
>> Well, yes, all the bugs would be hard-wired.
> This is much better than bugs showing up only under random host/build
> combinations!

Are you aware of any?

More information about the devel mailing list