Problem report: Struct aliasing problem causes Thread_Ready_Chain corruption in 18.104.22.168
peerst at gmail.com
Wed Nov 22 10:28:04 UTC 2006
The general question of strict aliasing IMHO is divided in two parts:
1. Optimization:
-fstrict-aliasing is *not* a style option but an optimization option. It is
listed in the "Optimize Options" chapter of the gcc manual.
2. Compiler portability:
If someone has to use a C compiler that always assumes strict aliasing and
has no switch to turn it off (however unlikely that case), the code has to
be strict-aliasing clean anyway.
So mainly it's an optimization issue. Let's look at it from this perspective:
There are certain manual optimizations often seen in operating system kernels
and device drivers that do not adhere to the strict-aliasing assumption, e.g.:
* Putting chain pointers at certain positions in structs and casting them to
the general type to use generic chain handling routines.
* Other "inheriting" pointer casts and union punning. The trouble with these
is that the compiler does not know anything about what type "inherits" from
which.
* Clever tricks like the 3-pointer chain header to save memory while keeping
the chain handling code free from special cases at the ends of chains.
The gains of these coding techniques stand against the possibly better code
the compiler can generate if it can really assume strict aliasing.
So the questions now become:
a.) Can we have both optimizations at the same time?
b.) What gets us more optimization gains, the coding techniques or the
better compiler output?
Just as an example: when I need generic chaining for "objects" I usually use
something along these lines:

    struct node {
        struct node *next;
        struct node *prev;
        void *obj;
    };

with a head element where obj == NULL.
What I get is somewhat clean code (except for the void * of course) that
adheres to strict aliasing.
What I pay is:
* One more indirection getting from a node to the object.
* Having to check for obj == NULL when getting to the object (usually not
much of a problem since it is done implicitly while traversing chains).
* One more pointer per node.
* If I need to get from object to node, an additional pointer is needed per
object.
So would the gains merit the costs for an embedded realtime operating system?
And if yes, what would the best transition path be?
From the point of view of my client (thousands of units in an industrial
manufacturing environment, some running 24*7) it would definitely not be
switching on the optimization and having the system break in subtle ways in
the field.
In order to avoid losing all optimizations, to have some transition path,
and even to have the cake and eat it too, a solution could be:
Sort the source files into three piles:
1. Absolutely strict-aliasing clean, without even any "inheritance".
2. Breaking some strict-aliasing rules but believed to work around the
compiler issues.
3. Proven to need -fno-strict-aliasing for working correctly.
Have the user choose how much risk she would take and decide from this
whether to set -fno-strict-aliasing on case 2 or not.
Best would be if inlining were taken into account automatically, but it
could also be taken into account manually.
What would we get from this:
* The best mix of manual vs. compiler optimizations without jeopardizing
reliability in the field.
* A smooth file-by-file transition path to stricter code.
So if this looks nice we could look into how to achieve this:
* Does gcc help us (declaring aliasing assumptions in the code)?
* Can the make system handle this? Is it feasible to add a facility to set
optimization flags/warnings per file or at least per group of files?
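For the make side, GNU make's target-specific variables would allow something
along these lines. This is only a hypothetical fragment; the file names and
the pile assignment are made up:

```make
# Common optimization flags for all piles
CFLAGS_COMMON = -O2

# Pile 3: files proven to need -fno-strict-aliasing (names are examples)
NO_ALIAS_SRCS = chain.c heap.c

# Only the objects built from those files get the extra flag
$(NO_ALIAS_SRCS:.c=.o): CFLAGS_EXTRA = -fno-strict-aliasing

%.o: %.c
	$(CC) $(CFLAGS_COMMON) $(CFLAGS_EXTRA) -c $< -o $@
```

Whether the existing build infrastructure can express this per file or per
group of files is exactly the open question.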
So what do you think about this?