[PATCH 2/2] rtems: Add more clock tick functions

Pavel Pisa ppisa4lists at pikron.com
Sun Aug 24 09:42:53 UTC 2014


Hello Sebastian,

On Sunday 24 of August 2014 10:54:15 Sebastian Huber wrote:
> On 08/22/2014 10:14 PM, Pavel Pisa wrote:
> >>> > > +RTEMS_INLINE_ROUTINE bool rtems_clock_ticks_before(
> >>> > > +  rtems_interval ticks
> >>> > > +)
> >>> > > +{
> >>> > > +  return ( (int32_t) ticks - (int32_t) _Watchdog_Ticks_since_boot )
> >>> > > +    > 0;
> >> >
> >> > Why not just return _Watchdog_Ticks_since_boot < ticks;
>
> Yes, this doesn't work if the counter overflows.
>
> > For sure not; to have correctly working overflow arithmetic
> > it is required to use subtraction of unsigned types and then
> > to narrow the result to a signed type of the same size.
> >
> > Overflow of a subtraction of signed types is undefined according
> > to the C standard. An implementation that raises an exception on it
> > is conforming but seldom used (it can be enabled for MIPS on GCC).
> > But comparisons are often optimized. So for example the next code
>
> Conversion of too large unsigned integers to signed integers is also
> undefined.  I assume two's complement arithmetic here.

I do not remember such uncertainty from reading the C99 standard;
it should be checked. What I am sure about is that the C standard
defines behavior equivalent to two's complement arithmetic for
shifts and overflows of unsigned types, and the conversion from
signed to unsigned has to take that into account even if the
signed representation is not two's complement. And we know that
on all RTEMS targets signed is two's complement. So the standard
declares the conversion from signed to unsigned as safe.
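
Just to illustrate the guaranteed part, a minimal standalone sketch
(arbitrary values, not related to the quoted code below):

  #include <assert.h>
  #include <stdint.h>

  int main( void )
  {
    /* Unsigned arithmetic wraps modulo 2^32; this is fully defined. */
    uint32_t newer = 0x00000010u;
    uint32_t older = 0xFFFFFFF0u;
    assert( newer - older == 0x20u );

    /* Conversion from signed to unsigned is defined as reduction
       modulo 2^32, independent of the signed representation. */
    int32_t  neg = -1;
    uint32_t u   = (uint32_t) neg;
    assert( u == 0xFFFFFFFFu );

    return 0;
  }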

> > int fnc(int32_t x)
> > {
> >    if ((x + 0x7fffffff) < -0x10000000) {
> >      return 1;
> >    } else {
> >      return 0;
> >    }
> > }
> >
> > can be legally optimized to
> >
> > int fnc(int32_t x)
> > {
> >    return 0;
> > }
> >
> > because for any valid x in the range <-0x80000000;0x7fffffff>
> > the arithmetic value of the sum cannot be smaller than -1.
> > And this kind of optimization is seen in reality.
> >
> > So even Sebastian's code above, which tries to prevent the
> > overflow case, can be misoptimized and broken.
> >
> > Correct is
> >
> >    return ( (int32_t) ( (uint32_t) ticks - (uint32_t)
> > _Watchdog_Ticks_since_boot ) ) > 0;
>
> This version works also with two's complement arithmetic.  I don't think
> the compiler can optimize my version in the same way as your example,
> since _Watchdog_Ticks_since_boot is a global volatile variable.  Linux
> uses the same approach for time_before().

But your code subtracts int32_t values and that is not safe even if
one of the operands is volatile. Once the volatile value has been
read into a register it is processed as a regular value for that
occurrence in the expression. So for a comparison with a constant
(not very probable for time) it can behave exactly as I have
described. It is even worse if it is compared against a non-constant
such as ((int32_t)start_time + 1000), because then the compiler
assumes that this subexpression never overflows and it does not
evaluate the final subtraction with wrap-around in mind but turns
the test into a plain signed less/greater comparison.
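
To make that concrete, a sketch of the pattern I mean (hypothetical
names, not the actual RTEMS code):

  #include <stdint.h>

  static volatile int32_t ticks_now;   /* stand-in for the tick counter */

  int timeout_expired( int32_t start_time )
  {
    /* Both the addition and the subtraction are signed, so the
       compiler may assume they never overflow and rewrite the test
       as a plain ticks_now >= start_time + 1000, which breaks near
       the wrap-around point. */
    int32_t deadline = start_time + 1000;
    return ( ticks_now - deadline ) >= 0;
  }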

There have been such faults in the Linux kernel; they showed up when
GCC started to narrow the safety margin between common expectations
and the actual standard requirements in order to generate yet another
bit of faster code. Please use unsigned types for overflow arithmetic.
The referenced Linux macro uses unsigned as well, and it even generates
a compile-time error if the input types are not unsigned long:
  typecheck(unsigned long, a)
is stricter than
  (unsigned long)a
because it forces programmers to use only unsigned types for overflow
arithmetic.

#define time_after(a,b)         \
        (typecheck(unsigned long, a) && \
         typecheck(unsigned long, b) && \
         ((long)((b) - (a)) < 0))
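
For the RTEMS helper the same unsigned-subtraction approach would look
roughly like this (only a sketch; I use static inline instead of
RTEMS_INLINE_ROUTINE and a local dummy counter so that it compiles
standalone):

  #include <stdbool.h>
  #include <stdint.h>

  typedef uint32_t rtems_interval;               /* assumed 32-bit here */
  static volatile uint32_t _Watchdog_Ticks_since_boot;  /* dummy here */

  static inline bool rtems_clock_ticks_before( rtems_interval ticks )
  {
    /* The subtraction is done on uint32_t, so it wraps modulo 2^32
       and is well defined; only the final narrowing to int32_t relies
       on two's complement, which holds on all RTEMS targets. */
    return (int32_t) ( (uint32_t) ticks - _Watchdog_Ticks_since_boot ) > 0;
  }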

Generally it is even better to define all types which are expected
to be used with cyclic arithmetic as unsigned, and then to have
another corresponding type of the same size for the differences.
That way you keep the sizes in sync and you do not switch between
signed and unsigned except as the result of the operations
unsigned + signed -> unsigned and unsigned - unsigned -> signed,
which are very well defined. The only subtlety is that the signed
intermediate result is in fact one bit larger than the unsigned
types until it is assigned to a variable or narrowed by an explicit
type conversion.
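
As a sketch of that convention (hypothetical names, 32-bit values
assumed):

  #include <stdbool.h>
  #include <stdint.h>

  typedef uint32_t my_ticks_t;      /* wrapping value, modulo 2^32 */
  typedef int32_t  my_ticksdiff_t;  /* signed difference, same size */

  static inline my_ticksdiff_t my_ticks_sub( my_ticks_t a, my_ticks_t b )
  {
    /* subtract as unsigned first, then narrow to the signed type
       (two's complement assumed, as on RTEMS targets) */
    return (my_ticksdiff_t) ( a - b );
  }

  static inline bool my_ticks_after( my_ticks_t a, my_ticks_t b )
  {
    return my_ticks_sub( a, b ) > 0;
  }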

I point to my own code because I know it best.

In my sources I try to keep and distinguish two pairs of types.
The first pair
  typedef unsigned long ul_mstime_t;
  typedef signed long ul_msdiff_t;
is for time in known units. It is a little unfortunate that it is
milliseconds based, which may not be precise enough these days.

And the internal pair
  typedef ul_mstime_t ul_htim_time_t;
  typedef ul_msdiff_t ul_htim_diff_t;
can use ticks or another scaled representation on some targets.

http://sourceforge.net/p/ulan/ulut/ci/master/tree/ulut/ul_htimdefs.h
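
A typical check with these types then looks roughly like this
(mstime_reached is only a hypothetical example name, not taken from
the header above):

  #include <stdbool.h>

  typedef unsigned long ul_mstime_t;
  typedef signed long ul_msdiff_t;

  /* true once "when" has been reached at time "now", wrap-safe */
  static inline bool mstime_reached( ul_mstime_t now, ul_mstime_t when )
  {
    return (ul_msdiff_t) ( now - when ) >= 0;
  }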

If explicit well-defined types are not used, then it is probably
safer to use my more complex macro. It has one assumption beyond
what the standard declares: that there is no type whose sizeof is
the same as that of a wider type while some bits of the wider type
are unused in the narrower one. Other than that, exotic cases
(e.g. sizeof(char) == sizeof(long)) should not cause problems.

Best wishes,

              Pavel

