memcpy performance

Joel Sherrill joel at
Tue Dec 9 13:44:40 UTC 1997

On Tue, 9 Dec 1997, Chris Johns wrote:

> Wondering what dependence RTEMS (3.6.0)  had on this so I checked the
> code:
>  $ fgrep memcpy `find . -name "*.inl" -print`
> ./exec/score/inline/coremsg.inl:#include <string.h>   /* needed for
> memcpy */
> ./exec/score/inline/coremsg.inl:  memcpy(destination, source, size);
> ./exec/score/macros/coremsg.inl:  memcpy( _destination, _source, _size)
> The actual routine:
> RTEMS_INLINE_ROUTINE void _CORE_message_queue_Copy_buffer (
>   void      *source,
>   void      *destination,
>   unsigned32 size
> )
> {
>   memcpy(destination, source, size);
> }
> So it looks like the message queue core uses byte by byte copies for
> messages.
> Is this a good thing considering the fast and large bus sizes of newer
> CPUs ?

I guess you already knew the answer to this rhetorical question. 
Byte-by-byte memcpy implementations have stunk as long as I have been
doing embedded systems.  Even the lowly 80186 could do better. :)

The key is how to optimize this and what assumptions you can make about
the alignment and size of the buffers involved.
Two approaches:

1.  Fix memcpy in newlib

2.  Don't use memcpy in RTEMS.

1. is more generally useful.  2. addresses the immediate problem at hand.

Before RTEMS had variable length messages, they were copied 32 bits at a
time.
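That fixed-size scheme might be sketched like this (a hypothetical illustration, not the actual old RTEMS code; it assumes both buffers are word aligned and the length is rounded up to a multiple of 4 bytes):

```c
#include <stdint.h>
#include <stddef.h>

/* Illustrative sketch only: copy 'size' bytes 32 bits at a time.
 * Assumes both pointers are 32-bit aligned and 'size' is a
 * multiple of sizeof(uint32_t) -- the preconditions fixed-size
 * message buffers can guarantee.
 */
static void copy_words( void *destination, const void *source, size_t size )
{
  uint32_t       *dst   = destination;
  const uint32_t *src   = source;
  size_t          words = size / sizeof( uint32_t );

  while ( words-- > 0 )
    *dst++ = *src++;
}
```

With those preconditions the inner loop moves a full bus width per iteration instead of a single byte.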
I don't think you can assume anything about the alignment of the message
buffer in "user space".  And since the _CORE_message_queue_Copy_buffer
routine is used to copy both from and to user space, it must account for
marginal alignment in both directions.
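A copy routine that handles marginal alignment might look something like the sketch below (hypothetical names, not RTEMS or newlib code): it byte-copies until the pointers reach word alignment, does the bulk 32 bits at a time, and falls back to byte copies entirely when the two pointers can never be aligned together.

```c
#include <stdint.h>
#include <stddef.h>

/* Illustrative sketch: copy that tolerates arbitrary alignment on
 * either side.  Word copies are only possible when both pointers
 * have the same offset within a 32-bit word.
 */
static void copy_buffer( void *destination, const void *source, size_t size )
{
  unsigned char       *d = destination;
  const unsigned char *s = source;

  if ( ( (uintptr_t) d % sizeof( uint32_t ) ) ==
       ( (uintptr_t) s % sizeof( uint32_t ) ) ) {

    /* Byte-copy the misaligned head until 'd' is word aligned. */
    while ( size > 0 && ( (uintptr_t) d % sizeof( uint32_t ) ) != 0 ) {
      *d++ = *s++;
      size--;
    }

    /* Bulk of the copy, 32 bits at a time. */
    while ( size >= sizeof( uint32_t ) ) {
      *(uint32_t *) d = *(const uint32_t *) s;
      d    += sizeof( uint32_t );
      s    += sizeof( uint32_t );
      size -= sizeof( uint32_t );
    }
  }

  /* Tail bytes, or the whole copy when alignments never match. */
  while ( size-- > 0 )
    *d++ = *s++;
}
```

This is the general shape most optimized memcpy implementations take; a production version would also unroll the word loop and may use even wider transfers where the CPU provides them.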

The best thing is probably to upgrade the newlib memcpy routine.  newlib
has a new maintainer and there is activity again on improving it. Some
CPUs have optimized memcpy routines already in newlib.  I will be happy to
pass this one on to them. 

Joel Sherrill                    Director of Research & Development
joel at                 On-Line Applications Research
Ask me about RTEMS: a free RTOS  Huntsville AL 35805
   Support Available             (205) 722-9985
