zero_copy in RTEMS

Till Straumann strauman at
Wed Feb 10 14:05:56 UTC 2010


The NFS client (libfs/src/nfsclient/) uses a similar trick to avoid
at least one copy step (could probably be further optimized).

Normally, the code would have to do two copy operations:

user code calls write();
NFS XDR-encodes user data into memory buffer, calls 'sendto'
TCP/IP stack copies XDR memory buffer into mbufs

user code calls 'read()'
recvfrom copies from mbufs into XDR buffer
nfsclient XDR-decodes from XDR buffer into user buffer.
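The two copies on the write path can be sketched in plain C, with memcpy standing in for each copy step (buffer and function names here are illustrative, not real stack APIs):

```c
#include <string.h>

/* Illustrative two-copy write path: user data is XDR-encoded into a
 * staging buffer (copy 1), which the stack then copies into an mbuf
 * cluster (copy 2).  For a plain write() payload the "encoding" is
 * just a byte copy. */
void two_copy_write(const char *user_buf, char *mbuf_cluster, size_t n)
{
    char xdr_buf[256];                 /* XDR staging buffer */

    memcpy(xdr_buf, user_buf, n);      /* copy 1: XDR-encode (trivial copy) */
    memcpy(mbuf_cluster, xdr_buf, n);  /* copy 2: stack copies into cluster */
}
```

The read path mirrors this: one copy from the mbufs into the XDR buffer, one from the XDR buffer into the user's buffer.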

The nfsclient avoids one of each of these steps:

user code calls write();
NFS XDR-encodes into buffer, points mbuf to buffer and hands
mbuf to TCP/IP stack (sosend())

user code calls read();
NFS receives mbufs from TCP/IP stack (soreceive()), XDR-decodes
out of mbufs into user buffer.
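The trick of pointing an mbuf at an external buffer can be sketched with a simplified mock of the mbuf (the real BSD-derived stack does this via the m_ext external-storage mechanism, which also carries a reference count and a free routine; the struct and names below are illustrative, not the actual RTEMS stack API):

```c
#include <stddef.h>

/* Simplified stand-in for the BSD mbuf; the real structure carries an
 * m_ext descriptor with a reference count and free routine. */
struct mbuf {
    struct mbuf *m_next;   /* next mbuf in the chain */
    char        *m_data;   /* start of data */
    size_t       m_len;    /* amount of data */
};

/* Point an mbuf at an existing buffer instead of copying into a
 * cluster -- the step the nfsclient performs before handing the
 * result to sosend(). */
void mbuf_attach(struct mbuf *m, char *buf, size_t len)
{
    m->m_next = NULL;
    m->m_data = buf;   /* no memcpy: the mbuf references the caller's memory */
    m->m_len  = len;
}
```

In the real stack the free routine matters: the XDR buffer must stay valid until the data has actually left the device, so it can only be released from the mbuf's free callback.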

Since XDR-encoding the 'write' payload is a trivial copy operation, the
'write()' path could be further improved by XDR-encoding the header
information into mbufs and pointing a final mbuf at the user buffer,
then handing the chain to the TCP/IP stack.
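A sketch of such a chain, again using an illustrative mbuf mock (not the real stack types): the small XDR header is copied into its own mbuf, while the last mbuf in the chain simply points at the user's buffer, so the payload is never copied.

```c
#include <stddef.h>
#include <string.h>

/* Simplified mbuf mock (illustrative; not the real stack types). */
struct mbuf {
    struct mbuf *m_next;
    char        *m_data;
    size_t       m_len;
};

/* Build a two-mbuf write chain: the (small) XDR-encoded header is
 * copied into its own mbuf, while the final mbuf merely points at the
 * user's buffer. */
void build_write_chain(struct mbuf *m_hdr, struct mbuf *m_payload,
                       char *hdr_storage, const char *hdr, size_t hdr_len,
                       char *user_buf, size_t user_len)
{
    memcpy(hdr_storage, hdr, hdr_len);   /* only the header is copied */
    m_hdr->m_data = hdr_storage;
    m_hdr->m_len  = hdr_len;
    m_hdr->m_next = m_payload;

    m_payload->m_data = user_buf;        /* zero-copy: points at user data */
    m_payload->m_len  = user_len;
    m_payload->m_next = NULL;
}
```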

Since the user always wants to 'read' into her own buffer,
at least one copy step is always required when read()ing.

-- Till

Lu Chih Wen wrote:
> Hello all,
> I have an idea about the implementation of zero-copy on RTEMS's TCP/IP 
> stack.
> The main idea of zero-copy is to reduce the number of copies from 
> mbuf->m_data to the application's buffer,
> so I tried allocating the mbuf chain in the application and swapping 
> the attached cluster buffer between the protocol stack
> and the application at the socket layer. I have implemented this on my 
> test platform and measured a large performance increase.
> Has anyone done this before? I hope someone can point out more details 
> that I should be aware of.
> RX sequence:
> MAC HW fills the driver's allocated mbuf->m_data (cluster buffer) -> TCP/IP 
> -> swap cluster buffer at the socket layer -> return to application.
> TX sequence:
> Application fills the data into mbuf->m_data (cluster buffer) -> swap 
> cluster buffer at the socket layer -> TCP/IP -> MAC driver.
> Yours sincerely
> Rudolph.Lu
