Caches on G2 PowerPC/network problem

Till Straumann strauman at slac.stanford.edu
Sun Dec 14 22:11:21 UTC 2008


Leon,

Many systems support cache-snooping hardware
which takes care of cache coherency issues for you.
First you may want to check if your system/board
does support snooping and make sure the support
is enabled.

That said: if you must maintain cache coherency
in software then the standard procedure is:


send_packet:

1) dcbf (flush to memory + invalidate)
   the entire buffer.
2) hand the packet over to DMA (often
   a packet descriptor also needs to be
   flushed -- if the DMA engine passes
   status info back then the descriptor
   needs to be invalidated before reading
   it back -- see below).

receive_packet:

1) allocate a packet buffer
2) dcbi (invalidate) the entire buffer
3) hand the buffer to DMA (the remark above
   about the descriptor also applies here)
4) upon reception, pass the buffer to the user

If you can use 'dcbi' then you can exchange
steps 2 and 3.

If you want to avoid dcbi then you can use
dcbf instead (with a performance penalty,
since memory writes may occur). However, if
you do so then you MUST NOT exchange
steps 2 and 3, i.e., you must flush/invalidate
the buffer BEFORE handing it to the
DMA engine - otherwise the dcbf
operation might overwrite newly
received data.
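The line-by-line walk over a buffer looks the same whether you flush with dcbf or invalidate with dcbi. A sketch, assuming a 32-byte line size, with the actual instruction stubbed out so the address arithmetic is portable (on real hardware the stub body would be `__asm__ volatile ("dcbf 0,%0" : : "r"(addr) : "memory")`, and a `sync` must follow before the DMA is started):

```c
#define CACHE_LINE_SIZE 32   /* assumption: 32-byte lines on a G2/603e-class core */

/* Counts lines "flushed" -- stands in for the dcbf side effect in this sketch. */
static int flushed_lines;

/* On real hardware this would execute
 *   __asm__ volatile ("dcbf 0,%0" : : "r"(addr) : "memory");
 * (or "dcbi 0,%0" for an invalidate-only variant, where permitted). */
static void flush_line(unsigned long addr)
{
    (void)addr;
    flushed_lines++;
}

/* Flush every cache line overlapping [start, start + len). */
static void dcache_flush_range(unsigned long start, unsigned long len)
{
    unsigned long addr = start & ~(unsigned long)(CACHE_LINE_SIZE - 1);
    unsigned long end  = start + len;

    for (; addr < end; addr += CACHE_LINE_SIZE)
        flush_line(addr);

    /* On real hardware, issue 'sync' here before handing the buffer to DMA. */
}
```

Note the down-alignment of the start address: this is exactly why unaligned buffers are dangerous -- the first and last lines may be shared with neighboring data.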

Note that not having 'dcbi' causes a
subtle problem:

some drivers have a task that scans the
descriptor ring until it encounters a
descriptor which is still owned by
the chip/DMA (RX) or has not yet
been sent (TX). In this case you'd
have to dcbi that descriptor before
checking/reading it again. I'm not sure
what 'dcbf' would do in the case where
- the cache line is present and not dirty,
- the memory contents (descriptor) were changed
  by HW/DMA, and
- the CPU executes dcbf.
I don't remember whether this scenario just invalidates
the line, writes the old contents back,
or gives undefined results.
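The descriptor-ring scan described above might look like the following sketch. The descriptor layout and the DESC_OWN bit are hypothetical (real layouts are chip-specific), and the invalidate is stubbed out -- on real hardware it would be a dcbi/dcbf of the descriptor's cache line:

```c
/* Hypothetical RX descriptor layout; real layouts are chip-specific. */
struct rx_desc {
    volatile unsigned long status;   /* ownership/status bits, written by DMA */
    unsigned long          buf;      /* buffer address */
};

#define DESC_OWN 0x80000000UL        /* hypothetical "owned by chip" bit */

/* Stub: on real hardware, dcbi (or dcbf) the descriptor's cache line so the
 * next load sees what the DMA engine wrote, not a stale cached copy. */
static void invalidate_desc(struct rx_desc *d)
{
    (void)d;
}

/* Scan the ring; return the number of descriptors completed by the DMA. */
static int rx_scan(struct rx_desc *ring, int n)
{
    int i;

    for (i = 0; i < n; i++) {
        invalidate_desc(&ring[i]);   /* discard any stale cached copy first */
        if (ring[i].status & DESC_OWN)
            break;                   /* still owned by the chip -- stop here */
        /* ... hand ring[i].buf up the stack ... */
    }
    return i;
}
```

This is also why descriptors should be cache-line aligned (point a below): invalidating one descriptor's line must not clobber a neighbor the CPU has dirtied.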

In any case, if you find that you have to
manage cache coherency in software then
you probably must ensure

 a) that descriptors are cache-line aligned
 b) RX buffers are cache-line aligned.

It would probably make sense to cache-align
TX buffers as well -- you can probably simply
make sure that the mbuf and mbuf-cluster pools
are aligned.

For RX packets you can also use the following
strategy:

1) Allocate a buffer - usually, RX buffers are allocated as mbuf clusters
    (1 cluster = 2k).
2) up-align the data area (m->m_data) if necessary
3) reduce the mbuf length by the amount used in step 2, plus
    a margin to protect the end of the buffer.

E.g.,

MGETHDR(m, M_DONTWAIT, MT_DATA);
MCLGET(m, M_DONTWAIT);

/* remember original data area */
orig_area = mtod(m, unsigned long);
/* compute number of bytes used up by up-aligning */
n_bytes = ((orig_area + ALIGNMENT - 1) & ~(ALIGNMENT - 1)) - orig_area;

/* left as an exercise: reduce the usable length so that flushing the last
   'legal' line in the buffer does not write past the end of the buffer
 */

/* adjust mbuf by the alignment slack computed above */
m->m_data += n_bytes;
m->m_len = m->m_pkthdr.len = MCLBYTES - n_bytes;
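One way to do the 'exercise' above (just a sketch -- ALIGNMENT is the assumed cache-line size, and the rounding policy is a choice, not the only option) is to round the remaining length down to a whole number of cache lines, so the last line touched by dcbf/dcbi lies entirely inside the cluster:

```c
#define ALIGNMENT 32      /* assumed cache-line size */
#define MCLBYTES  2048    /* standard mbuf cluster size */

/* Round a pointer value up to the next ALIGNMENT boundary. */
static unsigned long align_up(unsigned long p)
{
    return (p + ALIGNMENT - 1) & ~(unsigned long)(ALIGNMENT - 1);
}

/* Usable length of a cluster whose data area starts at orig_area:
 * subtract the bytes lost to up-alignment, then round down to a whole
 * number of cache lines so flushing the last line stays in bounds. */
static unsigned long usable_len(unsigned long orig_area)
{
    unsigned long n_bytes = align_up(orig_area) - orig_area;

    return (MCLBYTES - n_bytes) & ~(unsigned long)(ALIGNMENT - 1);
}
```

E.g., a cluster whose data area starts 4 bytes past a line boundary loses 28 bytes at the front and up to ALIGNMENT-1 bytes at the back.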


HTH
-- Till





Leon Pollak wrote:
> Hello.
>
> Gurus in PPC and networking, please, help!
>
> The G2 manual states that use of the dcbi instruction (cache invalidate) is 
> forbidden, and suggests using dcbf (cache flush) instead.
>
> This raises the following possible problem.
> Consider a network driver which is going to receive some buffer at address 
> X. For this, when the DMA reports that data has arrived, the driver should 
> execute a FLUSH instruction as they recommend.
> But flush, before invalidating, will check whether the corresponding cache 
> line was modified. And if so, it will write the cache contents back to memory!
>
> This means that if one 32-byte (CACHE_ALIGNMENT) cache line contains data 
> from two buffers, the arrived data will be spoiled.
>
> I do not understand how to solve this, except by guaranteeing that each 
> buffer returned by MGETHDR(...) and MCLGET(...) is cache-aligned.
>
> And how to do this, in turn?
>
> Many thanks in advance for any help.
>   



