Waiting for mbufs

gregory.menke at gsfc.nasa.gov
Wed Mar 12 22:50:29 UTC 2003


Chris Johns writes:
 > gregory.menke at gsfc.nasa.gov wrote:
 > 
 > > Is there a task-id sensitivity built into the stack?  When I had 2
 > > daemons per unit, the mbuf problem didn't occur.  I'm kind of
 > > wondering if it could be some kind of effect related to
 > > event_receive/semaphore timing.  Calling
 > > rtems_bsdnet_show_mbuf_stats() shows 400+ mbufs free.  Any hints are
 > > appreciated.
 > 
 > How many clusters do you have and how many are pending in the drivers (I am 
 > assuming the device supports scatter/gather operation)?
 > 
 > The mbuf stat command lists the clusters as well as the number free. Clusters 
 > are different to mbufs.

This is a dribble from just after the mbuf messages start.  The only
difference between these counts and the counts when the system is
working OK is free = 447, header = 0 (the waits/drains are lower by
some number of packets, but still equal to each other).


************ MBUF STATISTICS ************
mbufs: 512    clusters:  64    free:   0
drops:   0       waits:  75  drains:  75
      free:443           data:67          header:2           socket:0
       pcb:0           rtable:0           htable:0           atable:0
    soname:0           soopts:0           ftable:0           rights:0
    ifaddr:0          control:0          oobdata:0


Press enter for counts
Still waiting for mbuf cluster.
Still waiting for mbuf cluster.
Still waiting for mbuf cluster.
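
(For anyone trying to reproduce this: the dump above is just what
rtems_bsdnet_show_mbuf_stats() prints.  A rough sketch of a task that
dribbles the counters out periodically - task name, priority and the
100-tick delay are all arbitrary, not what I actually run:)

#include <rtems.h>
#include <rtems/rtems_bsdnet.h>

/* Sketch only: dump the stack's mbuf/cluster counters about once a
   second.  Create and start it with rtems_task_create() /
   rtems_task_start() at a low priority; the 100-tick delay assumes a
   10 ms clock tick. */
static rtems_task mbuf_stats_task(rtems_task_argument unused)
{
  (void) unused;
  for (;;) {
    rtems_bsdnet_show_mbuf_stats();
    rtems_task_wake_after(100);
  }
}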



 > If you run short or are running low, the two task design will allow a driver to 
 > block, and let the other run. The other running receive task can receive data, 
 > pass to the stack some clusters which are freed, releasing the blocked receive 
 > task. In a single task design the second card cannot service its receive chain 
 > so you block.

Sure, but I think I have plenty free here, and I'm only pinging one of
the interfaces once a second or so.  The same load (greater, for that
matter: simultaneous pings on both interfaces) works fine if each
unit has its own receive task.
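
(By "its own receive task" I mean the usual RTEMS driver arrangement,
roughly sketched below; the event number, names and softc fields are
illustrative, not the actual driver:)

#include <rtems/rtems_bsdnet.h>
#include <rtems/rtems_bsdnet_internal.h>   /* rtems_bsdnet_newproc() */

#define RX_EVENT RTEMS_EVENT_1             /* whatever event the ISR sends */

struct unit_softc {                        /* placeholder for the real softc */
  int unit;
  /* descriptor ring, register pointers, ... */
};

/* One of these per unit.  rtems_bsdnet_event_receive() gives up the
   network semaphore while it is blocked, so a daemon stuck waiting for
   clusters on one unit does not stop the other unit's daemon. */
static void unit_rx_daemon(void *arg)
{
  struct unit_softc *sc = arg;
  rtems_event_set events;

  for (;;) {
    rtems_bsdnet_event_receive(RX_EVENT,
                               RTEMS_WAIT | RTEMS_EVENT_ANY,
                               RTEMS_NO_TIMEOUT, &events);
    /* drain this unit's receive ring, hand packets up with ether_input() */
    (void) sc;
  }
}

/* per-unit attach: rtems_bsdnet_newproc("rxd0", 4096, unit_rx_daemon, sc); */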

 > 
 > You could use a NO_WAIT design with scatter/gather hardware where low cluster 
 > counts result in less receive buffers queued in hardware. If you run out the MAC 
 > should report missed receives but what is received is passed to the stack in a 
 > timely manner.

The driver can deliver a maximum of 32 packets at once, given its
design; I'm nowhere near that.
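
(For reference, the NO_WAIT refill Chris describes would look roughly
like this in the receive-ring replenish path; struct rxd and
enqueue_rx_buffer() stand in for whatever the real driver uses:)

#include <sys/param.h>
#include <sys/mbuf.h>

struct rxd;                                             /* hypothetical descriptor */
void enqueue_rx_buffer(struct rxd *, struct mbuf *);    /* hypothetical helper */

/* Try to hang a fresh cluster on each empty receive slot, but never
   block for one.  If allocation fails, simply queue fewer buffers and
   let the MAC report missed frames until clusters come back. */
static void refill_rx_ring(struct rxd *slots[], int nslots)
{
  int i;

  for (i = 0; i < nslots; i++) {
    struct mbuf *m;

    MGETHDR(m, M_DONTWAIT, MT_DATA);
    if (m == NULL)
      break;                       /* out of mbufs: stop, don't wait */
    MCLGET(m, M_DONTWAIT);
    if ((m->m_flags & M_EXT) == 0) {
      m_freem(m);                  /* no cluster available right now */
      break;
    }
    enqueue_rx_buffer(slots[i], m);
  }
}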

Gregm



