Really need some help with RTEMS networking semaphores
gregory.menke at gsfc.nasa.gov
Wed Oct 18 19:41:43 UTC 2006
Eric Norum writes:
> On Oct 18, 2006, at 12:45 PM, gregory.menke at gsfc.nasa.gov wrote:
>
> >>>
> >>> Under what conditions does the stack deadlock and what can drivers
> >>> do to
> >>> help prevent it from doing so?
> >>
> >> Running out of mbufs is never a good thing. In the UDP send case you
> >> might reduce the maximum length of the socket queue.
> >
> > Does that mean a too-long udp send queue can starve for mbufs &
> > deadlock
> > the stack?
>
> I suspect that this could happen, yes.
I had better results by making the cluster space = 6 * mbuf space and
reducing the mbuf space (now 64k) so that all the allocations succeeded.
That approach seems to have stabilized the stack.  Changing SO_SNDBUF
had no effect.
Could it be that I had too many mbufs and not enough cluster space so
that cluster allocations failed, blowing up the stack even though plenty
of mbufs were available?
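For reference, a rough sketch of that sizing in the network
configuration table, using the field order from <rtems/rtems_bsdnet.h>;
netdriver_config here is just a placeholder for the driver's ifconfig
entry, and the remaining fields are left at their defaults:

  #include <rtems/rtems_bsdnet.h>

  extern struct rtems_bsdnet_ifconfig netdriver_config;  /* placeholder */

  struct rtems_bsdnet_config rtems_bsdnet_config = {
    &netdriver_config,  /* link to interface driver(s) */
    NULL,               /* no BOOTP */
    0,                  /* default network task priority */
    64 * 1024,          /* MBUF space */
    6 * 64 * 1024,      /* MBUF cluster space = 6x the mbuf space */
    /* remaining fields (hostname, gateway, ...) default to 0/NULL */
  };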
I'm now moving into bandwidth tests.  Interestingly, sendto() is
returning -1 with errno == ENOENT, but the packet was sent properly.  So
far I haven't found what is generating ENOENT in the sendto() call tree.
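Something like the following wrapper might help narrow that down (just
a sketch, not code from the application; sock, buf, len and dst stand in
for the real variables).  Clearing errno before the call rules out a
stale value left over from an earlier failure:

  #include <errno.h>
  #include <stdio.h>
  #include <string.h>
  #include <sys/types.h>
  #include <sys/socket.h>
  #include <netinet/in.h>

  static ssize_t send_with_diag(int sock, const void *buf, size_t len,
                                struct sockaddr_in *dst)
  {
    ssize_t n;

    errno = 0;   /* so a stale errno isn't mistaken for this call's */
    n = sendto(sock, buf, len, 0, (struct sockaddr *)dst, sizeof *dst);
    if (n < 0)
      printf("sendto: returned %ld, errno %d (%s)\n",
             (long)n, errno, strerror(errno));
    return n;
  }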
> >>>
> >>> What should their relative sizings be?
> >>
> >> Depends on your application. Which type are you running out of?
> >> For my EPICS applications here I've got:
> >> 180*1024, /* MBUF space */
> >> 350*1024, /* MBUF cluster space */
> >
> >
> > How do I tell which I'm running out of?
>
> rtems_bsdnet_show_mbuf_stats ();
On a number of occasions I've seen the stack deadlock with all but 1 or
2 mbufs & clusters free. Perhaps there is a race condition that locks
the stack while the driver continues to free mbufs, leading to the mbuf
stats showing many free buffers.
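One way to check that is to dump the mbuf stats periodically from a
separate low-priority task, so the counters are visible while the stack
is wedged rather than only afterwards.  A sketch (task name and priority
are arbitrary, error checking omitted, and the wake interval assumes a
10 ms clock tick):

  #include <rtems.h>
  #include <rtems/rtems_bsdnet.h>

  static rtems_task mbuf_monitor(rtems_task_argument unused)
  {
    for (;;) {
      rtems_bsdnet_show_mbuf_stats();
      rtems_task_wake_after(500);   /* ~5 s with a 10 ms tick */
    }
  }

  static void start_mbuf_monitor(void)
  {
    rtems_id tid;

    rtems_task_create(rtems_build_name('M','B','U','F'), 200,
                      RTEMS_MINIMUM_STACK_SIZE, RTEMS_DEFAULT_MODES,
                      RTEMS_DEFAULT_ATTRIBUTES, &tid);
    rtems_task_start(tid, mbuf_monitor, 0);
  }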
> >
> > I've tried everything from 64k & 128k up to 256k & 256k, some sort of
> > problems in all cases. Could you give examples of how mbuf buffer
> > sizings relates to types of application?
>
> The only example I can give is what seems to be working here.
Does this mean there are no rules or even suggestions for sizing the
mbufs & clusters, other than what seems to work and what fits into
available memory?
> >
>
> You're sure that you don't have half/full duplex problems?
Yes, full duplex all around.
Thanks,
Greg