lib-bsd socket close issues

Matthew J Fletcher amimjf at gmail.com
Thu Apr 4 17:53:09 UTC 2019


Hi Sebastian

I used rtems_task_wake_after().
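
For reference, a minimal sketch of that wait (assuming the classic RTEMS
API; the 100 ms interval is the one from the loop quoted below):

  #include <rtems.h>

  /* Block the calling task for 100 ms. This is not a busy wait; other
     tasks run while the timeout is pending. */
  rtems_task_wake_after(RTEMS_MILLISECONDS_TO_TICKS(100));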


On Thu, 4 Apr 2019, 18:22 Sebastian Huber, <
sebastian.huber at embedded-brains.de> wrote:

> How do you wait? Is this a busy wait?
>
> ----- Matthew J Fletcher <amimjf at gmail.com> wrote:
> > Replying to myself.
> >
> > With a 1 second pause between socket() and close(), and 512 sockets
> > configured, it will still fail with ENOBUFS. Without calculating it
> > exactly, that is roughly 8.5 minutes (512 x 1 s) since the first socket
> > was allocated, which must be enough time to start freeing the socket
> > buffers internally.
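> >
> > A sketch of the modified loop (a reconstruction: the 1 second pause is
> > from this thread, the socket parameters are assumed):
> >
> > for (;;) {
> >   int sd = socket(AF_INET, SOCK_STREAM, 0); /* allocate */
> >   if (sd < 0)
> >     break; /* eventually fails with errno == ENOBUFS */
> >   rtems_task_wake_after(RTEMS_MILLISECONDS_TO_TICKS(1000)); /* 1 s pause */
> >   close(sd); /* free */
> > }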
> >
> >
> > On Thu, 4 Apr 2019 at 16:47, Matthew J Fletcher <amimjf at gmail.com>
> > wrote:
> >
> > > Hi,
> > >
> > > I have noticed an issue with lib-bsd that the legacy stack does
> > > not have.
> > >
> > > If I have a loop that does
> > >
> > > for (;;)
> > > {
> > >   rtems_task_wake_after(RTEMS_MILLISECONDS_TO_TICKS(100)); /* 100 ms */
> > >   int sd = socket(AF_INET, SOCK_STREAM, 0); /* allocate */
> > >   if (sd >= 0)
> > >     close(sd); /* free */
> > > }
> > >
> > > then I can see the socket descriptor numbers counting upwards, but
> > > eventually I get ENOBUFS from socket(). Allocating more sockets just
> > > delays the onset of the problem.
> > >
> > > It seems like this is some lazy freeing, or a mechanism designed for
> > > heavily loaded systems to make close() faster, but on an embedded
> > > system it is malfunctioning.
> > >
> > > Is there some lib-bsd function that can force a 'flush' to prevent
> > > this?
> > >
> > > --
> > >
> > > regards
> > > ---
> > > Matthew J Fletcher
> > >
> > >
> >
> > --
> >
> > regards
> > ---
> > Matthew J Fletcher
>
> --
> Sebastian Huber, embedded brains GmbH
>
> Address : Dornierstr. 4, D-82178 Puchheim, Germany
> Phone   : +49 89 189 47 41-16
> Fax     : +49 89 189 47 41-09
> E-Mail  : sebastian.huber at embedded-brains.de
> PGP     : Public key available on request.
>
> This message is not a business communication within the meaning of the EHUG.
>