<div dir="auto"><div><br><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Thu, Jun 2, 2022, 9:58 AM Christian MAUDERER <<a href="mailto:christian.mauderer@embedded-brains.de">christian.mauderer@embedded-brains.de</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">Am 02.06.22 um 16:19 schrieb Joel Sherrill:<br>
> <br>
> <br>
> On Thu, Jun 2, 2022 at 8:58 AM Christian MAUDERER <br>
> <<a href="mailto:christian.mauderer@embedded-brains.de" target="_blank" rel="noreferrer">christian.mauderer@embedded-brains.de</a> <br>
> <mailto:<a href="mailto:christian.mauderer@embedded-brains.de" target="_blank" rel="noreferrer">christian.mauderer@embedded-brains.de</a>>> wrote:<br>
> <br>
> On 02.06.22 at 15:49, Gedare Bloom wrote:<br>
> > On Thu, Jun 2, 2022 at 2:28 AM Sebastian Huber<br>
> > <<a href="mailto:sebastian.huber@embedded-brains.de" target="_blank" rel="noreferrer">sebastian.huber@embedded-brains.de</a><br>
> <mailto:<a href="mailto:sebastian.huber@embedded-brains.de" target="_blank" rel="noreferrer">sebastian.huber@embedded-brains.de</a>>> wrote:<br>
> >><br>
> >> On 02/06/2022 09:27, Christian MAUDERER wrote:<br>
> >>><br>
> >>> On 01.06.22 at 14:46, Gedare Bloom wrote:<br>
> >>>> On Mon, May 23, 2022 at 6:21 AM Christian Mauderer<br>
> >>>> <<a href="mailto:christian.mauderer@embedded-brains.de" target="_blank" rel="noreferrer">christian.mauderer@embedded-brains.de</a><br>
> <mailto:<a href="mailto:christian.mauderer@embedded-brains.de" target="_blank" rel="noreferrer">christian.mauderer@embedded-brains.de</a>>> wrote:<br>
> >>>>><br>
> >>>>> Typical embedded systems don't have that much memory. Reduce the buffer<br>
> >>>>> size to something more sensible for the usual type of application.<br>
> >>>>> ---<br>
> >>>>> freebsd/sys/dev/ffec/if_ffec.c | 8 ++++++++<br>
> >>>>> 1 file changed, 8 insertions(+)<br>
> >>>>><br>
> >>>>> diff --git a/freebsd/sys/dev/ffec/if_ffec.c b/freebsd/sys/dev/ffec/if_ffec.c<br>
> >>>>> index 47c0f770..4c1e147b 100644<br>
> >>>>> --- a/freebsd/sys/dev/ffec/if_ffec.c<br>
> >>>>> +++ b/freebsd/sys/dev/ffec/if_ffec.c<br>
> >>>>> @@ -139,9 +139,17 @@ static struct ofw_compat_data compat_data[] = {<br>
> >>>>> /*<br>
> >>>>> * Driver data and defines. The descriptor counts must be a power of two.<br>
> >>>>> */<br>
> >>>>> +#ifndef __rtems__<br>
> >>>>> #define RX_DESC_COUNT 512<br>
> >>>>> +#else /* __rtems__ */<br>
> >>>>> +#define RX_DESC_COUNT 64<br>
> >>>>> +#endif /* __rtems__ */<br>
> >>>><br>
> >>>> Do we need some way to control this parameter? Or, how will this<br>
> >>>> appear if it breaks something?<br>
> >>><br>
> >>> I don't expect that there will be any problems. But I can take a look<br>
> >>> at how I can make that a parameter.<br>
> >><br>
> >> Can we please keep this a compile-time constant as it is? The 64<br>
> >> descriptors should be more than enough.<br>
> >><br>
> > I don't mind the reduction of the constant, but it would be good to<br>
> > predict what behavior might indicate this was exceeded. I guess it<br>
> > should be some kind of errno on an allocation request though? So it<br>
> > should be fine, but if a user hits this limit, I guess they have<br>
> > pretty limited options to overcome it.<br>
> <br>
> Reducing the limit won't cause errors. It only means that if you<br>
> flood the target with network packets, it will cache fewer packets and<br>
> start dropping them earlier. That means:<br>
> <br>
> On a short packet burst, some packets will be dropped and (for TCP) some<br>
> have to be re-transmitted. So for short bursts it can be a slight<br>
> disadvantage.<br>
> <br>
> In a constant overload situation, it doesn't really make a difference,<br>
> because the target wouldn't be able to process the packets anyway. It<br>
> might even be an advantage, because the processor doesn't have to process<br>
> packets that are already outdated and maybe re-transmitted.<br>
> <br>
> <br>
> How much RAM does this save versus having control over the size of<br>
> UDP and TCP RX/TX buffers like we had in the legacy stack? I recall<br>
> being able to control the various buffer sizes saved a LOT of memory<br>
> on applications I used these parameters on.<br>
> <br>
> There we had four configuration values. Any chance there is an equivalent<br>
> knob in FreeBSD now, or that we can provide the same tuning?<br>
> <br>
> rtems_set_udp_buffer_sizes(<br>
> rtems_bsdnet_config.udp_tx_buf_size,<br>
> rtems_bsdnet_config.udp_rx_buf_size<br>
> );<br>
> <br>
> rtems_set_tcp_buffer_sizes(<br>
> rtems_bsdnet_config.tcp_tx_buf_size,<br>
> rtems_bsdnet_config.tcp_rx_buf_size<br>
> );<br>
> <br>
<br>
Are you sure that this is the same buffer? The parameter in this patch <br>
is a driver-specific ring buffer of RX descriptors. The parameter that <br>
you mention sounds more like a general network stack buffer (although I <br>
have to say that I don't know these functions of the old stack).<br></blockquote></div></div><div dir="auto"><br></div><div dir="auto">I know they aren't the same buffers. It just seems likely that this also has an impact on fitting into a lower-RAM environment, and it would be general rather than specific to one driver.</div>
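<div dir="auto"><br></div><div dir="auto">For what it's worth, I would expect the stock FreeBSD knobs to give us roughly the same tuning from the application. Completely untested sketch below; the sysctl names and the u_long value type are the standard FreeBSD ones, the sizes are just example numbers, and I haven't checked what libbsd actually exposes:</div><div dir="auto"><br></div><div dir="auto">#include <sys/types.h><br>
#include <sys/sysctl.h><br>
#include <sys/socket.h><br>
<br>
/* Untested sketch: shrink the default socket buffer sizes through the<br>
 * standard FreeBSD sysctls. The names and the u_long type are from stock<br>
 * FreeBSD and should be double checked against libbsd. */<br>
static void shrink_default_socket_buffers(void)<br>
{<br>
  u_long tcp_space = 16 * 1024; /* example value only */<br>
  u_long udp_space = 8 * 1024; /* example value only */<br>
<br>
  (void)sysctlbyname("net.inet.tcp.sendspace", NULL, NULL, &tcp_space, sizeof(tcp_space));<br>
  (void)sysctlbyname("net.inet.tcp.recvspace", NULL, NULL, &tcp_space, sizeof(tcp_space));<br>
  (void)sysctlbyname("net.inet.udp.recvspace", NULL, NULL, &udp_space, sizeof(udp_space));<br>
}<br>
<br>
/* Per-socket alternative on an already created socket descriptor. */<br>
static void shrink_socket_buffers(int fd)<br>
{<br>
  int size = 16 * 1024; /* example value only */<br>
<br>
  (void)setsockopt(fd, SOL_SOCKET, SO_SNDBUF, &size, sizeof(size));<br>
  (void)setsockopt(fd, SOL_SOCKET, SO_RCVBUF, &size, sizeof(size));<br>
}</div><div dir="auto"><br></div><div dir="auto">If something like that works with the new stack, it could stand in for the four legacy configuration values.</div><div dir="auto"><br></div><div dir="auto"><div class="gmail_quote"><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">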
<br>
Regarding the sizes:<br>
<br>
The driver allocates one mbuf for each buffer. It's a bit tricky to tell <br>
exactly how big one mbuf is. FreeBSD does a lot of abstraction there. <br>
But a debugger tells me that after the initialization one buffer is:<br>
<br>
sc->rxbuf_map[0].mbuf->m_len = 2048<br>
<br>
That means that I reduced the buffers that are cached in the driver for <br>
receiving data from 512 * 2 KiB = 1 MiB to 64 * 2 KiB = 128 KiB. Note that <br>
our default size for the whole mbuf pool of the stack is 8 MiB <br>
(RTEMS_BSD_ALLOCATOR_DOMAIN_PAGE_MBUF_DEFAULT). So 1 MiB is a relevant <br>
part of that. And that's only for one direction!<br>
<br>
The Tx buffers only have some management information allocated. They <br>
will get buffers as soon as there is something to send. But if the <br>
device can't send fast enough to get rid of the data, it will most <br>
likely be a similar amount of memory.<br></blockquote></div></div><div dir="auto"><br></div><div dir="auto">Yep. Just wondering if more was needed.</div>
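<div dir="auto"><br></div><div dir="auto">If it ever turns out that more is needed, I think the easiest check is the mbuf statistics from netstat -m. Rough sketch from memory; I believe libbsd wraps the command as rtems_bsd_command_netstat(), but that wrapper name is an assumption and should be double checked:</div><div dir="auto"><br></div><div dir="auto">#include <machine/rtems-bsd-commands.h><br>
<br>
/* Print the mbuf allocator statistics ("netstat -m") to see how much of<br>
 * the mbuf pool the drivers actually use. The wrapper name is from memory<br>
 * and should be verified against libbsd. */<br>
static void print_mbuf_usage(void)<br>
{<br>
  char *argv[] = { "netstat", "-m", NULL };<br>
<br>
  (void)rtems_bsd_command_netstat(2, argv);<br>
}</div><div dir="auto"><br></div><div dir="auto"><br></div><div dir="auto"><div class="gmail_quote"><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">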
<br>
Again: That's only the buffers in the driver. Not any buffers on higher <br>
layers.<br>
<br>
Best regards<br>
<br>
Christian<br>
<br>
> --joel<br>
> <br>
> <br>
> Best regards<br>
> <br>
> Christian<br>
<br>
<br>
-- <br>
--------------------------------------------<br>
embedded brains GmbH<br>
Mr. Christian MAUDERER<br>
Dornierstr. 4<br>
82178 Puchheim<br>
Germany<br>
email: <a href="mailto:christian.mauderer@embedded-brains.de" target="_blank" rel="noreferrer">christian.mauderer@embedded-brains.de</a><br>
phone: +49-89-18 94 741 - 18<br>
fax: +49-89-18 94 741 - 08<br>
<br>
Register court: Amtsgericht München<br>
Register number: HRB 157899<br>
Managing directors with power of representation: Peter Rasmussen, Thomas Dörfler<br>
You can find our privacy policy here:<br>
<a href="https://embedded-brains.de/datenschutzerklaerung/" rel="noreferrer noreferrer" target="_blank">https://embedded-brains.de/datenschutzerklaerung/</a><br>
</blockquote></div></div></div>