RTEMS Network Stack and Managed Switch

Richard.Glossop at L3Harris.com
Thu Oct 8 16:44:04 UTC 2020


Joel, this is our config:

#define CONFIGURE_EXTRA_TASK_STACKS              (80 * RTEMS_MINIMUM_STACK_SIZE)

/* Note: The following configuration is called out as CONFIGURED_STACK_CHECKER_ENABLED in the users manual,
* but that is a typo. The symbol used in the configuration utility is CONFIGURE_STACK_CHECKER_ENABLED */

#define CONFIGURE_STACK_CHECKER_ENABLED

#define CONFIGURE_INIT_TASK_STACK_SIZE (64*1024)

So stack checker is on.
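
For reference, with CONFIGURE_STACK_CHECKER_ENABLED defined you can also dump per-task stack usage at runtime. A minimal sketch (rtems_stack_checker_report_usage() is the standard stack checker API from <rtems/stackchk.h>; calling it from a debug hook like this is only illustrative):

#include <rtems.h>
#include <rtems/stackchk.h>

/* Print the stack high-water mark of every task to the console,
 * e.g. to see whether the network daemon is close to overflowing. */
static void debug_report_stacks(void)
{
  rtems_stack_checker_report_usage();
}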

Not aware of any IPv6 unless the switch itself produces it.  We do not use it.

From: Joel Sherrill <joel at rtems.org>
Sent: Thursday, October 8, 2020 12:07 PM
To: Glossop, Richard (US) @ ISR - SSS - SSTC <Richard.Glossop at L3Harris.com>; rtems-devel at rtems.org <devel at rtems.org>
Subject: [EXTERNAL] Re: Re: RTEMS Network Stack and Managed Switch

You didn't reply to the entire list so I am adding it back.

You are using the legacy stack (cpukit/libnetworking), which is 20+ years old and IPv4 only.

There could be a packet whose format the older stack doesn't like; perhaps some value pushes something out of range.

Any possibility there is IPv6 on the network? I don't know how this stack reacts to that.

Another simple thing to try is increasing the size of the stacks and turning on the stack checker.
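
For anyone following the thread, a sketch of the configure options meant here; the macro names are the standard confdefs.h options, and the sizes below are only illustrative (the reply above shows the values actually in use):

/* Enable the stack checker and enlarge the stacks (illustrative sizes). */
#define CONFIGURE_STACK_CHECKER_ENABLED
#define CONFIGURE_INIT_TASK_STACK_SIZE   (64 * 1024)
#define CONFIGURE_EXTRA_TASK_STACKS      (80 * RTEMS_MINIMUM_STACK_SIZE)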

--joel

On Thu, Oct 8, 2020 at 10:39 AM <Richard.Glossop at l3harris.com> wrote:
Hi Joel,

We are using Gaisler's rc7 release for RTEMS 5.1.  I don't see libbsd in the map file at all, but I'm not really sure how to tell.

We do have an FPU, and our tasks are all FP enabled.  I'm not sure about the network stack's tasks.
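
For what it's worth, one way to tell from the code (rather than the map file) is which initialization entry point the application uses. A sketch, with the legacy-stack names taken from cpukit/libnetworking and the libbsd entry point stated as an assumption:

/* Legacy stack (cpukit/libnetworking): the application defines a
 * struct rtems_bsdnet_config and calls rtems_bsdnet_initialize_network().
 * If rtems_bsdnet_* symbols show up in the map file, it is the legacy stack. */
#include <rtems/rtems_bsdnet.h>

void start_networking(void)
{
  (void) rtems_bsdnet_initialize_network();
}

/* libbsd applications instead call rtems_bsd_initialize() (declared in
 * <rtems/bsd/bsd.h>) and pull in rtems_bsd_* symbols. */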

From: users <users-bounces at rtems.org> On Behalf Of Joel Sherrill
Sent: Thursday, October 8, 2020 11:29 AM
To: Thomas Doerfler <Thomas.Doerfler at imd-systems.de>
Cc: rtems-users at rtems.org <users at rtems.org>
Subject: [EXTERNAL] Re: RTEMS Network Stack and Managed Switch



On Thu, Oct 8, 2020 at 10:25 AM Thomas Doerfler <Thomas.Doerfler at imd-systems.de> wrote:
Richard,

what hardware/BSP are you running on?

Thomas beat me to this. :)

Also the RTEMS version and network stack (legacy vs libbsd).

Just a shot in the dark: it seems the system crashes in arplookup -> ...
-> svfprintf. Could it be that some networking routine tries to print a
message with floating point while the FPU is disabled for the
corresponding task?

Since I recall you are using a LEON, this is indeed a very likely culprit.
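
For reference, a task only gets a floating point context if it is created with the floating point attribute; a minimal sketch (the name, priority, and stack size are made up for illustration):

#include <rtems.h>

/* Create a task with an FP context so FP registers are saved/restored
 * for it; without RTEMS_FLOATING_POINT, FP use in the task (or in code
 * it calls) can trap or corrupt another task's FP state. */
rtems_status_code create_fp_task(rtems_id *id)
{
  return rtems_task_create(
    rtems_build_name('F', 'P', 'T', 'K'),   /* illustrative name */
    100,                                     /* illustrative priority */
    32 * 1024,                               /* illustrative stack size */
    RTEMS_DEFAULT_MODES,
    RTEMS_DEFAULT_ATTRIBUTES | RTEMS_FLOATING_POINT,
    id
  );
}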

The stack trace is a bit odd since nothing in libc would call _Internal_XXX.
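
The lower frames of the trace are consistent with how the address gets formatted: inet_ntop4() builds the dotted-quad string through the snprintf machinery, which is where _svfprintf_r comes from. A standalone sketch using the public API:

#include <stdio.h>
#include <arpa/inet.h>
#include <netinet/in.h>

/* inet_ntop4() in the stack essentially does
 * snprintf(buf, size, "%u.%u.%u.%u", ...), hence the backtrace
 * inet_ntoa_r -> inet_ntop -> snprintf -> _svfprintf_r. */
static void print_addr(struct in_addr ina)
{
  char buf[INET_ADDRSTRLEN];

  if (inet_ntop(AF_INET, &ina, buf, sizeof(buf)) != NULL)
    printf("ARP entry for %s\n", buf);
}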

--joel

wkr,

Thomas.

On 08.10.20 at 17:15, Richard.Glossop at L3Harris.com wrote:
> We have discovered a problem with the RTEMS network stack reaching the
> RTEMS _Terminate function when the interfaces are attached to a managed
> switch (in this case a Cisco 3560).
>
>
>
> This does not occur with direct connections or when attached to a layer
> 2 unmanaged switch (NetGear or SMC).  Of course the 3560 puts out a lot
> of traffic that a layer 2 switch does not (spanning tree, CDP, etc.).
>
>
>
> So it seems the managed switch is putting out traffic that is bringing
> RTEMS down.
>
>
>
> Has anyone seen this behavior?  Have you determined the root cause?
>
>
>
>
>
> I set a breakpoint and caught the following backtrace:
>
>   #0   0x6006ec90   0x60200180   <_Terminate+0x4>
>   #1   0x6006ece0   0x602001e8   <_Internal_error+0x8>
>   #2   0x600da250   0x60200248   <_svfprintf_r+0x14>
>   #3   0x600d5810   0x60200420   <snprintf+0x58>
>   #4   0x60073bac   0x60200508   <inet_ntop4+0x24>
>   #5   0x60073e70   0x60200580   <inet_ntop+0x280>
>   #6   0x60073b78   0x60200630   <inet_ntoa_r+0x1c>
>   #7   0x60076340   0x60200698   <arplookup+0x78>
>   #8   0x60076e34   0x60200710   <arpintr+0x20c>
>   #9   0x6008861c   0x602007c0   <networkDaemon+0xa0>
>   #10  0x60088164   0x60200828   <taskEntry+0x20>
>   #11  0x6006d210   0x60200888   <_Thread_Entry_adaptor_numeric+0x8>
>   #12  0x6006c464   0x602008e8   <_Thread_Handler+0x60>
>   #13  0x6006c404   0x60200948   <_Thread_Handler+0>

--
IMD Ingenieurbuero fuer Microcomputertechnik
Thomas Doerfler           Herbststrasse 8
D-82178 Puchheim          Germany
email:    Thomas.Doerfler at imd-systems.de
PGP public key available on request




