Receive task crash on Etherlink driver (elnk.c)
SAeeD
salpha.2004 at gmail.com
Mon Jun 3 16:57:05 UTC 2013
Hello,
I guess it would be better to come up with some specific questions:
1. In the function elnk_init, there is a line which I think sets the
rx ring's address in the network controller's registers:
CSR_WRITE_4(sc, XL_UPLIST_PTR, phys_to_bus( sc->curr_rx_md ));
OK, upon receipt of each packet, sc->curr_rx_md is updated, but there
is no mention of writing the new address to XL_UPLIST_PTR again. I
assume it is updated automatically, since curr_rx_md is a pointer. Is
my assumption right?
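To make my mental model concrete, here is how I picture the upload
(receive) descriptor ring; the field names are my guesses from the
3C90xC datasheet, not taken from elnk.c:

    #include <stdint.h>

    /* Sketch of a 3C90x-style upload descriptor; names are mine. */
    struct up_desc {
        uint32_t up_next;       /* physical address of next descriptor */
        uint32_t up_pktstatus;  /* NIC sets a "complete" bit in here   */
        uint32_t up_fragaddr;   /* physical address of the rx buffer   */
        uint32_t up_fraglen;    /* buffer length (+ last-fragment bit) */
    };

If the ring is circular (the last descriptor's up_next pointing back
to the first), then XL_UPLIST_PTR would only need to be written once:
the NIC follows up_next on its own, and sc->curr_rx_md would merely be
the host's cursor into the same ring. Is that the mechanism?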
2. What happens when the received packet rate is so high that the
rxDaemon cannot update curr_rx_md before a new packet arrives?
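My rough picture of the consumer side is the sketch below (the helper
names are mine, not elnk.c's); question 2 is really asking what
happens when the inner loop cannot keep up with the card:

    #include <stdint.h>

    struct up_desc {
        uint32_t up_next, up_pktstatus, up_fragaddr, up_fraglen;
    };

    #define UP_CMPLT 0x00008000u /* hypothetical "upload complete" bit */

    /* Hypothetical helpers standing in for the real driver plumbing. */
    void            wait_for_rx_event(void);   /* woken by the ISR     */
    void            hand_mbuf_to_stack(struct up_desc *d);
    void            attach_fresh_mbuf(struct up_desc *d);
    struct up_desc *bus_to_virt(uint32_t phys);

    void rx_daemon_body(struct up_desc *cur)
    {
        for (;;) {
            wait_for_rx_event();
            while (cur->up_pktstatus & UP_CMPLT) { /* NIC finished one */
                hand_mbuf_to_stack(cur);           /* pass frame up    */
                attach_fresh_mbuf(cur);            /* re-arm the slot  */
                cur->up_pktstatus = 0;             /* return it to NIC */
                cur = bus_to_virt(cur->up_next);   /* follow the ring  */
            }
        }
    }

My guess is that once every descriptor is still marked complete when
the card needs a fresh one, it has nowhere left to DMA into, stalls
the upload engine, and counts rx_overrun until the daemon catches up,
which would match the rx_overrun numbers I reported below.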
3. As far as I know, elnk uses DMA to place each received packet in
memory. It places the packet in an mbuf, but what happens when a
packet is larger than a single mbuf? How does the DMA engine break the
packet up to fit an mbuf chain rather than a single mbuf? And how is
the next-mbuf address pointer set?
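Reading other BSD-style drivers, I would expect each receive slot to
get one cluster mbuf big enough for a full Ethernet frame, so the DMA
engine never has to split a packet across mbufs; something like this
sketch (stock RTEMS/BSD mbuf macros, but the allocation policy is my
assumption):

    #include <sys/param.h>
    #include <sys/mbuf.h>

    /* Allocate one cluster mbuf (MCLBYTES = 2048) for a receive slot;
     * a whole maximum-size Ethernet frame fits in the cluster. */
    struct mbuf *alloc_rx_cluster(void)
    {
        struct mbuf *m;

        MGETHDR(m, M_WAIT, MT_DATA);       /* packet-header mbuf      */
        if (m == NULL)
            return NULL;
        MCLGET(m, M_WAIT);                 /* attach a 2 KiB cluster  */
        if ((m->m_flags & M_EXT) == 0) {   /* cluster pool exhausted  */
            m_freem(m);
            return NULL;
        }
        m->m_len = m->m_pkthdr.len = MCLBYTES;
        return m;    /* mtod(m, void *) is the DMA target address */
    }

Is that what elnk does, or does it really build an mbuf chain in
hardware?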
4. Where can I find more details about the implementation of the DMA
controller driver in RTEMS for the pc386 BSP?
Even a few rays of light would enlighten me very much! :)
Thanks,
SAeeD
On 5/28/13, SAeeD <salpha.2004 at gmail.com> wrote:
> Hello,
>
> I'm using the elnk driver for a 3Com 3C905C-TX device on the PCI bus,
> with rtems-4.10.2.
> In my configuration, a Linux-based UDP client application sends UDP
> packets at a rate of 8600 packets per second (PPS). Each packet
> carries a number as payload, starting from 1.
> On the server side, an RTEMS-based UDP server is receiving packets.
> The problem is that after receiving about 500 packets (out of 86000
> total), the RTEMS UDP server stops receiving further packets. I'm
> using the default rbuf_count, xbuf_count, mbuf_bytes, and
> mbuf_cluster_bytes.
> I should mention several facts I found through my experiments:
>
> 1. With lower send rates (like 4200 PPS or lower), the UDP server
> works smoothly.
>
> 2. At the rate of 8600 PPS, the elnk_rxdaemon crashes after receiving
> about 500 packets. After that it cannot receive further packets, but
> it can still send packets, which means both elnk_interrupt_handler
> and elnk_txdaemon keep working properly after the crash.
>
> 3. When I increased rbuf_count, the application crashed only after
> receiving more than 500 packets, so the crash threshold improved
> (see the configuration sketch after this list).
>
> 4. When I printed the network card status, I found that after sending
> 86000 packets at 8600 PPS, there were about 60000 rx_overrun events.
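> For reference, the change in fact 3 was just raising rbuf_count in
> my interface configuration, roughly like this (the attach-hook name,
> unit name, and addresses are from my setup and may differ in yours):
>
>     static struct rtems_bsdnet_ifconfig elnk_config = {
>         .name       = "elnk1",                  /* interface unit   */
>         .attach     = rtems_elnk_driver_attach, /* elnk attach hook */
>         .ip_address = "10.0.0.2",
>         .ip_netmask = "255.255.255.0",
>         .rbuf_count = 256,  /* raised from the default              */
>         .xbuf_count = 0,    /* 0 keeps the stack's default          */
>     };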
>
> Now, does anybody have similar experience with this driver? When I
> searched the mailing list, I found similar problems with elnk, but
> with transmit, not receive.
> If anyone has seen this, what is the way to get out of this
> condition?
>
> I also have one more question: what, in your opinion, is the reason
> for the crash? Something like a buffer overflow, maybe?
>
> Thanks for your attention,
> SAeeD
>