Xilinx IP core drivers for RTEMS - Keith Robertson still around?

Keith Robertson kjrobert at alumni.uwaterloo.ca
Wed May 16 15:21:09 UTC 2007


junercb wrote:
> I used your latest opbint.c, and it works except for one small problem. 
> Thanks very much!
> void xilTemacStart(struct ifnet *ifp)
> {
> .......
>     /* Set the link speed */
>     emcfg |= XIL_TEMAC_ECFG_LINKSPD_100;
> .......
> }
> It doesn't work, so I changed XIL_TEMAC_ECFG_LINKSPD_100 to 
> XIL_TEMAC_ECFG_LINKSPD_1000; then it works.

I have a 100 Mbps phy.  If you have a 1 Gbps phy, then this would be correct.
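
For illustration, a minimal sketch of choosing the ECFG bits from the
configured speed instead of hardcoding one value.  Only the
XIL_TEMAC_ECFG_LINKSPD_100/_1000 constants come from the driver; the
'speed' variable and the _10 constant are assumptions:

/* Hypothetical sketch: select the link-speed bits from a configured
 * speed rather than hardcoding 100 Mbps.  'speed' is an assumed
 * variable; XIL_TEMAC_ECFG_LINKSPD_10 is an assumed constant. */
switch (speed) {
case 1000:
  emcfg |= XIL_TEMAC_ECFG_LINKSPD_1000;
  break;
case 100:
  emcfg |= XIL_TEMAC_ECFG_LINKSPD_100;
  break;
default:
  emcfg |= XIL_TEMAC_ECFG_LINKSPD_10;
  break;
}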

> But now I am sending UDP from the ML403 to my host PC (Marvell Yukon 
> 88E8053 PCI-E Gigabit Ethernet Controller), and "Windows Task Manager" 
> shows the network at only 0.58% of 1G. Why so low?
> What throughput do you see in your tests?

I suspect that Windows Task Manager is not the best way to determine 
your throughput.  There are some tftp tests in the RTEMS directories 
that might prove useful.  My own tests indicate that I can send large 
(9000 byte) UDP packets at around 80 Mbps (not including UDP/IP 
overhead), which was more than acceptable for our use case.  Given 
that we are using a 100 Mbps phy, this seemed to me to be pushing the 
practical limit of 100 Mbps.
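
For a measurement that doesn't depend on Task Manager, a rough sender
along these lines reports payload throughput directly.  It uses only
plain BSD sockets, which the RTEMS stack provides; the destination
address, port, and packet counts are placeholders, not from any test
harness of mine:

/* Rough UDP throughput probe: send N fixed-size datagrams and compute
 * payload Mbps from the elapsed wall-clock time. */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <time.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>

#define PKT_SIZE 9000   /* same payload size as the test above */
#define NPKTS    10000

static char payload[PKT_SIZE];

static void udpBlast(const char *destIp, unsigned short port)
{
  struct sockaddr_in to;
  int s = socket(AF_INET, SOCK_DGRAM, 0);
  int i;
  time_t start, stop;

  memset(&to, 0, sizeof(to));
  to.sin_family = AF_INET;
  to.sin_port = htons(port);
  to.sin_addr.s_addr = inet_addr(destIp);

  start = time(NULL);
  for (i = 0; i < NPKTS; i++)
    sendto(s, payload, PKT_SIZE, 0, (struct sockaddr *)&to, sizeof(to));
  stop = time(NULL);

  if (stop > start)
    printf("%.1f Mbps payload\n",
           (double)NPKTS * PKT_SIZE * 8.0 / ((stop - start) * 1.0e6));
  close(s);
}

Anything that sinks UDP on the PC side (a simple recvfrom() loop, for
instance) is enough to catch the packets.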

Also,
$ ping -s 65507 -f
shows that my board can sustain approximately 40 Mbps in and 40 Mbps out.

All this was with a 200 MHz PowerPC in interrupt-driven mode.  I suspect 
you'll get better utilisation using DMA and with the PPC set to a 
higher clock frequency.

There's one (known) area of improvement that could be made in the driver 
to make it _almost_ zero copy (for tx).  Currently it copies the data 
out of each mbuf into a 64-bit-aligned buffer for the 
xilTemacFifoWrite64 function.  With some open/close packet semantics, 
one would only have to copy the last few unaligned bytes into a small 
temporary, aligned buffer; the middle majority of each mbuf could be 
written straight to the fifo without the intermediate copy.  I plan to 
do this in the nearish future but haven't got around to it yet.
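
To make the idea concrete, here is a sketch of what that
open/write/close scheme might look like.  Only xilTemacFifoWrite64 is
from the driver, and its signature is assumed here; the carry/fill
bookkeeping is hypothetical, and it assumes the hardware (or the
driver) tolerates a short final word:

#include <stddef.h>
#include <stdint.h>

/* Assumed signature for the driver's FIFO write routine. */
extern void xilTemacFifoWrite64(const uint8_t *buf, size_t len);

struct fifoTx {
  uint8_t  carry[8];  /* staged bytes awaiting a full 64-bit word */
  unsigned fill;      /* number of valid bytes in carry (0..7) */
};

static void fifoTxOpen(struct fifoTx *tx)
{
  tx->fill = 0;
}

static void fifoTxWrite(struct fifoTx *tx, const uint8_t *p, size_t len)
{
  size_t whole;

  /* Stage bytes until the carry buffer is empty *and* the source is
   * 64-bit aligned; if the two offsets never line up, this degrades
   * to a byte copy, which is slower but still correct. */
  while (len > 0 && (tx->fill != 0 || ((uintptr_t)p & 7) != 0)) {
    tx->carry[tx->fill++] = *p++;
    len--;
    if (tx->fill == 8) {
      xilTemacFifoWrite64(tx->carry, 8);
      tx->fill = 0;
    }
  }

  /* The aligned middle majority goes straight to the fifo: no copy. */
  whole = len & ~(size_t)7;
  if (whole > 0) {
    xilTemacFifoWrite64(p, whole);
    p   += whole;
    len -= whole;
  }

  /* Keep the trailing 0..7 bytes for the next mbuf, or for close. */
  while (len > 0) {
    tx->carry[tx->fill++] = *p++;
    len--;
  }
}

static void fifoTxClose(struct fifoTx *tx)
{
  if (tx->fill > 0) {
    xilTemacFifoWrite64(tx->carry, tx->fill);
    tx->fill = 0;
  }
}

The tx routine would then call fifoTxOpen() once per frame,
fifoTxWrite() once per mbuf in the chain, and fifoTxClose() before
programming the transmit length.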

Cheers.

Keith


