I'm ok with this.

Is there anything documentation-ish that needs updating? README? User Guide info on the BSP?

--joel

On Tue, Jul 18, 2023 at 10:37 AM Vijay Kumar Banerjee <vijay@rtems.org> wrote:
---
 bsp_drivers.py                                |    2 +<br>
 bsps/powerpc/beatnik/net/if_mve/mv643xx_eth.c | 3318 +++++++++++++++++<br>
 2 files changed, 3320 insertions(+)<br>
 create mode 100644 bsps/powerpc/beatnik/net/if_mve/mv643xx_eth.c<br>
<br>
diff --git a/bsp_drivers.py b/bsp_drivers.py<br>
index e2250aa..5628ff3 100644<br>
--- a/bsp_drivers.py<br>
+++ b/bsp_drivers.py<br>
@@ -79,6 +79,7 @@ include = {<br>
         'bsps/powerpc/beatnik/net',<br>
         'bsps/powerpc/beatnik/net/if_em',<br>
         'bsps/powerpc/beatnik/net/if_gfe',<br>
+        'bsps/powerpc/beatnik/net/if_mve',<br>
         'bsps/powerpc/beatnik/net/porting',<br>
     ],<br>
     'powerpc/mpc8260ads': [<br>
@@ -174,6 +175,7 @@ source = {<br>
         'bsps/powerpc/beatnik/net/if_em/if_em_rtems.c',<br>
         'bsps/powerpc/beatnik/net/if_gfe/if_gfe.c',<br>
         'bsps/powerpc/beatnik/net/if_gfe/if_gfe_rtems.c',<br>
+        'bsps/powerpc/beatnik/net/if_mve/mv643xx_eth.c',<br>
         'bsps/powerpc/beatnik/net/porting/if_xxx_rtems.c',<br>
         'bsps/powerpc/beatnik/net/support/bsp_attach.c',<br>
         'bsps/powerpc/beatnik/net/support/early_link_status.c',<br>
diff --git a/bsps/powerpc/beatnik/net/if_mve/mv643xx_eth.c b/bsps/powerpc/beatnik/net/if_mve/mv643xx_eth.c<br>
new file mode 100644<br>
index 0000000..85ab038<br>
--- /dev/null<br>
+++ b/bsps/powerpc/beatnik/net/if_mve/mv643xx_eth.c<br>
@@ -0,0 +1,3318 @@<br>
+/* RTEMS driver for the mv643xx gigabit ethernet chip */<br>
+<br>
+/* Acknowledgement:<br>
+ *<br>
+ * Valuable information for developing this driver was obtained<br>
+ * from the linux open-source driver mv643xx_eth.c which was written<br>
+ * by the following people and organizations:<br>
+ *<br>
+ * Matthew Dharm <mdharm@momenco.com>
+ * rabeeh@galileo.co.il
+ * PMC-Sierra, Inc., Manish Lachwani
+ * Ralf Baechle <ralf@linux-mips.org>
+ * MontaVista Software, Inc., Dale Farnsworth <dale@farnsworth.org>
+ * Steven J. Hill <sjhill1@rockwellcollins.com>/<sjhill@realitydiluted.com>
+ *<br>
+ * Note, however, that in spite of the identical name of this file
+ * (and some of the symbols used herein) this file provides a
+ * new implementation and is the original work of the author.
+ */<br>
+<br>
+/* <br>
+ * Authorship<br>
+ * ----------<br>
+ * This software (mv643xx ethernet driver for RTEMS) was<br>
+ *     created by Till Straumann <strauman@slac.stanford.edu>, 2005-2007,
+ *        Stanford Linear Accelerator Center, Stanford University.<br>
+ * <br>
+ * Acknowledgement of sponsorship<br>
+ * ------------------------------<br>
+ * The 'mv643xx ethernet driver for RTEMS' was produced by<br>
+ *     the Stanford Linear Accelerator Center, Stanford University,<br>
+ *        under Contract DE-AC03-76SFO0515 with the Department of Energy.<br>
+ * <br>
+ * Government disclaimer of liability<br>
+ * ----------------------------------<br>
+ * Neither the United States nor the United States Department of Energy,<br>
+ * nor any of their employees, makes any warranty, express or implied, or<br>
+ * assumes any legal liability or responsibility for the accuracy,<br>
+ * completeness, or usefulness of any data, apparatus, product, or process<br>
+ * disclosed, or represents that its use would not infringe privately owned<br>
+ * rights.<br>
+ * <br>
+ * Stanford disclaimer of liability<br>
+ * --------------------------------<br>
+ * Stanford University makes no representations or warranties, express or<br>
+ * implied, nor assumes any liability for the use of this software.<br>
+ * <br>
+ * Stanford disclaimer of copyright<br>
+ * --------------------------------<br>
+ * Stanford University, owner of the copyright, hereby disclaims its<br>
+ * copyright and all other rights in this software.  Hence, anyone may<br>
+ * freely use it for any purpose without restriction.  <br>
+ * <br>
+ * Maintenance of notices<br>
+ * ----------------------<br>
+ * In the interest of clarity regarding the origin and status of this<br>
+ * SLAC software, this and all the preceding Stanford University notices<br>
+ * are to remain affixed to any copy or derivative of this software made<br>
+ * or distributed by the recipient and are to be affixed to any copy of<br>
+ * software made or distributed by the recipient that contains a copy or<br>
+ * derivative of this software.<br>
+ * <br>
+ * ------------------ SLAC Software Notices, Set 4 OTT.002a, 2004 FEB 03<br>
+ */ <br>
+<br>
+/*<br>
+ * NOTE: Some registers (e.g., the SMI register) are SHARED among the
+ *       three devices. Concurrent access protection is provided by<br>
+ *       the global networking semaphore.<br>
+ *       If other drivers are running on a subset of IFs then proper<br>
+ *       locking of all shared registers must be implemented!<br>
+ *<br>
+ *       Some things I learned about this hardware can be found<br>
+ *       further down...<br>
+ */<br>
+<br>
+#ifndef KERNEL<br>
+#define KERNEL<br>
+#endif<br>
+#ifndef _KERNEL<br>
+#define _KERNEL<br>
+#endif<br>
+<br>
+#include <rtems.h><br>
+#include <rtems/bspIo.h><br>
+#include <rtems/error.h><br>
+#include <bsp.h><br>
+#include <bsp/irq.h><br>
+#include <bsp/gtreg.h><br>
+#include <libcpu/byteorder.h><br>
+<br>
+#include <sys/param.h><br>
+#include <sys/proc.h><br>
+#include <sys/socket.h><br>
+#include <sys/sockio.h><br>
+#include <dev/mii/mii.h><br>
+#include <net/if_var.h><br>
+#include <net/if_media.h><br>
+<br>
+/* Not so nice; it would be more elegant not to depend on the C library, but the
+ * RTEMS-specific ioctl for dumping statistics needs stdio anyway.
+ */
+<br>
+/* Defining NDEBUG effectively removes all assertions.
+ * If you define NDEBUG, MAKE SURE assert() EXPRESSIONS HAVE NO SIDE-EFFECTS!!
+ * This driver's assertions DO have side-effects, so DON'T DEFINE NDEBUG.
+ * Performance-critical assertions are removed by undefining MVETH_TESTING.
+ */
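+/* Illustrative sketch (not driver code; 'release_descriptor' is a hypothetical
+ * helper name): with NDEBUG defined, assert() expands to nothing, so any work
+ * done inside the asserted expression silently disappears:
+ *
+ *     assert( release_descriptor(d) == 0 );  // descriptor is never released
+ *                                            // at all if NDEBUG is defined
+ *
+ * The safe pattern keeps the side-effect outside the assertion:
+ *
+ *     int st = release_descriptor(d);
+ *     assert( st == 0 );
+ */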
+<br>
+#undef NDEBUG<br>
+#include <assert.h><br>
+#include <stdio.h><br>
+#include <errno.h><br>
+#include <inttypes.h><br>
+<br>
+#include <rtems/rtems_bsdnet.h><br>
+#include <sys/param.h><br>
+#include <sys/mbuf.h><br>
+#include <sys/socket.h><br>
+#include <sys/sockio.h><br>
+#include <net/ethernet.h><br>
+#include <net/if.h><br>
+#include <netinet/in.h><br>
+#include <netinet/if_ether.h><br>
+<br>
+#include <rtems/rtems_mii_ioctl.h><br>
+#include <bsp/early_enet_link_status.h><br>
+#include <bsp/if_mve_pub.h><br>
+<br>
+/* CONFIGURABLE PARAMETERS */<br>
+<br>
+/* Enable Hardware Snooping; if this is disabled (undefined),<br>
+ * cache coherency is maintained by software.<br>
+ */<br>
+#undef  ENABLE_HW_SNOOPING<br>
+<br>
+/* Compile-time debugging features */<br>
+<br>
+/* Enable paranoia assertions and checks; reduce # of descriptors to minimum for stressing   */<br>
+#undef  MVETH_TESTING<br>
+<br>
+/* Enable debugging messages and some support routines  (dump rings etc.)                    */      <br>
+#undef  MVETH_DEBUG<br>
+<br>
+/* Hack for driver development; rtems bsdnet doesn't implement detaching an interface :-(<br>
+ * but this hack allows us to unload/reload the driver module which makes development<br>
+ * a lot less painful.<br>
+ */<br>
+#undef MVETH_DETACH_HACK<br>
+<br>
+/* Ring sizes */<br>
+<br>
+#ifdef MVETH_TESTING<br>
+<br>
+/* hard and small defaults */<br>
+#undef  MV643XX_RX_RING_SIZE<br>
+#define MV643XX_RX_RING_SIZE   2<br>
+#undef  MV643XX_TX_RING_SIZE<br>
+#define MV643XX_TX_RING_SIZE   4<br>
+<br>
+#else /* MVETH_TESTING */<br>
+<br>
+/* Define default ring sizes, allow override from bsp.h, Makefile,... and from ifcfg->rbuf_count/xbuf_count */<br>
+<br>
+#ifndef MV643XX_RX_RING_SIZE<br>
+#define MV643XX_RX_RING_SIZE   40      /* Attached buffers are always 2k clusters, i.e., this
+                                         * driver - with a configured ring size of 40 - permanently
+                                         * locks 80k of cluster memory; your application configuration
+                                         * had better provide enough space!
+                                         */
+#endif<br>
+<br>
+#ifndef MV643XX_TX_RING_SIZE<br>
+/* NOTE: The TX ring size MUST be > the max. # of fragments / mbufs in a chain;
+ *       in 'TESTING' mode, special code is compiled in to repackage
+ *       chains that are longer than the ring size. Normally, this is
+ *       disabled for the sake of speed.
+ *       I observed chains of >17 entries regularly!
+ *
+ *       Also, TX_NUM_TAG_SLOTS (1) must be left empty as a marker, hence
+ *       the ring size must be > max. #frags + 1.
+ */
+#define MV643XX_TX_RING_SIZE   200     /* These are smaller fragments and are not occupied when
+                                         * the driver is idle.
+                                         */
+#endif<br>
+<br>
+#endif /* MVETH_TESTING */<br>
+<br>
+/* How many instances do we support (bsp.h could override this) */
+#ifndef MV643XXETH_NUM_DRIVER_SLOTS<br>
+#define MV643XXETH_NUM_DRIVER_SLOTS    2<br>
+#endif<br>
+<br>
+#define TX_NUM_TAG_SLOTS                       1 /* leave room for tag; must not be 0 */<br>
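+/* Illustrative sketch (assumption, not driver code): per the ring-size NOTE
+ * above, a chain of N mbuf fragments only fits into the TX ring as-is if the
+ * tag slot stays free, i.e., N <= ring size - TX_NUM_TAG_SLOTS; longer chains
+ * must be coalesced first. Expressed as a hypothetical helper:
+ *
+ *     static inline int
+ *     mve_chain_fits(int nfrags, int ring_size)
+ *     {
+ *         return nfrags <= ring_size - TX_NUM_TAG_SLOTS;
+ *     }
+ */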
+<br>
+/* This is REAL; the chip reads from a 64-bit down-aligned buffer address
+ * if the buffer size is < 8 !!! For buffer sizes of 8 and upwards,
+ * alignment is not an issue. This was verified using
+ * 'mve_smallbuf_test.c'.
+ */
+#define ENABLE_TX_WORKAROUND_8_BYTE_PROBLEM<br>
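+/* Illustrative sketch (assumption about what such a workaround can look like;
+ * the real code is further down): a fragment shorter than 8 bytes is copied
+ * into an 8-byte aligned scratch area (e.g., the TX descriptor's 'workaround'
+ * field declared below) so that the chip's down-aligned read cannot reach
+ * outside the intended data:
+ *
+ *     if ( len < 8 ) {
+ *         memcpy( (void*)d->workaround, data, len );
+ *         d->buf_ptr = CPUADDR2ENET( d->workaround );
+ *     }
+ */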
+<br>
+/* Chip register configuration values */<br>
+#define        MVETH_PORT_CONFIG_VAL                   (0                              \<br>
+                       | MV643XX_ETH_DFLT_RX_Q(0)                                      \<br>
+                       | MV643XX_ETH_DFLT_RX_ARP_Q(0)                          \<br>
+                       | MV643XX_ETH_DFLT_RX_TCP_Q(0)                          \<br>
+                       | MV643XX_ETH_DFLT_RX_UDP_Q(0)                          \<br>
+                       | MV643XX_ETH_DFLT_RX_BPDU_Q(0)                         \<br>
+                       )<br>
+<br>
+<br>
+#define        MVETH_PORT_XTEND_CONFIG_VAL             0<br>
+<br>
+#ifdef OLDCONFIGVAL<br>
+#define        MVETH_SERIAL_CTRL_CONFIG_VAL    (0                              \<br>
+                        | MV643XX_ETH_FORCE_LINK_PASS                          \<br>
+                        | MV643XX_ETH_DISABLE_AUTO_NEG_FOR_FLOWCTL     \<br>
+                        | MV643XX_ETH_ADVERTISE_SYMMETRIC_FLOWCTL      \<br>
+                        | MV643XX_ETH_BIT9_UNKNOWN                                     \<br>
+                        | MV643XX_ETH_FORCE_LINK_FAIL_DISABLE          \<br>
+                        | MV643XX_ETH_SC_MAX_RX_1552                           \<br>
+                        | MV643XX_ETH_SET_FULL_DUPLEX                          \<br>
+                        | MV643XX_ETH_ENBL_FLOWCTL_TX_RX_IN_FD         \<br>
+                        )<br>
+#endif<br>
+/* If we enable autoneg (duplex, speed, ...) then it seems<br>
+ * that the chip automatically updates link settings<br>
+ * (correct link settings are reflected in PORT_STATUS_R).<br>
+ * However, when we disable aneg in the PHY then things<br>
+ * can get messed up and the port doesn't work anymore.<br>
+ * => we follow the linux driver in disabling all aneg<br>
+ * in the serial config reg. and manually updating the<br>
+ * speed & duplex bits when the phy link status changes.<br>
+ * FIXME: don't know what to do about pause/flow-ctrl.<br>
+ * It is best to just use ANEG anyway!!!
+ */<br>
+#define        MVETH_SERIAL_CTRL_CONFIG_VAL    (0                              \<br>
+                        | MV643XX_ETH_DISABLE_AUTO_NEG_FOR_DUPLEX      \<br>
+                        | MV643XX_ETH_DISABLE_AUTO_NEG_FOR_FLOWCTL     \<br>
+                        | MV643XX_ETH_ADVERTISE_SYMMETRIC_FLOWCTL      \<br>
+                        | MV643XX_ETH_BIT9_UNKNOWN                                     \<br>
+                        | MV643XX_ETH_FORCE_LINK_FAIL_DISABLE          \<br>
+                        | MV643XX_ETH_DISABLE_AUTO_NEG_SPEED_GMII      \<br>
+                        | MV643XX_ETH_SC_MAX_RX_1552                           \<br>
+                        )<br>
+<br>
+#define        MVETH_SERIAL_CTRL_CONFIG_MSK    (0                              \<br>
+                        | MV643XX_ETH_SERIAL_PORT_ENBL                         \<br>
+                        | MV643XX_ETH_FORCE_LINK_PASS                          \<br>
+                        | MV643XX_ETH_SC_MAX_RX_MASK                           \<br>
+                        )<br>
+<br>
+<br>
+#ifdef __PPC__<br>
+#define MVETH_SDMA_CONFIG_VAL                  (0                              \<br>
+                       | MV643XX_ETH_RX_BURST_SZ_4_64BIT                       \<br>
+                       | MV643XX_ETH_TX_BURST_SZ_4_64BIT                       \<br>
+                       )<br>
+#else<br>
+#define MVETH_SDMA_CONFIG_VAL                  (0                              \<br>
+                       | MV643XX_ETH_RX_BURST_SZ_16_64BIT                      \<br>
+                       | MV643XX_ETH_TX_BURST_SZ_16_64BIT                      \<br>
+                       )<br>
+#endif<br>
+<br>
+/* minimal frame size we accept */<br>
+#define MVETH_MIN_FRAMSZ_CONFIG_VAL    40<br>
+<br>
+/* END OF CONFIGURABLE SECTION */<br>
+<br>
+/*<br>
+ * Here's stuff I learned about this chip:<br>
+ *<br>
+ *<br>
+ * RX interrupt flags:<br>
+ *<br>
+ * broadcast packet RX: 0x00000005<br>
+ *           last buf:  0x00000c05<br>
+ *           overrun:   0x00000c00           <br>
+ * unicast   packet RX: 0x00000005<br>
+ * bad CRC received:    0x00000005<br>
+ *<br>
+ * clearing 0x00000004 -> clears 0x00000001<br>
+ * clearing 0x00000400 -> clears 0x00000800<br>
+ *<br>
+ * --> 0x0801 are probably some sort of summary bits.<br>
+ *<br>
+ * TX interrupt flags:<br>
+ *<br>
+ * broadcast packet in 1 buf: xcause: 0x00000001 (cause 0x00080000)<br>
+ *        into disconn. link:             "                 "<br>
+ *<br>
+ * in some cases, I observed  xcause: 0x00000101 (reason for 0x100 unknown<br>
+ * but the linux driver accepts it also).<br>
+ *<br>
+ *<br>
+ * Here a few more ugly things about this piece of hardware I learned<br>
+ * (painfully, painfully; spending many many hours & nights :-()<br>
+ *<br>
+ * a) Especially in the case of 'chained' descriptors, the DMA keeps<br>
+ *    clobbering 'cmd_sts' long after it cleared the OWNership flag!!!<br>
+ *    Only after the whole chain is processed (OWN cleared on the
+ *    last descriptor) is it safe to change cmd_sts.
+ *    However, in the case of hardware snooping I found that the
+ *    last descriptor in the chain has its cmd_sts still clobbered *after*
+ *    checking ownership! I.e.,
+ *        if ( ! OWN & cmd_sts ) {<br>
+ *            cmd_sts = 0;<br>
+ *        }<br>
+ *    --> sometimes, cmd_sts is STILL != 0 here<br>
+ *<br>
+ * b) Sometimes, the OWNership flag is *not cleared*.  <br>
+ * <br>
+ * c) Weird things happen if the chip finds a descriptor with 'OWN'<br>
+ *    still set (i.e., not properly loaded), i.e., corrupted packets<br>
+ *    are sent [with OK checksum since the chip calculates it]. <br>
+ *<br>
+ * Combine a+b+c and we end up with a real mess.<br>
+ *<br>
+ * The fact that the chip doesn't reliably reset OWN and that OTOH,<br>
+ * it can't be reliably reset by the driver and still, the chip needs<br>
+ * it for proper communication doesn't make things easy...<br>
+ *<br>
+ * Here the basic workarounds:<br>
+ *<br>
+ *     - In addition to checking OWN, the scavenger compares the "currently
+ *       served desc" register to the descriptor it tries to recover and<br>
+ *       ignores OWN if they do not match. Hope this is OK.<br>
+ *       Otherwise, we could scan the list of used descriptors and proceed<br>
+ *       recycling descriptors if we find a !OWNed one behind the target...<br>
+ *<br>
+ *     - Always keep an empty slot around to mark the end of the list of<br>
+ *       jobs. The driver clears the descriptor ahead when enqueueing a new<br>
+ *       packet.<br>
+ */<br>
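+/* Illustrative sketch (assumption; the actual scavenger is implemented further
+ * down): before recycling a TX descriptor the driver cross-checks the chip's
+ * "currently served descriptor" register so that a stale OWN bit on the last
+ * descriptor of a chain is not trusted blindly:
+ *
+ *     cur = MV_READ( MV643XX_ETH_CURRENT_SERVED_TX_DESC(mp->port_num) );
+ *     if ( (d->cmd_sts & TDESC_DMA_OWNED) && cur == CPUADDR2ENET(d) )
+ *         return;       // really still owned by the DMA; stop scavenging
+ *     // otherwise OWN is considered stale and 'd' may be recycled
+ */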
+<br>
+#define DRVNAME                        "mve"<br>
+#define MAX_NUM_SLOTS  3<br>
+<br>
+#if MV643XXETH_NUM_DRIVER_SLOTS > MAX_NUM_SLOTS<br>
+#error "mv643xxeth: only MAX_NUM_SLOTS supported"<br>
+#endif<br>
+<br>
+#ifdef NDEBUG<br>
+#error "Driver uses assert() statements with side-effects; MUST NOT define NDEBUG"<br>
+#endif<br>
+<br>
+#ifdef MVETH_DEBUG<br>
+#define STATIC<br>
+#else<br>
+#define STATIC static<br>
+#endif<br>
+<br>
+#define TX_AVAILABLE_RING_SIZE(mp)             ((mp)->xbuf_count - (TX_NUM_TAG_SLOTS))<br>
+<br>
+/* Macros for ring alignment; proper alignment is a hardware requirement. */
+<br>
+#ifdef ENABLE_HW_SNOOPING<br>
+<br>
+#define RING_ALIGNMENT                         16<br>
+/* rx buffers must be 64-bit aligned (chip requirement) */<br>
+#define RX_BUF_ALIGNMENT                       8<br>
+<br>
+#else /* ENABLE_HW_SNOOPING */<br>
+<br>
+/* Software cache management */<br>
+<br>
+#ifndef __PPC__<br>
+#error "Don't know how to deal with the cache on this CPU architecture"
+#endif<br>
+<br>
+/* Ring entries are 32 bytes; coherency-critical chunks are 16 -> software coherency<br>
+ * management works for cache line sizes of 16 and 32 bytes only. If the line size<br>
+ * is bigger, the descriptors could be padded...<br>
+ */<br>
+#if    PPC_CACHE_ALIGNMENT != 16 && PPC_CACHE_ALIGNMENT != 32
+#error "Cache line size must be 16 or 32"<br>
+#else<br>
+#define RING_ALIGNMENT                         PPC_CACHE_ALIGNMENT<br>
+#define RX_BUF_ALIGNMENT                       PPC_CACHE_ALIGNMENT<br>
+#endif<br>
+<br>
+#endif /* ENABLE_HW_SNOOPING */<br>
+<br>
+<br>
+/* HELPER MACROS */<br>
+<br>
+/* Align 'b' up to alignment 'a' ('a' must be a power of two) */
+#define MV643XX_ALIGN(b, a)    ((((uint32_t)(b)) + (a)-1) & (~((a)-1)))<br>
+<br>
+#define NOOP()                 do {} while(0)<br>
+<br>
+/* Function like macros */<br>
+#define MV_READ(off) \<br>
+               ld_le32((volatile uint32_t *)(BSP_MV64x60_BASE + (off)))<br>
+#define MV_WRITE(off, data)            \<br>
+               st_le32((volatile uint32_t *)(BSP_MV64x60_BASE + (off)), ((unsigned)data))<br>
+<br>
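+/* Illustrative usage sketch (not part of the driver proper): all chip
+ * registers are accessed through these byte-swapping macros, e.g. polling
+ * the port status register (defined further below) of port 0:
+ *
+ *     uint32_t sts = MV_READ( MV643XX_ETH_PORT_STATUS_R(0) );
+ *     int      up  = !! (sts & MV643XX_ETH_PORT_STATUS_LINK_UP);
+ */
+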
+<br>
+/* ENET window mapped 1:1 to CPU addresses by our BSP/MotLoad<br>
+ * -- if this is changed, we should think about caching the 'next' and 'buf' pointers.<br>
+ */<br>
+#define CPUADDR2ENET(a) ((Dma_addr_t)(a))<br>
+#define ENET2CPUADDR(a) (a)<br>
+<br>
+#if 1  /* Whether to automatically try to reclaim descriptors when enqueueing new packets */<br>
+#define MVETH_CLEAN_ON_SEND(mp) (BSP_mve_swipe_tx(mp))<br>
+#else<br>
+#define MVETH_CLEAN_ON_SEND(mp) (-1)<br>
+#endif<br>
+<br>
+#define NEXT_TXD(d)    (d)->next<br>
+#define NEXT_RXD(d)    (d)->next<br>
+<br>
+/* REGISTER AND DESCRIPTOR OFFSET AND BIT DEFINITIONS */<br>
+<br>
+/* Descriptor Definitions */<br>
+/* Rx descriptor */<br>
+#define RDESC_ERROR                                                                    (1<< 0) /* Error summary    */<br>
+<br>
+/* Error code (bit 1&2) is only valid if summary bit is set */<br>
+#define RDESC_CRC_ERROR                                                                (    1)<br>
+#define RDESC_OVERRUN_ERROR                                                    (    3)<br>
+#define RDESC_MAX_FRAMELENGTH_ERROR                                    (    5)<br>
+#define RDESC_RESOURCE_ERROR                                           (    7)<br>
+<br>
+#define RDESC_LAST                                                                     (1<<26) /* Last Descriptor   */<br>
+#define RDESC_FRST                                                                     (1<<27) /* First Descriptor  */<br>
+#define RDESC_INT_ENA                                                          (1<<29) /* Enable Interrupts */<br>
+#define RDESC_DMA_OWNED                                                                (1<<31)<br>
+<br>
+/* Tx descriptor */<br>
+#define TDESC_ERROR                                                                    (1<< 0) /* Error summary     */<br>
+#define TDESC_ZERO_PAD                                                         (1<<19)<br>
+#define TDESC_LAST                                                                     (1<<20) /* Last Descriptor   */<br>
+#define TDESC_FRST                                                                     (1<<21) /* First Descriptor  */<br>
+#define TDESC_GEN_CRC                                                          (1<<22)<br>
+#define TDESC_INT_ENA                                                          (1<<23) /* Enable Interrupts */<br>
+#define TDESC_DMA_OWNED                                                                (1<<31)<br>
+<br>
+<br>
+<br>
+/* Register Definitions */<br>
+#define MV643XX_ETH_PHY_ADDR_R                                         (0x2000)<br>
+#define MV643XX_ETH_SMI_R                                                      (0x2004)<br>
+#define MV643XX_ETH_SMI_BUSY                                           (1<<28)<br>
+#define MV643XX_ETH_SMI_VALID                                          (1<<27)<br>
+#define MV643XX_ETH_SMI_OP_WR                                          (0<<26)<br>
+#define MV643XX_ETH_SMI_OP_RD                                          (1<<26)<br>
+<br>
+#define MV643XX_ETH_TRANSMIT_QUEUE_COMMAND_R(port)     (0x2448 + ((port)<<10))<br>
+#define MV643XX_ETH_TX_START(queue)                                    (0x0001<<(queue))<br>
+#define MV643XX_ETH_TX_STOP(queue)                                     (0x0100<<(queue))<br>
+#define MV643XX_ETH_TX_START_M(queues)                         ((queues)&0xff)<br>
+#define MV643XX_ETH_TX_STOP_M(queues)                          (((queues)&0xff)<<8)<br>
+#define MV643XX_ETH_TX_STOP_ALL                                                (0xff00)<br>
+#define MV643XX_ETH_TX_ANY_RUNNING                                     (0x00ff)<br>
+<br>
+#define MV643XX_ETH_RECEIVE_QUEUE_COMMAND_R(port)      (0x2680 + ((port)<<10))<br>
+#define MV643XX_ETH_RX_START(queue)                                    (0x0001<<(queue))<br>
+#define MV643XX_ETH_RX_STOP(queue)                                     (0x0100<<(queue))<br>
+#define MV643XX_ETH_RX_STOP_ALL                                                (0xff00)<br>
+#define MV643XX_ETH_RX_ANY_RUNNING                                     (0x00ff)<br>
+<br>
+#define MV643XX_ETH_CURRENT_SERVED_TX_DESC(port)       (0x2684 + ((port)<<10))<br>
+<br>
+/* The chip puts the ethernet header at offset 2 into the buffer so
+ * that the (IP) payload ends up 32-bit aligned.
+ */
+#define ETH_RX_OFFSET                                                          2<br>
+#define ETH_CRC_LEN                                                                    4       /* strip FCS at end of packet */<br>
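+/* Illustrative sketch (assumption, not driver code): when handing a received
+ * frame to the stack, the usable data starts ETH_RX_OFFSET bytes into the
+ * buffer and the trailing FCS is stripped, roughly:
+ *
+ *     char *pkt = (char*)d->u_buf + ETH_RX_OFFSET;
+ *     int   len = d->byte_cnt - ETH_RX_OFFSET - ETH_CRC_LEN;
+ */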
+<br>
+<br>
+#define MV643XX_ETH_INTERRUPT_CAUSE_R(port)                    (0x2460 + ((port)<<10))<br>
+/* not fully understood; RX seems to raise 0x0005 or 0x0c05 if last buffer is filled and 0x0c00<br>
+ * if there are no buffers<br>
+ */<br>
+#define MV643XX_ETH_ALL_IRQS                                           (0x0007ffff)<br>
+#define MV643XX_ETH_KNOWN_IRQS                                         (0x00000c05)<br>
+#define MV643XX_ETH_IRQ_EXT_ENA                                                (1<<1)<br>
+#define MV643XX_ETH_IRQ_RX_DONE                                                (1<<2)<br>
+#define MV643XX_ETH_IRQ_RX_NO_DESC                                     (1<<10)<br>
+<br>
+#define MV643XX_ETH_INTERRUPT_EXTEND_CAUSE_R(port)     (0x2464 + ((port)<<10))<br>
+/* not fully understood; TX seems to raise 0x0001 and link change is 0x00010000<br>
+ * if there are no buffers<br>
+ */<br>
+#define MV643XX_ETH_ALL_EXT_IRQS                                       (0x0011ffff)<br>
+#define MV643XX_ETH_KNOWN_EXT_IRQS                                     (0x00010101)<br>
+#define MV643XX_ETH_EXT_IRQ_TX_DONE                                    (1<<0)<br>
+#define MV643XX_ETH_EXT_IRQ_LINK_CHG                           (1<<16)<br>
+#define MV643XX_ETH_INTERRUPT_ENBL_R(port)                     (0x2468 + ((port)<<10))<br>
+#define MV643XX_ETH_INTERRUPT_EXTEND_ENBL_R(port)      (0x246c + ((port)<<10))<br>
+<br>
+/* port configuration */<br>
+#define MV643XX_ETH_PORT_CONFIG_R(port)                                (0x2400 + ((port)<<10))<br>
+#define        MV643XX_ETH_UNICAST_PROMISC_MODE                        (1<<0)<br>
+#define        MV643XX_ETH_DFLT_RX_Q(q)                                        ((q)<<1)<br>
+#define        MV643XX_ETH_DFLT_RX_ARP_Q(q)                            ((q)<<4)<br>
+#define MV643XX_ETH_REJ_BCAST_IF_NOT_IP_OR_ARP         (1<<7)<br>
+#define MV643XX_ETH_REJ_BCAST_IF_IP                                    (1<<8)<br>
+#define MV643XX_ETH_REJ_BCAST_IF_ARP                           (1<<9)<br>
+#define MV643XX_ETH_TX_AM_NO_UPDATE_ERR_SUMMARY                (1<<12)<br>
+#define MV643XX_ETH_CAPTURE_TCP_FRAMES_ENBL                    (1<<14)<br>
+#define MV643XX_ETH_CAPTURE_UDP_FRAMES_ENBL                    (1<<15)<br>
+#define        MV643XX_ETH_DFLT_RX_TCP_Q(q)                            ((q)<<16)<br>
+#define        MV643XX_ETH_DFLT_RX_UDP_Q(q)                            ((q)<<19)<br>
+#define        MV643XX_ETH_DFLT_RX_BPDU_Q(q)                           ((q)<<22)<br>
+<br>
+<br>
+<br>
+#define MV643XX_ETH_PORT_CONFIG_XTEND_R(port)          (0x2404 + ((port)<<10))<br>
+#define MV643XX_ETH_CLASSIFY_ENBL                                      (1<<0)<br>
+#define MV643XX_ETH_SPAN_BPDU_PACKETS_AS_NORMAL                (0<<1)<br>
+#define MV643XX_ETH_SPAN_BPDU_PACKETS_2_Q7                     (1<<1)<br>
+#define MV643XX_ETH_PARTITION_DISBL                                    (0<<2)<br>
+#define MV643XX_ETH_PARTITION_ENBL                                     (1<<2)<br>
+<br>
+#define MV643XX_ETH_SDMA_CONFIG_R(port)                                (0x241c + ((port)<<10))<br>
+#define MV643XX_ETH_SDMA_RIFB                                          (1<<0)<br>
+#define MV643XX_ETH_RX_BURST_SZ_1_64BIT                                (0<<1)<br>
+#define MV643XX_ETH_RX_BURST_SZ_2_64BIT                                (1<<1)<br>
+#define MV643XX_ETH_RX_BURST_SZ_4_64BIT                                (2<<1)<br>
+#define MV643XX_ETH_RX_BURST_SZ_8_64BIT                                (3<<1)<br>
+#define MV643XX_ETH_RX_BURST_SZ_16_64BIT                       (4<<1)<br>
+#define MV643XX_ETH_SMDA_BLM_RX_NO_SWAP                                (1<<4)<br>
+#define MV643XX_ETH_SMDA_BLM_TX_NO_SWAP                                (1<<5)<br>
+#define MV643XX_ETH_SMDA_DESC_BYTE_SWAP                                (1<<6)<br>
+#define MV643XX_ETH_TX_BURST_SZ_1_64BIT                                (0<<22)<br>
+#define MV643XX_ETH_TX_BURST_SZ_2_64BIT                                (1<<22)<br>
+#define MV643XX_ETH_TX_BURST_SZ_4_64BIT                                (2<<22)<br>
+#define MV643XX_ETH_TX_BURST_SZ_8_64BIT                                (3<<22)<br>
+#define MV643XX_ETH_TX_BURST_SZ_16_64BIT                       (4<<22)<br>
+<br>
+#define        MV643XX_ETH_RX_MIN_FRAME_SIZE_R(port)           (0x247c + ((port)<<10))<br>
+<br>
+<br>
+#define MV643XX_ETH_SERIAL_CONTROL_R(port)                     (0x243c + ((port)<<10))<br>
+#define MV643XX_ETH_SERIAL_PORT_ENBL                           (1<<0)  /* Enable serial port */<br>
+#define MV643XX_ETH_FORCE_LINK_PASS                                    (1<<1)<br>
+#define MV643XX_ETH_DISABLE_AUTO_NEG_FOR_DUPLEX                (1<<2)<br>
+#define MV643XX_ETH_DISABLE_AUTO_NEG_FOR_FLOWCTL       (1<<3)<br>
+#define MV643XX_ETH_ADVERTISE_SYMMETRIC_FLOWCTL                (1<<4)<br>
+#define MV643XX_ETH_FORCE_FC_MODE_TX_PAUSE_DIS         (1<<5)<br>
+#define MV643XX_ETH_FORCE_BP_MODE_JAM_TX                       (1<<7)<br>
+#define MV643XX_ETH_FORCE_BP_MODE_JAM_TX_ON_RX_ERR     (1<<8)<br>
+#define MV643XX_ETH_BIT9_UNKNOWN                                       (1<<9)  /* unknown purpose; linux sets this */<br>
+#define MV643XX_ETH_FORCE_LINK_FAIL_DISABLE                    (1<<10)<br>
+#define MV643XX_ETH_RETRANSMIT_FOREVER                         (1<<11) /* limit to 16 attempts if clear    */<br>
+#define MV643XX_ETH_DISABLE_AUTO_NEG_SPEED_GMII                (1<<13)<br>
+#define MV643XX_ETH_DTE_ADV_1                                          (1<<14)<br>
+#define MV643XX_ETH_AUTO_NEG_BYPASS_ENBL                       (1<<15)<br>
+#define MV643XX_ETH_RESTART_AUTO_NEG                           (1<<16)<br>
+#define MV643XX_ETH_SC_MAX_RX_1518                                     (0<<17) /* Limit RX packet size */<br>
+#define MV643XX_ETH_SC_MAX_RX_1522                                     (1<<17) /* Limit RX packet size */<br>
+#define MV643XX_ETH_SC_MAX_RX_1552                                     (2<<17) /* Limit RX packet size */<br>
+#define MV643XX_ETH_SC_MAX_RX_9022                                     (3<<17) /* Limit RX packet size */<br>
+#define MV643XX_ETH_SC_MAX_RX_9192                                     (4<<17) /* Limit RX packet size */<br>
+#define MV643XX_ETH_SC_MAX_RX_9700                                     (5<<17) /* Limit RX packet size */<br>
+#define MV643XX_ETH_SC_MAX_RX_MASK                                     (7<<17) /* bitmask */<br>
+#define MV643XX_ETH_SET_EXT_LOOPBACK                           (1<<20)<br>
+#define MV643XX_ETH_SET_FULL_DUPLEX                                    (1<<21)<br>
+#define MV643XX_ETH_ENBL_FLOWCTL_TX_RX_IN_FD           (1<<22) /* enable flow ctrl on rx and tx in full-duplex */<br>
+#define MV643XX_ETH_SET_GMII_SPEED_1000                                (1<<23) /* 10/100 if clear */<br>
+#define MV643XX_ETH_SET_MII_SPEED_100                          (1<<24) /* 10 if clear     */<br>
+<br>
+#define MV643XX_ETH_PORT_STATUS_R(port)                                (0x2444 + ((port)<<10))<br>
+<br>
+#define MV643XX_ETH_PORT_STATUS_MODE_10_BIT                    (1<<0)<br>
+#define MV643XX_ETH_PORT_STATUS_LINK_UP                                (1<<1)<br>
+#define MV643XX_ETH_PORT_STATUS_FDX                                    (1<<2)<br>
+#define MV643XX_ETH_PORT_STATUS_FC                                     (1<<3)<br>
+#define MV643XX_ETH_PORT_STATUS_1000                           (1<<4)<br>
+#define MV643XX_ETH_PORT_STATUS_100                                    (1<<5)<br>
+/* PSR bit 6 unknown */<br>
+#define MV643XX_ETH_PORT_STATUS_TX_IN_PROGRESS         (1<<7)<br>
+#define MV643XX_ETH_PORT_STATUS_ANEG_BYPASSED          (1<<8)<br>
+#define MV643XX_ETH_PORT_STATUS_PARTITION                      (1<<9)<br>
+#define MV643XX_ETH_PORT_STATUS_TX_FIFO_EMPTY          (1<<10)<br>
+<br>
+#define MV643XX_ETH_MIB_COUNTERS(port)                         (0x3000 + ((port)<<7))<br>
+#define MV643XX_ETH_NUM_MIB_COUNTERS                           32<br>
+<br>
+#define MV643XX_ETH_MIB_GOOD_OCTS_RCVD_LO                      (0)<br>
+#define MV643XX_ETH_MIB_GOOD_OCTS_RCVD_HI                      (1<<2)<br>
+#define MV643XX_ETH_MIB_BAD_OCTS_RCVD                          (2<<2)<br>
+#define MV643XX_ETH_MIB_INTERNAL_MAC_TX_ERR                    (3<<2)<br>
+#define MV643XX_ETH_MIB_GOOD_FRAMES_RCVD                       (4<<2)<br>
+#define MV643XX_ETH_MIB_BAD_FRAMES_RCVD                                (5<<2)<br>
+#define MV643XX_ETH_MIB_BCAST_FRAMES_RCVD                      (6<<2)<br>
+#define MV643XX_ETH_MIB_MCAST_FRAMES_RCVD                      (7<<2)<br>
+#define MV643XX_ETH_MIB_FRAMES_64_OCTS                         (8<<2)<br>
+#define MV643XX_ETH_MIB_FRAMES_65_127_OCTS                     (9<<2)<br>
+#define MV643XX_ETH_MIB_FRAMES_128_255_OCTS                    (10<<2)<br>
+#define MV643XX_ETH_MIB_FRAMES_256_511_OCTS                    (11<<2)<br>
+#define MV643XX_ETH_MIB_FRAMES_512_1023_OCTS           (12<<2)<br>
+#define MV643XX_ETH_MIB_FRAMES_1024_MAX_OCTS           (13<<2)<br>
+#define MV643XX_ETH_MIB_GOOD_OCTS_SENT_LO                      (14<<2)<br>
+#define MV643XX_ETH_MIB_GOOD_OCTS_SENT_HI                      (15<<2)<br>
+#define MV643XX_ETH_MIB_GOOD_FRAMES_SENT                       (16<<2)<br>
+#define MV643XX_ETH_MIB_EXCESSIVE_COLL                         (17<<2)<br>
+#define MV643XX_ETH_MIB_MCAST_FRAMES_SENT                      (18<<2)<br>
+#define MV643XX_ETH_MIB_BCAST_FRAMES_SENT                      (19<<2)<br>
+#define MV643XX_ETH_MIB_UNREC_MAC_CTRL_RCVD                    (20<<2)<br>
+#define MV643XX_ETH_MIB_FC_SENT                                                (21<<2)<br>
+#define MV643XX_ETH_MIB_GOOD_FC_RCVD                           (22<<2)<br>
+#define MV643XX_ETH_MIB_BAD_FC_RCVD                                    (23<<2)<br>
+#define MV643XX_ETH_MIB_UNDERSIZE_RCVD                         (24<<2)<br>
+#define MV643XX_ETH_MIB_FRAGMENTS_RCVD                         (25<<2)<br>
+#define MV643XX_ETH_MIB_OVERSIZE_RCVD                          (26<<2)<br>
+#define MV643XX_ETH_MIB_JABBER_RCVD                                    (27<<2)<br>
+#define MV643XX_ETH_MIB_MAC_RX_ERR                                     (28<<2)<br>
+#define MV643XX_ETH_MIB_BAD_CRC_EVENT                          (29<<2)<br>
+#define MV643XX_ETH_MIB_COLL                                           (30<<2)<br>
+#define MV643XX_ETH_MIB_LATE_COLL                                      (31<<2)<br>
+<br>
+#define MV643XX_ETH_DA_FILTER_SPECL_MCAST_TBL(port) (0x3400+((port)<<10))<br>
+#define MV643XX_ETH_DA_FILTER_OTHER_MCAST_TBL(port) (0x3500+((port)<<10))<br>
+#define MV643XX_ETH_DA_FILTER_UNICAST_TBL(port)                (0x3600+((port)<<10))<br>
+#define MV643XX_ETH_NUM_MCAST_ENTRIES                          64<br>
+#define MV643XX_ETH_NUM_UNICAST_ENTRIES                                4<br>
+<br>
+#define MV643XX_ETH_BAR_0                                                      (0x2200)<br>
+#define MV643XX_ETH_SIZE_R_0                                           (0x2204)<br>
+#define MV643XX_ETH_BAR_1                                                      (0x2208)<br>
+#define MV643XX_ETH_SIZE_R_1                                           (0x220c)<br>
+#define MV643XX_ETH_BAR_2                                                      (0x2210)<br>
+#define MV643XX_ETH_SIZE_R_2                                           (0x2214)<br>
+#define MV643XX_ETH_BAR_3                                                      (0x2218)<br>
+#define MV643XX_ETH_SIZE_R_3                                           (0x221c)<br>
+#define MV643XX_ETH_BAR_4                                                      (0x2220)<br>
+#define MV643XX_ETH_SIZE_R_4                                           (0x2224)<br>
+#define MV643XX_ETH_BAR_5                                                      (0x2228)<br>
+#define MV643XX_ETH_SIZE_R_5                                           (0x222c)<br>
+#define MV643XX_ETH_NUM_BARS                                           6<br>
+<br>
+/* Bits in the BAR reg to program cache snooping */<br>
+#define MV64360_ENET2MEM_SNOOP_NONE 0x0000<br>
+#define MV64360_ENET2MEM_SNOOP_WT      0x1000<br>
+#define MV64360_ENET2MEM_SNOOP_WB      0x2000<br>
+#define MV64360_ENET2MEM_SNOOP_MSK     0x3000<br>
+<br>
+<br>
+#define MV643XX_ETH_BAR_ENBL_R                                         (0x2290)<br>
+#define MV643XX_ETH_BAR_DISABLE(bar)                           (1<<(bar))<br>
+#define MV643XX_ETH_BAR_DISBL_ALL                                      0x3f<br>
+<br>
+#define MV643XX_ETH_RX_Q0_CURRENT_DESC_PTR(port)       (0x260c+((port)<<10))<br>
+#define MV643XX_ETH_RX_Q1_CURRENT_DESC_PTR(port)       (0x261c+((port)<<10))<br>
+#define MV643XX_ETH_RX_Q2_CURRENT_DESC_PTR(port)       (0x262c+((port)<<10))<br>
+#define MV643XX_ETH_RX_Q3_CURRENT_DESC_PTR(port)       (0x263c+((port)<<10))<br>
+#define MV643XX_ETH_RX_Q4_CURRENT_DESC_PTR(port)       (0x264c+((port)<<10))<br>
+#define MV643XX_ETH_RX_Q5_CURRENT_DESC_PTR(port)       (0x265c+((port)<<10))<br>
+#define MV643XX_ETH_RX_Q6_CURRENT_DESC_PTR(port)       (0x266c+((port)<<10))<br>
+#define MV643XX_ETH_RX_Q7_CURRENT_DESC_PTR(port)       (0x267c+((port)<<10))<br>
+<br>
+#define MV643XX_ETH_TX_Q0_CURRENT_DESC_PTR(port)       (0x26c0+((port)<<10))<br>
+#define MV643XX_ETH_TX_Q1_CURRENT_DESC_PTR(port)       (0x26c4+((port)<<10))<br>
+#define MV643XX_ETH_TX_Q2_CURRENT_DESC_PTR(port)       (0x26c8+((port)<<10))<br>
+#define MV643XX_ETH_TX_Q3_CURRENT_DESC_PTR(port)       (0x26cc+((port)<<10))<br>
+#define MV643XX_ETH_TX_Q4_CURRENT_DESC_PTR(port)       (0x26d0+((port)<<10))<br>
+#define MV643XX_ETH_TX_Q5_CURRENT_DESC_PTR(port)       (0x26d4+((port)<<10))<br>
+#define MV643XX_ETH_TX_Q6_CURRENT_DESC_PTR(port)       (0x26d8+((port)<<10))<br>
+#define MV643XX_ETH_TX_Q7_CURRENT_DESC_PTR(port)       (0x26dc+((port)<<10))<br>
+<br>
+#define MV643XX_ETH_MAC_ADDR_LO(port)                          (0x2414+((port)<<10))<br>
+#define MV643XX_ETH_MAC_ADDR_HI(port)                          (0x2418+((port)<<10))<br>
+<br>
+/* TYPE DEFINITIONS */<br>
+<br>
+/* just to make the purpose explicit; vars of this<br>
+ * type may need CPU-dependent address translation,<br>
+ * endian conversion etc.<br>
+ */<br>
+typedef uint32_t Dma_addr_t;<br>
+<br>
+typedef volatile struct mveth_rx_desc {<br>
+#ifndef __BIG_ENDIAN__<br>
+#error "descriptor declaration not implemented for little endian machines"<br>
+#endif<br>
+       uint16_t        byte_cnt;<br>
+       uint16_t        buf_size;<br>
+       uint32_t        cmd_sts;                                        /* control and status */<br>
+       Dma_addr_t      next_desc_ptr;                          /* next descriptor (as seen from DMA) */<br>
+       Dma_addr_t      buf_ptr;<br>
+       /* fields below here are not used by the chip */<br>
+       void            *u_buf;                                         /* user buffer */<br>
+       volatile struct mveth_rx_desc *next;    /* next descriptor (CPU address; next_desc_ptr is a DMA address) */<br>
+       uint32_t        pad[2];<br>
+} __attribute__(( aligned(RING_ALIGNMENT) )) MvEthRxDescRec, *MvEthRxDesc;<br>
+<br>
+typedef volatile struct mveth_tx_desc {<br>
+#ifndef __BIG_ENDIAN__<br>
+#error "descriptor declaration not implemented for little endian machines"<br>
+#endif<br>
+       uint16_t        byte_cnt;<br>
+       uint16_t        l4i_chk;<br>
+       uint32_t        cmd_sts;                                        /* control and status */<br>
+       Dma_addr_t      next_desc_ptr;                          /* next descriptor (as seen from DMA) */<br>
+       Dma_addr_t      buf_ptr;<br>
+       /* fields below here are not used by the chip */<br>
+       uint32_t        workaround[2];                          /* use this space to work around the 8byte problem (is this real?) */<br>
+       void            *u_buf;                                         /* user buffer */<br>
+       volatile struct mveth_tx_desc *next;    /* next descriptor (CPU address; next_desc_ptr is a DMA address)   */<br>
+} __attribute__(( aligned(RING_ALIGNMENT) )) MvEthTxDescRec, *MvEthTxDesc;<br>
+<br>
+/* Assume there are never more than 64k aliasing entries */
+typedef uint16_t Mc_Refcnt[MV643XX_ETH_NUM_MCAST_ENTRIES*4];<br>
+<br>
+/* driver private data and bsdnet interface structure */<br>
+struct mveth_private {<br>
+       MvEthRxDesc             rx_ring;                                        /* pointers to aligned ring area             */<br>
+       MvEthTxDesc             tx_ring;                                        /* pointers to aligned ring area             */<br>
+       MvEthRxDesc             ring_area;                                      /* allocated ring area                       */<br>
+       int                             rbuf_count, xbuf_count;         /* saved ring sizes from ifconfig            */<br>
+       int                             port_num;<br>
+       int                             phy;<br>
+       MvEthRxDesc             d_rx_t;                                         /* tail of the RX ring; next received packet */<br>
+       MvEthTxDesc             d_tx_t, d_tx_h;                         <br>
+       uint32_t                rx_desc_dma, tx_desc_dma;       /* ring address as seen by DMA; (1:1 on this BSP) */<br>
+       int                             avail;<br>
+       void            (*isr)(void*);<br>
+       void            *isr_arg;<br>
+       /* Callbacks to handle buffers */<br>
+       void                    (*cleanup_txbuf)(void*, void*, int);    /* callback to cleanup TX buffer */<br>
+       void                    *cleanup_txbuf_arg;<br>
+       void                    *(*alloc_rxbuf)(int *psize, uintptr_t *paddr);  /* allocate RX buffer  */<br>
+       void                    (*consume_rxbuf)(void*, void*,  int);   /* callback to consume RX buffer */<br>
+       void                    *consume_rxbuf_arg;<br>
+       rtems_id        tid;<br>
+       uint32_t                irq_mask;                                       /* IRQs we use                              */<br>
+       uint32_t                xirq_mask;<br>
+    int             promisc;<br>
+       struct          {<br>
+               unsigned                irqs;<br>
+               unsigned                maxchain;<br>
+               unsigned                repack;<br>
+               unsigned                packet;<br>
+               unsigned                odrops;                                 /* no counter in core code                   */<br>
+               struct {<br>
+                       uint64_t        good_octs_rcvd;         /* 64-bit */<br>
+                       uint32_t        bad_octs_rcvd;<br>
+                       uint32_t        internal_mac_tx_err;<br>
+                       uint32_t        good_frames_rcvd;<br>
+                       uint32_t        bad_frames_rcvd;<br>
+                       uint32_t        bcast_frames_rcvd;<br>
+                       uint32_t        mcast_frames_rcvd;<br>
+                       uint32_t        frames_64_octs;<br>
+                       uint32_t        frames_65_127_octs;<br>
+                       uint32_t        frames_128_255_octs;<br>
+                       uint32_t        frames_256_511_octs;<br>
+                       uint32_t        frames_512_1023_octs;<br>
+                       uint32_t        frames_1024_max_octs;<br>
+                       uint64_t        good_octs_sent;         /* 64-bit */<br>
+                       uint32_t        good_frames_sent;<br>
+                       uint32_t        excessive_coll;<br>
+                       uint32_t        mcast_frames_sent;<br>
+                       uint32_t        bcast_frames_sent;<br>
+                       uint32_t        unrec_mac_ctrl_rcvd;<br>
+                       uint32_t        fc_sent;<br>
+                       uint32_t        good_fc_rcvd;<br>
+                       uint32_t        bad_fc_rcvd;<br>
+                       uint32_t        undersize_rcvd;<br>
+                       uint32_t        fragments_rcvd;<br>
+                       uint32_t        oversize_rcvd;<br>
+                       uint32_t        jabber_rcvd;<br>
+                       uint32_t        mac_rx_err;<br>
+                       uint32_t        bad_crc_event;<br>
+                       uint32_t        coll;<br>
+                       uint32_t        late_coll;<br>
+               } mib;<br>
+       }                       stats;<br>
+       struct {<br>
+               Mc_Refcnt       specl, other;<br>
+       }           mc_refcnt;<br>
+};<br>
+<br>
+/* stuff needed for bsdnet support */<br>
+struct mveth_bsdsupp {<br>
+       int                             oif_flags;                                      /* old / cached if_flags */<br>
+};<br>
+<br>
+struct mveth_softc {<br>
+       struct arpcom                   arpcom;<br>
+       struct mveth_bsdsupp    bsd;<br>
+       struct mveth_private    pvt;<br>
+};<br>
+<br>
+/* GLOBAL VARIABLES */<br>
+#ifdef MVETH_DEBUG_TX_DUMP<br>
+int mveth_tx_dump = 0;<br>
+#endif<br>
+<br>
+/* THE array of driver/bsdnet structs */<br>
+<br>
+/* If detaching/module unloading is enabled, the main driver data<br>
+ * structure must remain in memory; hence it must reside in its own<br>
+ * 'dummy' module...<br>
+ */<br>
+#ifdef  MVETH_DETACH_HACK<br>
+extern<br>
+#else<br>
+STATIC<br>
+#endif<br>
+struct mveth_softc theMvEths[MV643XXETH_NUM_DRIVER_SLOTS]<br>
+#ifndef MVETH_DETACH_HACK<br>
+= {{{{0}},}}<br>
+#endif<br>
+;<br>
+<br>
+/* daemon task id */<br>
+STATIC rtems_id        mveth_tid = 0;<br>
+/* register access protection mutex */<br>
+STATIC rtems_id mveth_mtx = 0;<br>
+#define REGLOCK()      do { \<br>
+               if ( RTEMS_SUCCESSFUL != rtems_semaphore_obtain(mveth_mtx, RTEMS_WAIT, RTEMS_NO_TIMEOUT) ) \<br>
+                       rtems_panic(DRVNAME": unable to lock register protection mutex"); \<br>
+               } while (0)<br>
+#define REGUNLOCK()    rtems_semaphore_release(mveth_mtx)<br>
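+/* Hypothetical usage sketch (the real callers appear further below): accesses
+ * to registers shared between the ports, such as the SMI register used for
+ * MII reads/writes, can be bracketed by the protection mutex:
+ *
+ *     REGLOCK();
+ *     v = mveth_mii_read(mp, reg);
+ *     REGUNLOCK();
+ */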
+<br>
+/* Format strings for statistics messages */<br>
+static const char *mibfmt[] = {<br>
+       "  GOOD_OCTS_RCVD:      %"PRIu64"\n",<br>
+       0,<br>
+       "  BAD_OCTS_RCVD:       %"PRIu32"\n",<br>
+       "  INTERNAL_MAC_TX_ERR: %"PRIu32"\n",<br>
+       "  GOOD_FRAMES_RCVD:    %"PRIu32"\n",<br>
+       "  BAD_FRAMES_RCVD:     %"PRIu32"\n",<br>
+       "  BCAST_FRAMES_RCVD:   %"PRIu32"\n",<br>
+       "  MCAST_FRAMES_RCVD:   %"PRIu32"\n",<br>
+       "  FRAMES_64_OCTS:      %"PRIu32"\n",<br>
+       "  FRAMES_65_127_OCTS:  %"PRIu32"\n",<br>
+       "  FRAMES_128_255_OCTS: %"PRIu32"\n",<br>
+       "  FRAMES_256_511_OCTS: %"PRIu32"\n",<br>
+       "  FRAMES_512_1023_OCTS:%"PRIu32"\n",<br>
+       "  FRAMES_1024_MAX_OCTS:%"PRIu32"\n",<br>
+       "  GOOD_OCTS_SENT:      %"PRIu64"\n",<br>
+       0,<br>
+       "  GOOD_FRAMES_SENT:    %"PRIu32"\n",<br>
+       "  EXCESSIVE_COLL:      %"PRIu32"\n",<br>
+       "  MCAST_FRAMES_SENT:   %"PRIu32"\n",<br>
+       "  BCAST_FRAMES_SENT:   %"PRIu32"\n",<br>
+       "  UNREC_MAC_CTRL_RCVD: %"PRIu32"\n",<br>
+       "  FC_SENT:             %"PRIu32"\n",<br>
+       "  GOOD_FC_RCVD:        %"PRIu32"\n",<br>
+       "  BAD_FC_RCVD:         %"PRIu32"\n",<br>
+       "  UNDERSIZE_RCVD:      %"PRIu32"\n",<br>
+       "  FRAGMENTS_RCVD:      %"PRIu32"\n",<br>
+       "  OVERSIZE_RCVD:       %"PRIu32"\n",<br>
+       "  JABBER_RCVD:         %"PRIu32"\n",<br>
+       "  MAC_RX_ERR:          %"PRIu32"\n",<br>
+       "  BAD_CRC_EVENT:       %"PRIu32"\n",<br>
+       "  COLL:                %"PRIu32"\n",<br>
+       "  LATE_COLL:           %"PRIu32"\n",<br>
+};<br>
+<br>
+/* Interrupt Handler Connection */<br>
+<br>
+/* forward decls + implementation for IRQ API funcs */<br>
+<br>
+static void mveth_isr(rtems_irq_hdl_param unit);<br>
+static void mveth_isr_1(rtems_irq_hdl_param unit);<br>
+static void noop(const rtems_irq_connect_data *unused)  {}<br>
+static int  noop1(const rtems_irq_connect_data *unused) { return 0; }<br>
+<br>
+static rtems_irq_connect_data irq_data[MAX_NUM_SLOTS] = {<br>
+       {<br>
+               BSP_IRQ_ETH0,<br>
+               0,<br>
+               (rtems_irq_hdl_param)0,<br>
+               noop,<br>
+               noop,<br>
+               noop1<br>
+       },<br>
+       {<br>
+               BSP_IRQ_ETH1,<br>
+               0,<br>
+               (rtems_irq_hdl_param)1,<br>
+               noop,<br>
+               noop,<br>
+               noop1<br>
+       },<br>
+       {<br>
+               BSP_IRQ_ETH2,<br>
+               0,<br>
+               (rtems_irq_hdl_param)2,<br>
+               noop,<br>
+               noop,<br>
+               noop1<br>
+       },<br>
+};<br>
+<br>
+/* MII Ioctl Interface */<br>
+<br>
+STATIC unsigned<br>
+mveth_mii_read(struct mveth_private *mp, unsigned addr);<br>
+<br>
+STATIC unsigned<br>
+mveth_mii_write(struct mveth_private *mp, unsigned addr, unsigned v);<br>
+<br>
+<br>
+/* mdio / mii interface wrappers for rtems_mii_ioctl API */<br>
+<br>
+static int mveth_mdio_r(int phy, void *uarg, unsigned reg, uint32_t *pval)<br>
+{<br>
+       if ( phy > 1 )<br>
+               return -1;<br>
+<br>
+       *pval = mveth_mii_read(uarg, reg);<br>
+       return 0;<br>
+}<br>
+<br>
+static int mveth_mdio_w(int phy, void *uarg, unsigned reg, uint32_t val)<br>
+{<br>
+       if ( phy > 1 )<br>
+               return -1;<br>
+       mveth_mii_write(uarg, reg, val);<br>
+       return 0;<br>
+}<br>
+<br>
+static struct rtems_mdio_info mveth_mdio = {
+       .mdio_r   = mveth_mdio_r,
+       .mdio_w   = mveth_mdio_w,
+       .has_gmii = 1,
+};
+<br>
+/* LOW LEVEL SUPPORT ROUTINES */<br>
+<br>
+/* Software Cache Coherency */<br>
+#ifndef ENABLE_HW_SNOOPING<br>
+#ifndef __PPC__<br>
+#error "Software cache coherency maintenance is not implemented for your CPU architecture"<br>
+#endif<br>
+<br>
+static inline unsigned INVAL_DESC(volatile void *d)<br>
+{<br>
+typedef const char cache_line[PPC_CACHE_ALIGNMENT];<br>
+       asm volatile("dcbi 0, %1":"=m"(*(cache_line*)d):"r"(d));<br>
+       return (unsigned)d;     /* so this can be used in comma expression */<br>
+}<br>
+<br>
+static inline void FLUSH_DESC(volatile void *d)<br>
+{<br>
+typedef const char cache_line[PPC_CACHE_ALIGNMENT];<br>
+       asm volatile("dcbf 0, %0"::"r"(d),"m"(*(cache_line*)d));<br>
+}<br>
+<br>
+static inline void FLUSH_BARRIER(void)<br>
+{<br>
+       asm volatile("eieio");<br>
+}<br>
+<br>
+/* RX buffers are always cache-line aligned<br>
+ * ASSUMPTIONS:<br>
+ *   - 'addr' is cache aligned<br>
+ *   -  len   is a multiple >0 of cache lines<br>
+ */<br>
+static inline void INVAL_BUF(register uintptr_t addr, register int len)<br>
+{<br>
+typedef char maxbuf[2048]; /* more than an ethernet packet */<br>
+       do {<br>
+               len -= RX_BUF_ALIGNMENT;<br>
+               asm volatile("dcbi %0, %1"::"b"(addr),"r"(len));<br>
+       } while (len > 0);<br>
+       asm volatile("":"=m"(*(maxbuf*)addr));<br>
+}<br>
+<br>
+/* Flushing TX buffers is a little bit trickier; we don't really know their<br>
+ * alignment but *assume* adjacent addresses are covering 'ordinary' memory<br>
+ * so that flushing them does no harm!<br>
+ */<br>
+static inline void FLUSH_BUF(register uintptr_t addr, register int len)<br>
+{<br>
+       asm volatile("":::"memory");<br>
+       len = MV643XX_ALIGN(len, RX_BUF_ALIGNMENT);<br>
+       do { <br>
+               asm volatile("dcbf %0, %1"::"b"(addr),"r"(len));<br>
+               len -= RX_BUF_ALIGNMENT;<br>
+       } while ( len >= 0 );<br>
+}<br>
+<br>
+#else /* hardware snooping enabled */<br>
+<br>
+/* inline this to silence compiler warnings */<br>
+static inline int INVAL_DESC(volatile void *d)<br>
+{ return 0; }<br>
+<br>
+#define FLUSH_DESC(d)  NOOP()<br>
+#define INVAL_BUF(b,l) NOOP()<br>
+#define FLUSH_BUF(b,l) NOOP()<br>
+#define FLUSH_BARRIER()        NOOP()<br>
+<br>
+#endif /* cache coherency support */<br>
+<br>
+/* Synchronize memory access */<br>
+#ifdef __PPC__<br>
+static inline void membarrier(void)<br>
+{<br>
+       asm volatile("sync":::"memory");<br>
+}<br>
+#else<br>
+#error "memory barrier instruction not defined (yet) for this CPU"<br>
+#endif<br>
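+
+/* Illustrative sketch (assumption; the real sequences live in the ring
+ * handling code below): with software cache coherency, data and descriptors
+ * are flushed before they are handed to the chip and invalidated before the
+ * CPU looks at what the chip wrote back, roughly:
+ *
+ *     FLUSH_BUF( (uintptr_t)buf, len );      // make packet data visible to DMA
+ *     d->byte_cnt = len;
+ *     d->cmd_sts  = TDESC_DMA_OWNED | TDESC_FRST | TDESC_LAST;
+ *     FLUSH_DESC( d );                       // push the descriptor out of the cache
+ *     membarrier();                          // order writes before starting DMA
+ *
+ *     INVAL_DESC( d );                       // discard stale lines before reading
+ *                                            // chip-updated status
+ */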
+<br>
+/* Enable and disable interrupts at the device */<br>
+static inline void<br>
+mveth_enable_irqs(struct mveth_private *mp, uint32_t mask)<br>
+{<br>
+rtems_interrupt_level l;<br>
+uint32_t val;<br>
+       rtems_interrupt_disable(l);<br>
+<br>
+       val  = MV_READ(MV643XX_ETH_INTERRUPT_ENBL_R(mp->port_num));<br>
+       val  = (val | mask | MV643XX_ETH_IRQ_EXT_ENA) & mp->irq_mask;<br>
+<br>
+       MV_WRITE(MV643XX_ETH_INTERRUPT_ENBL_R(mp->port_num),        val);<br>
+<br>
+       val  = MV_READ(MV643XX_ETH_INTERRUPT_EXTEND_ENBL_R(mp->port_num));<br>
+       val  = (val | mask) & mp->xirq_mask;<br>
+       MV_WRITE(MV643XX_ETH_INTERRUPT_EXTEND_ENBL_R(mp->port_num), val);<br>
+<br>
+       rtems_interrupt_enable(l);<br>
+}<br>
+<br>
+static inline uint32_t<br>
+mveth_disable_irqs(struct mveth_private *mp, uint32_t mask)<br>
+{<br>
+rtems_interrupt_level l;<br>
+uint32_t val,xval,tmp;<br>
+       rtems_interrupt_disable(l);<br>
+<br>
+       val  = MV_READ(MV643XX_ETH_INTERRUPT_ENBL_R(mp->port_num));<br>
+       tmp  = ( (val & ~mask) | MV643XX_ETH_IRQ_EXT_ENA ) & mp->irq_mask;<br>
+       MV_WRITE(MV643XX_ETH_INTERRUPT_ENBL_R(mp->port_num),        tmp);<br>
+<br>
+       xval = MV_READ(MV643XX_ETH_INTERRUPT_EXTEND_ENBL_R(mp->port_num));<br>
+       tmp  = (xval & ~mask) & mp->xirq_mask;<br>
+       MV_WRITE(MV643XX_ETH_INTERRUPT_EXTEND_ENBL_R(mp->port_num), tmp);<br>
+<br>
+       rtems_interrupt_enable(l);<br>
+<br>
+       return (val | xval);<br>
+}<br>
+<br>
+/* This should be safe even w/o turning off interrupts if multiple<br>
+ * threads ack different bits in the cause register (and ignore<br>
+ * other ones) since writing 'ones' into the cause register doesn't<br>
+ * 'stick'.<br>
+ */<br>
+<br>
+static inline uint32_t<br>
+mveth_ack_irqs(struct mveth_private *mp, uint32_t mask)<br>
+{<br>
+register uint32_t x,xe,p;<br>
+<br>
+               p  = mp->port_num;<br>
+               /* Get cause */<br>
+               x  = MV_READ(MV643XX_ETH_INTERRUPT_CAUSE_R(p));<br>
+<br>
+               /* Ack interrupts filtering the ones we're interested in */<br>
+<br>
+               /* Note: EXT_IRQ bit clears by itself if EXT interrupts are cleared */<br>
+               MV_WRITE(MV643XX_ETH_INTERRUPT_CAUSE_R(p), ~ (x & mp->irq_mask & mask));<br>
+<br>
+                               /* linux driver tests 1<<1 as a summary bit for extended interrupts;<br>
+                                * the mv64360 seems to use 1<<19 for that purpose; for the moment,<br>
+                                * I just check both.<br>
+                                * Update: link status irq (1<<16 in xe) doesn't set (1<<19) in x!<br>
+                                */<br>
+               if ( 1 /* x & 2 */ )<br>
+               {<br>
+                       xe = MV_READ(MV643XX_ETH_INTERRUPT_EXTEND_CAUSE_R(p));<br>
+<br>
+                       MV_WRITE(MV643XX_ETH_INTERRUPT_EXTEND_CAUSE_R(p), ~ (xe & mp->xirq_mask & mask));<br>
+               } else {<br>
+                       xe = 0;<br>
+               }<br>
+#ifdef MVETH_TESTING<br>
+               if (    ((x & MV643XX_ETH_ALL_IRQS) & ~MV643XX_ETH_KNOWN_IRQS)<br>
+                        || ((xe & MV643XX_ETH_ALL_EXT_IRQS) & ~MV643XX_ETH_KNOWN_EXT_IRQS) ) {<br>
+                       fprintf(stderr, "Unknown IRQs detected; leaving all disabled for debugging:\n");<br>
+                       fprintf(stderr, "Cause reg was 0x%08x, ext cause 0x%08x\n", x, xe);<br>
+                       mp->irq_mask  = 0;<br>
+                       mp->xirq_mask = 0;<br>
+               }<br>
+#endif<br>
+               /* luckily, the extended and 'normal' interrupts we use don't overlap so<br>
+                * we can just OR them into a single word<br>
+                */<br>
+               return  (xe & mp->xirq_mask) | (x & mp->irq_mask);<br>
+}<br>
+<br>
+static void mveth_isr(rtems_irq_hdl_param arg)<br>
+{<br>
+unsigned unit = (unsigned)arg;<br>
+       mveth_disable_irqs(&theMvEths[unit].pvt, -1);<br>
+       theMvEths[unit].pvt.stats.irqs++;<br>
+       rtems_bsdnet_event_send( theMvEths[unit].pvt.tid, 1<<unit );<br>
+}<br>
+<br>
+static void mveth_isr_1(rtems_irq_hdl_param arg)<br>
+{<br>
+unsigned              unit = (unsigned)arg;<br>
+struct mveth_private *mp   = &theMvEths[unit].pvt;<br>
+<br>
+       mp->stats.irqs++;<br>
+       mp->isr(mp->isr_arg);<br>
+}<br>
+<br>
+static void<br>
+mveth_clear_mib_counters(struct mveth_private *mp)<br>
+{<br>
+register int           i;<br>
+register uint32_t      b;<br>
+       /* reading the counters resets them */<br>
+       b = MV643XX_ETH_MIB_COUNTERS(mp->port_num);<br>
+       for (i=0; i< MV643XX_ETH_NUM_MIB_COUNTERS; i++, b+=4)<br>
+               (void)MV_READ(b);<br>
+}<br>
+<br>
+/* Reading a MIB register also clears it. Hence we read the 'lo'<br>
+ * register first, then the 'hi' one. The combined value is consistent<br>
+ * because the 'lo' register cannot overflow (and carry into 'hi')<br>
+ * right after it has been read, since it was just reset to 0.<br>
+ */<br>
+static unsigned long long<br>
+read_long_mib_counter(int port_num, int idx)<br>
+{<br>
+unsigned long lo;<br>
+unsigned long long hi;<br>
+       lo = MV_READ(MV643XX_ETH_MIB_COUNTERS(port_num)+(idx<<2));<br>
+       idx++;<br>
+       hi = MV_READ(MV643XX_ETH_MIB_COUNTERS(port_num)+(idx<<2));<br>
+       return (hi<<32) | lo;<br>
+}<br>
+<br>
+static inline unsigned long<br>
+read_mib_counter(int port_num, int idx)<br>
+{<br>
+       return MV_READ(MV643XX_ETH_MIB_COUNTERS(port_num)+(idx<<2));<br>
+}<br>
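+<br>
+/* Illustration only (hypothetical helper, not used by the driver):<br>
+ * a 64-bit MIB counter can be snapshot with read_long_mib_counter();<br>
+ * 'idx' is the word index of the counter's low half (word index 0 is<br>
+ * assumed to hold the 'good octets received' low word on the mv64360).<br>
+ */<br>
+#if 0<br>
+static unsigned long long<br>
+example_read_good_rx_octets(struct mveth_private *mp)<br>
+{<br>
+       /* reads idx 0 (lo) and idx 1 (hi); reading also clears them */<br>
+       return read_long_mib_counter(mp->port_num, 0);<br>
+}<br>
+#endif<br>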
+<br>
+<br>
+/* write ethernet address from buffer to hardware (need to change unicast filter after this) */<br>
+static void<br>
+mveth_write_eaddr(struct mveth_private *mp, unsigned char *eaddr)<br>
+{<br>
+int                    i;<br>
+uint32_t       x;<br>
+<br>
+       /* build hi word */<br>
+       for (i=4,x=0; i; i--, eaddr++) {<br>
+               x = (x<<8) | *eaddr;<br>
+       }<br>
+       MV_WRITE(MV643XX_ETH_MAC_ADDR_HI(mp->port_num), x);<br>
+<br>
+       /* build lo word */<br>
+       for (i=2,x=0; i; i--, eaddr++) {<br>
+               x = (x<<8) | *eaddr;<br>
+       }<br>
+       MV_WRITE(MV643XX_ETH_MAC_ADDR_LO(mp->port_num), x);<br>
+}<br>
+<br>
+/* PHY/MII Interface<br>
+ *<br>
+ * Read/write a PHY register.<br>
+ *<br>
+ * NOTE: The SMI register is shared among the three devices.<br>
+ *       Protection is provided by the global networking semaphore.<br>
+ *       If non-BSD drivers are running on a subset of IFs, proper<br>
+ *       locking of all shared registers must be implemented!<br>
+ */<br>
+STATIC unsigned<br>
+mveth_mii_read(struct mveth_private *mp, unsigned addr)<br>
+{<br>
+unsigned v;<br>
+unsigned wc = 0;<br>
+<br>
+       addr  &= 0x1f;<br>
+<br>
+       /* wait until not busy */<br>
+       do {<br>
+               v = MV_READ(MV643XX_ETH_SMI_R);<br>
+               wc++;<br>
+       } while ( MV643XX_ETH_SMI_BUSY & v );<br>
+<br>
+       MV_WRITE(MV643XX_ETH_SMI_R, (addr <<21 ) | (mp->phy<<16) | MV643XX_ETH_SMI_OP_RD );<br>
+<br>
+       do {<br>
+               v = MV_READ(MV643XX_ETH_SMI_R);<br>
+               wc++;<br>
+       } while ( MV643XX_ETH_SMI_BUSY & v );<br>
+<br>
+       if (wc>0xffff)<br>
+               wc = 0xffff;<br>
+       return (wc<<16) | (v & 0xffff);<br>
+}<br>
+<br>
+STATIC unsigned<br>
+mveth_mii_write(struct mveth_private *mp, unsigned addr, unsigned v)<br>
+{<br>
+unsigned wc = 0;<br>
+<br>
+       addr  &= 0x1f;<br>
+       v     &= 0xffff;<br>
+<br>
+       /* The busy-wait is ugly, but it does not prevent ISRs or<br>
+        * higher-priority tasks from preempting us.<br>
+        */<br>
+<br>
+       /* wait until not busy */<br>
+       while ( MV643XX_ETH_SMI_BUSY & MV_READ(MV643XX_ETH_SMI_R) )<br>
+               wc++ /* wait */;<br>
+<br>
+       MV_WRITE(MV643XX_ETH_SMI_R, (addr <<21 ) | (mp->phy<<16) | MV643XX_ETH_SMI_OP_WR | v );<br>
+<br>
+       return wc;<br>
+}<br>
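+<br>
+/* Illustration only (hypothetical helper): the standard MII status<br>
+ * register BMSR (address 1, link-status bit 2) could be polled via<br>
+ * mveth_mii_read(); the busy-wait count returned in the upper 16 bits<br>
+ * is simply ignored here.<br>
+ */<br>
+#if 0<br>
+static int<br>
+example_phy_link_up(struct mveth_private *mp)<br>
+{<br>
+       return !! ( mveth_mii_read(mp, 1) & (1<<2) );<br>
+}<br>
+#endif<br>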
+<br>
+/* MID-LAYER SUPPORT ROUTINES */<br>
+<br>
+/* Start TX if descriptors are exhausted */<br>
+static __inline__ void<br>
+mveth_start_tx(struct mveth_private *mp)<br>
+{<br>
+uint32_t running;<br>
+       if ( mp->avail <= 0 ) {<br>
+               running = MV_READ(MV643XX_ETH_TRANSMIT_QUEUE_COMMAND_R(mp->port_num));<br>
+               if ( ! (running & MV643XX_ETH_TX_START(0)) ) {<br>
+                       MV_WRITE(MV643XX_ETH_TRANSMIT_QUEUE_COMMAND_R(mp->port_num), MV643XX_ETH_TX_START(0));<br>
+               }<br>
+       }<br>
+}<br>
+<br>
+/* Stop TX and wait for the command queues to stop and the fifo to drain */<br>
+static uint32_t<br>
+mveth_stop_tx(int port)<br>
+{<br>
+uint32_t active_q;<br>
+<br>
+       active_q = (MV_READ(MV643XX_ETH_TRANSMIT_QUEUE_COMMAND_R(port)) & MV643XX_ETH_TX_ANY_RUNNING);<br>
+<br>
+       if ( active_q ) {<br>
+               /* Halt TX and wait for activity to stop */<br>
+               MV_WRITE(MV643XX_ETH_TRANSMIT_QUEUE_COMMAND_R(port), MV643XX_ETH_TX_STOP_ALL);<br>
+               while ( MV643XX_ETH_TX_ANY_RUNNING & MV_READ(MV643XX_ETH_TRANSMIT_QUEUE_COMMAND_R(port)) )<br>
+                       /* poll-wait */;<br>
+               /* Wait for Tx FIFO to drain */<br>
+               while ( ! (MV643XX_ETH_PORT_STATUS_R(port) & MV643XX_ETH_PORT_STATUS_TX_FIFO_EMPTY) )<br>
+                       /* poll-wait */;<br>
+       }<br>
+<br>
+       return active_q;<br>
+}<br>
+<br>
+/* update serial port settings from current link status */<br>
+static void<br>
+mveth_update_serial_port(struct mveth_private *mp, int media)<br>
+{<br>
+int port = mp->port_num;<br>
+uint32_t old, new;<br>
+<br>
+       new = old = MV_READ(MV643XX_ETH_SERIAL_CONTROL_R(port));<br>
+<br>
+       /* mask speed and duplex settings */<br>
+       new &= ~(  MV643XX_ETH_SET_GMII_SPEED_1000<br>
+                        | MV643XX_ETH_SET_MII_SPEED_100<br>
+                        | MV643XX_ETH_SET_FULL_DUPLEX );<br>
+<br>
+       if ( IFM_FDX & media )<br>
+               new |= MV643XX_ETH_SET_FULL_DUPLEX;<br>
+<br>
+       switch ( IFM_SUBTYPE(media) ) {<br>
+               default: /* treat as 10 */<br>
+                       break;<br>
+               case IFM_100_TX:<br>
+                       new |= MV643XX_ETH_SET_MII_SPEED_100;<br>
+                       break;<br>
+               case IFM_1000_T:<br>
+                       new |= MV643XX_ETH_SET_GMII_SPEED_1000;<br>
+                       break;<br>
+       }<br>
+<br>
+       if ( new != old ) {<br>
+               if ( ! (MV643XX_ETH_SERIAL_PORT_ENBL & new) ) {<br>
+                       /* just write */<br>
+                       MV_WRITE(MV643XX_ETH_SERIAL_CONTROL_R(port), new);<br>
+               } else {<br>
+                       uint32_t were_running;<br>
+<br>
+                       were_running = mveth_stop_tx(port);<br>
+<br>
+                       old &= ~MV643XX_ETH_SERIAL_PORT_ENBL;<br>
+                       MV_WRITE(MV643XX_ETH_SERIAL_CONTROL_R(port), old);<br>
+                       MV_WRITE(MV643XX_ETH_SERIAL_CONTROL_R(port), new);<br>
+                       /* linux driver writes twice... */<br>
+                       MV_WRITE(MV643XX_ETH_SERIAL_CONTROL_R(port), new);<br>
+<br>
+                       if ( were_running ) {<br>
+                               MV_WRITE(MV643XX_ETH_TRANSMIT_QUEUE_COMMAND_R(mp->port_num), MV643XX_ETH_TX_START(0));<br>
+                       }<br>
+               }<br>
+       }<br>
+}<br>
+<br>
+/* Clear multicast filters                        */<br>
+void<br>
+BSP_mve_mcast_filter_clear(struct mveth_private *mp)<br>
+{<br>
+int                 i;<br>
+register uint32_t      s,o;<br>
+uint32_t            v = mp->promisc ? 0x01010101 : 0x00000000;<br>
+       s = MV643XX_ETH_DA_FILTER_SPECL_MCAST_TBL(mp->port_num);<br>
+       o = MV643XX_ETH_DA_FILTER_OTHER_MCAST_TBL(mp->port_num);<br>
+       for (i=0; i<MV643XX_ETH_NUM_MCAST_ENTRIES; i++) {<br>
+               MV_WRITE(s,v);<br>
+               MV_WRITE(o,v);<br>
+               s+=4;<br>
+               o+=4;<br>
+       }<br>
+       for (i=0; i<sizeof(mp->mc_refcnt.specl)/sizeof(mp->mc_refcnt.specl[0]); i++) {<br>
+               mp->mc_refcnt.specl[i] = 0;<br>
+               mp->mc_refcnt.other[i] = 0;<br>
+       }<br>
+}<br>
+<br>
+void<br>
+BSP_mve_mcast_filter_accept_all(struct mveth_private *mp)<br>
+{<br>
+int                 i;<br>
+register uint32_t      s,o;<br>
+       s = MV643XX_ETH_DA_FILTER_SPECL_MCAST_TBL(mp->port_num);<br>
+       o = MV643XX_ETH_DA_FILTER_OTHER_MCAST_TBL(mp->port_num);<br>
+       for (i=0; i<MV643XX_ETH_NUM_MCAST_ENTRIES; i++) {<br>
+               MV_WRITE(s,0x01010101);<br>
+               MV_WRITE(o,0x01010101);<br>
+               s+=4;<br>
+               o+=4;<br>
+       }<br>
+       /* Not clear what we should do with the reference counts;<br>
+        * for now just increment them all.<br>
+        */<br>
+       for (i=0; i<sizeof(mp->mc_refcnt.specl)/sizeof(mp->mc_refcnt.specl[0]); i++) {<br>
+               mp->mc_refcnt.specl[i]++;<br>
+               mp->mc_refcnt.other[i]++;<br>
+       }<br>
+}<br>
+<br>
+static void add_entry(uint32_t off, uint8_t hash, Mc_Refcnt *refcnt)<br>
+{<br>
+uint32_t val;<br>
+uint32_t slot = hash & 0xfc;<br>
+<br>
+       if ( 0 == (*refcnt)[hash]++ ) {<br>
+               val = MV_READ(off+slot) | ( 1 << ((hash&3)<<3) );<br>
+               MV_WRITE(off+slot, val);<br>
+       }<br>
+}<br>
+<br>
+static void del_entry(uint32_t off, uint8_t hash, Mc_Refcnt *refcnt)<br>
+{<br>
+uint32_t val;<br>
+uint32_t slot = hash & 0xfc;<br>
+<br>
+       if ( (*refcnt)[hash] > 0 && 0 == --(*refcnt)[hash] ) {<br>
+               val = MV_READ(off+slot) & ~( 1 << ((hash&3)<<3) );<br>
+               MV_WRITE(off+slot, val);<br>
+       }<br>
+}<br>
+<br>
+void<br>
+BSP_mve_mcast_filter_accept_add(struct mveth_private *mp, unsigned char *enaddr)<br>
+{<br>
+uint32_t   hash;<br>
+static const char spec[]={0x01,0x00,0x5e,0x00,0x00};<br>
+static const char bcst[]={0xff,0xff,0xff,0xff,0xff,0xff};<br>
+uint32_t   tabl;<br>
+Mc_Refcnt  *refcnt;<br>
+<br>
+       if ( ! (0x01 & enaddr[0]) ) {<br>
+               /* not a multicast address; ignore */<br>
+               return;<br>
+       }<br>
+<br>
+       if ( 0 == memcmp(enaddr, bcst, sizeof(bcst)) ) {<br>
+               /* broadcast address; ignore */<br>
+               return;<br>
+       }<br>
+<br>
+       if ( 0 == memcmp(enaddr, spec, sizeof(spec)) ) {<br>
+               hash   = enaddr[5];<br>
+               tabl   = MV643XX_ETH_DA_FILTER_SPECL_MCAST_TBL(mp->port_num);<br>
+               refcnt = &mp->mc_refcnt.specl;<br>
+       } else {<br>
+               uint32_t test, mask;<br>
+               int      i;<br>
+               /* algorithm used by linux driver */<br>
+               for ( hash=0, i=0; i<6; i++ ) {<br>
+                       hash = (hash ^ enaddr[i]) << 8;<br>
+                       for ( test=0x8000, mask=0x8380; test>0x0080; test>>=1, mask>>=1 ) {<br>
+                               if ( hash & test )<br>
+                                       hash ^= mask;<br>
+                       }<br>
+               }<br>
+               tabl   = MV643XX_ETH_DA_FILTER_OTHER_MCAST_TBL(mp->port_num);<br>
+               refcnt = &mp->mc_refcnt.other;<br>
+       }<br>
+       add_entry(tabl, hash, refcnt);<br>
+}<br>
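+<br>
+/* Illustration only (hypothetical): the hash loop above, written as a<br>
+ * standalone helper. It reduces the 6-byte address to an 8-bit value<br>
+ * using the same CRC-like scheme as the linux driver; the result is<br>
+ * used as the bit index into the 'other' multicast table.<br>
+ */<br>
+#if 0<br>
+static uint8_t<br>
+example_other_mcast_hash(const unsigned char *enaddr)<br>
+{<br>
+uint32_t hash, test, mask;<br>
+int      i;<br>
+       for ( hash=0, i=0; i<6; i++ ) {<br>
+               hash = (hash ^ enaddr[i]) << 8;<br>
+               for ( test=0x8000, mask=0x8380; test>0x0080; test>>=1, mask>>=1 ) {<br>
+                       if ( hash & test )<br>
+                               hash ^= mask;<br>
+               }<br>
+       }<br>
+       return (uint8_t)hash;<br>
+}<br>
+#endif<br>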
+<br>
+void<br>
+BSP_mve_mcast_filter_accept_del(struct mveth_private *mp, unsigned char *enaddr)<br>
+{<br>
+uint32_t   hash;<br>
+static const char spec[]={0x01,0x00,0x5e,0x00,0x00};<br>
+static const char bcst[]={0xff,0xff,0xff,0xff,0xff,0xff};<br>
+uint32_t   tabl;<br>
+Mc_Refcnt  *refcnt;<br>
+<br>
+       if ( ! (0x01 & enaddr[0]) ) {<br>
+               /* not a multicast address; ignore */<br>
+               return;<br>
+       }<br>
+<br>
+       if ( 0 == memcmp(enaddr, bcst, sizeof(bcst)) ) {<br>
+               /* broadcast address; ignore */<br>
+               return;<br>
+       }<br>
+<br>
+       if ( 0 == memcmp(enaddr, spec, sizeof(spec)) ) {<br>
+               hash   = enaddr[5];<br>
+               tabl   = MV643XX_ETH_DA_FILTER_SPECL_MCAST_TBL(mp->port_num);<br>
+               refcnt = &mp->mc_refcnt.specl;<br>
+       } else {<br>
+               uint32_t test, mask;<br>
+               int      i;<br>
+               /* algorithm used by linux driver */<br>
+               for ( hash=0, i=0; i<6; i++ ) {<br>
+                       hash = (hash ^ enaddr[i]) << 8;<br>
+                       for ( test=0x8000, mask=0x8380; test>0x0080; test>>=1, mask>>=1 ) {<br>
+                               if ( hash & test )<br>
+                                       hash ^= mask;<br>
+                       }<br>
+               }<br>
+               tabl   = MV643XX_ETH_DA_FILTER_OTHER_MCAST_TBL(mp->port_num);<br>
+               refcnt = &mp->mc_refcnt.other;<br>
+       }<br>
+       del_entry(tabl, hash, refcnt);<br>
+}<br>
+<br>
+/* Clear all address filters (multi- and unicast) */<br>
+static void<br>
+mveth_clear_addr_filters(struct mveth_private *mp)<br>
+{<br>
+register int      i;<br>
+register uint32_t u;<br>
+       u = MV643XX_ETH_DA_FILTER_UNICAST_TBL(mp->port_num);<br>
+       for (i=0; i<MV643XX_ETH_NUM_UNICAST_ENTRIES; i++) {<br>
+               MV_WRITE(u,0);<br>
+               u+=4;<br>
+       }<br>
+       BSP_mve_mcast_filter_clear(mp);<br>
+}<br>
+<br>
+/* Setup unicast filter for a given MAC address (least significant nibble) */<br>
+static void<br>
+mveth_ucfilter(struct mveth_private *mp, unsigned char mac_lsbyte, int accept)<br>
+{<br>
+unsigned nib, slot, bit;<br>
+uint32_t       val;<br>
+       /* compute slot in table */<br>
+       nib  = mac_lsbyte & 0xf;        /* strip nibble     */<br>
+       slot = nib & ~3;                        /* (nibble/4)*4     */<br>
+       bit  = (nib &  3)<<3;           /*  8*(nibble % 4)  */<br>
+       val = MV_READ(MV643XX_ETH_DA_FILTER_UNICAST_TBL(mp->port_num) + slot);<br>
+       if ( accept ) {<br>
+               val |= 0x01 << bit;<br>
+       } else {<br>
+               val &= ~(0x01 << bit);<br>
+       }<br>
+       MV_WRITE(MV643XX_ETH_DA_FILTER_UNICAST_TBL(mp->port_num) + slot, val);<br>
+}<br>
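+<br>
+/* Worked example (illustration only): a MAC address ending in 0x2B has<br>
+ * low nibble 0xB, hence slot = 8 (byte offset into the unicast table)<br>
+ * and bit = 24, i.e. the 'accept' flag for that address lives in bit 24<br>
+ * of the word at MV643XX_ETH_DA_FILTER_UNICAST_TBL(port) + 8.<br>
+ */<br>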
+<br>
+#if defined( ENABLE_TX_WORKAROUND_8_BYTE_PROBLEM ) && 0<br>
+/* Currently unused; small unaligned buffers seem to be rare<br>
+ * so we just use memcpy()...<br>
+ */<br>
+<br>
+/* memcpy for 0..7 bytes; arranged so that gcc<br>
+ * optimizes for powerpc...<br>
+ */<br>
+<br>
+static inline void memcpy8(void *to, void *fr, unsigned l)<br>
+{<br>
+register uint8_t *d = to, *s = fr;<br>
+<br>
+       d+=l; s+=l;<br>
+       if ( l & 1 ) {<br>
+               *--d=*--s;<br>
+       }<br>
+       if ( l & 2 ) {<br>
+               /* pre-decrementing causes gcc to use auto-decrementing<br>
+                * PPC instructions (lhzu rx, -2(ry))<br>
+                */<br>
+               d-=2; s-=2;<br>
+               /* use memcpy; don't cast to short -- accessing<br>
+                * misaligned data as short is not portable<br>
+                * (but it works on PPC).<br>
+                */<br>
+               __builtin_memcpy(d,s,2);<br>
+       }<br>
+       if ( l & 4 ) {<br>
+               d-=4; s-=4;<br>
+               /* see above */<br>
+               __builtin_memcpy(d,s,4);<br>
+       }<br>
+}<br>
+#endif<br>
+<br>
+/* Assign values (buffer + user data) to a tx descriptor slot */<br>
+static int<br>
+mveth_assign_desc(MvEthTxDesc d, struct mbuf *m, unsigned long extra)<br>
+{<br>
+int rval = (d->byte_cnt = m->m_len);<br>
+<br>
+#ifdef MVETH_TESTING<br>
+       assert( !d->u_buf   );<br>
+       assert(  m->m_len   );<br>
+#endif<br>
+<br>
+       /* set CRC on all descriptors; seems to be necessary */<br>
+       d->cmd_sts  = extra | (TDESC_GEN_CRC | TDESC_ZERO_PAD);<br>
+<br>
+#ifdef ENABLE_TX_WORKAROUND_8_BYTE_PROBLEM<br>
+       /* The buffer must be 64bit aligned if the payload is <8 (??) */<br>
+       if ( rval < 8 && ((mtod(m, uintptr_t)) & 7) ) {<br>
+               d->buf_ptr = CPUADDR2ENET( d->workaround );<br>
+               memcpy((void*)d->workaround, mtod(m, void*), rval);<br>
+       } else<br>
+#endif<br>
+       {<br>
+               d->buf_ptr  = CPUADDR2ENET( mtod(m, unsigned long) );<br>
+       }<br>
+       d->l4i_chk  = 0;<br>
+       return rval;<br>
+}<br>
+<br>
+static int<br>
+mveth_assign_desc_raw(MvEthTxDesc d, void *buf, int len, unsigned long extra)<br>
+{<br>
+int rval = (d->byte_cnt = len);<br>
+<br>
+#ifdef MVETH_TESTING<br>
+       assert( !d->u_buf );<br>
+       assert(  len   );<br>
+#endif<br>
+<br>
+       /* set CRC on all descriptors; seems to be necessary */<br>
+       d->cmd_sts  = extra | (TDESC_GEN_CRC | TDESC_ZERO_PAD);<br>
+<br>
+#ifdef ENABLE_TX_WORKAROUND_8_BYTE_PROBLEM<br>
+       /* The buffer must be 64bit aligned if the payload is <8 (??) */<br>
+       if ( rval < 8 && ( ((uintptr_t)buf) & 7) ) {<br>
+               d->buf_ptr = CPUADDR2ENET( d->workaround );<br>
+               memcpy((void*)d->workaround, buf, rval);<br>
+       } else<br>
+#endif<br>
+       {<br>
+               d->buf_ptr  = CPUADDR2ENET( (unsigned long)buf );<br>
+       }<br>
+       d->l4i_chk  = 0;<br>
+       return rval;<br>
+}<br>
+<br>
+/*<br>
+ * Ring Initialization<br>
+ *<br>
+ * ENDIAN ASSUMPTION: DMA engine matches CPU endianness (???)<br>
+ *<br>
+ * Linux driver discriminates __LITTLE and __BIG endian for re-arranging<br>
+ * the u16 fields in the descriptor structs. However, no endian conversion<br>
+ * is done on the individual fields (SDMA byte swapping is disabled on LE).<br>
+ */<br>
+<br>
+STATIC int<br>
+mveth_init_rx_desc_ring(struct mveth_private *mp)<br>
+{<br>
+int i,sz;<br>
+MvEthRxDesc    d;<br>
+uintptr_t baddr;<br>
+<br>
+       memset((void*)mp->rx_ring, 0, sizeof(*mp->rx_ring)*mp->rbuf_count);<br>
+<br>
+       mp->rx_desc_dma = CPUADDR2ENET(mp->rx_ring);<br>
+<br>
+       for ( i=0, d = mp->rx_ring; i<mp->rbuf_count; i++, d++ ) {<br>
+               d->u_buf = mp->alloc_rxbuf(&sz, &baddr);<br>
+               assert( d->u_buf );<br>
+<br>
+#ifndef ENABLE_HW_SNOOPING<br>
+               /* could reduce the area to max. ethernet packet size */<br>
+               INVAL_BUF(baddr, sz);<br>
+#endif<br>
+<br>
+               d->buf_size = sz;<br>
+               d->byte_cnt = 0;<br>
+               d->cmd_sts  = RDESC_DMA_OWNED | RDESC_INT_ENA;<br>
+               d->next         = mp->rx_ring + (i+1) % mp->rbuf_count;<br>
+<br>
+               d->buf_ptr  = CPUADDR2ENET( baddr );<br>
+               d->next_desc_ptr = CPUADDR2ENET(d->next);<br>
+               FLUSH_DESC(d);<br>
+       }<br>
+       FLUSH_BARRIER();<br>
+<br>
+       mp->d_rx_t = mp->rx_ring;<br>
+<br>
+       /* point the chip to the start of the ring */<br>
+       MV_WRITE(MV643XX_ETH_RX_Q0_CURRENT_DESC_PTR(mp->port_num),mp->rx_desc_dma);<br>
+<br>
+<br>
+       return i;<br>
+}<br>
+<br>
+STATIC int<br>
+mveth_init_tx_desc_ring(struct mveth_private *mp)<br>
+{<br>
+int i;<br>
+MvEthTxDesc d;<br>
+<br>
+       memset((void*)mp->tx_ring, 0, sizeof(*mp->tx_ring)*mp->xbuf_count);<br>
+<br>
+       /* DMA and CPU live in the same address space (rtems) */<br>
+       mp->tx_desc_dma = CPUADDR2ENET(mp->tx_ring);<br>
+       mp->avail       = TX_AVAILABLE_RING_SIZE(mp);<br>
+<br>
+       for ( i=0, d=mp->tx_ring; i<mp->xbuf_count; i++,d++ ) {<br>
+               d->l4i_chk  = 0;<br>
+               d->byte_cnt = 0;<br>
+               d->cmd_sts  = 0;<br>
+               d->buf_ptr  = 0;<br>
+<br>
+               d->next     = mp->tx_ring + (i+1) % mp->xbuf_count;<br>
+               d->next_desc_ptr = CPUADDR2ENET(d->next);<br>
+               FLUSH_DESC(d);<br>
+       }<br>
+       FLUSH_BARRIER();<br>
+<br>
+       mp->d_tx_h = mp->d_tx_t = mp->tx_ring;<br>
+<br>
+       /* point the chip to the start of the ring */<br>
+       MV_WRITE(MV643XX_ETH_TX_Q0_CURRENT_DESC_PTR(mp->port_num),mp->tx_desc_dma);<br>
+<br>
+       return i;<br>
+}<br>
+<br>
+/* PUBLIC LOW-LEVEL DRIVER ACCESS */<br>
+<br>
+static struct mveth_private *<br>
+mve_setup_internal(<br>
+       int              unit,<br>
+       rtems_id tid,<br>
+       void     (*isr)(void*isr_arg),<br>
+       void     *isr_arg,<br>
+       void (*cleanup_txbuf)(void *user_buf, void *closure, int error_on_tx_occurred), <br>
+       void *cleanup_txbuf_arg,<br>
+       void *(*alloc_rxbuf)(int *p_size, uintptr_t *p_data_addr),<br>
+       void (*consume_rxbuf)(void *user_buf, void *closure, int len),<br>
+       void *consume_rxbuf_arg,<br>
+       int             rx_ring_size,<br>
+       int             tx_ring_size,<br>
+       int             irq_mask<br>
+)<br>
+<br>
+{<br>
+struct mveth_private *mp;<br>
+struct ifnet         *ifp;<br>
+int                  InstallISRSuccessful;<br>
+<br>
+       if ( unit <= 0 || unit > MV643XXETH_NUM_DRIVER_SLOTS ) {<br>
+               printk(DRVNAME": Bad unit number %i; must be 1..%i\n", unit, MV643XXETH_NUM_DRIVER_SLOTS);<br>
+               return 0;<br>
+       }<br>
+       ifp = &theMvEths[unit-1].arpcom.ac_if;<br>
+       if ( ifp->if_init ) {<br>
+               printk(DRVNAME": instance %i already attached.\n", unit);<br>
+               return 0;<br>
+       }<br>
+<br>
+       if ( rx_ring_size < 0 && tx_ring_size < 0 )<br>
+               return 0;<br>
+<br>
+       if ( MV_64360 != BSP_getDiscoveryVersion(0) ) {<br>
+               printk(DRVNAME": not mv64360 chip\n");<br>
+               return 0;<br>
+       }<br>
+<br>
+       /* lazy init of mutex (non thread-safe! - we assume 1st initialization is single-threaded) */<br>
+       if ( ! mveth_mtx ) {<br>
+               rtems_status_code sc;<br>
+               sc = rtems_semaphore_create(<br>
+                               rtems_build_name('m','v','e','X'),<br>
+                               1,<br>
+                               RTEMS_BINARY_SEMAPHORE | RTEMS_PRIORITY | RTEMS_INHERIT_PRIORITY | RTEMS_DEFAULT_ATTRIBUTES,<br>
+                               0,<br>
+                               &mveth_mtx);<br>
+               if ( RTEMS_SUCCESSFUL != sc ) {<br>
+                       rtems_error(sc,DRVNAME": creating mutex\n");<br>
+                       rtems_panic("unable to proceed\n");<br>
+               }<br>
+       }<br>
+<br>
+       mp = &theMvEths[unit-1].pvt;<br>
+<br>
+       memset(mp, 0, sizeof(*mp));<br>
+<br>
+       mp->port_num          = unit-1;<br>
+       mp->phy               = (MV_READ(MV643XX_ETH_PHY_ADDR_R) >> (5*mp->port_num)) & 0x1f;<br>
+<br>
+       mp->tid               = tid;<br>
+       mp->isr               = isr;<br>
+       mp->isr_arg           = isr_arg;<br>
+<br>
+       mp->cleanup_txbuf     = cleanup_txbuf;<br>
+       mp->cleanup_txbuf_arg = cleanup_txbuf_arg;<br>
+       mp->alloc_rxbuf       = alloc_rxbuf;<br>
+       mp->consume_rxbuf     = consume_rxbuf;<br>
+       mp->consume_rxbuf_arg = consume_rxbuf_arg;<br>
+<br>
+       mp->rbuf_count = rx_ring_size ? rx_ring_size : MV643XX_RX_RING_SIZE;<br>
+       mp->xbuf_count = tx_ring_size ? tx_ring_size : MV643XX_TX_RING_SIZE;<br>
+<br>
+       if ( mp->xbuf_count > 0 )<br>
+               mp->xbuf_count += TX_NUM_TAG_SLOTS;<br>
+<br>
+       if ( mp->rbuf_count < 0 )<br>
+               mp->rbuf_count = 0;<br>
+       if ( mp->xbuf_count < 0 )<br>
+               mp->xbuf_count = 0;<br>
+<br>
+       /* allocate ring area; add 1 entry -- room for alignment */<br>
+       assert( !mp->ring_area );<br>
+       mp->ring_area = malloc(<br>
+                                                       sizeof(*mp->ring_area) *<br>
+                                                               (mp->rbuf_count + mp->xbuf_count + 1),<br>
+                                                       M_DEVBUF,<br>
+                                                       M_WAIT );<br>
+       assert( mp->ring_area );<br>
+<br>
+       BSP_mve_stop_hw(mp);<br>
+<br>
+       if ( irq_mask ) {<br>
+               irq_data[mp->port_num].hdl = tid ? mveth_isr : mveth_isr_1;     <br>
+               InstallISRSuccessful = BSP_install_rtems_irq_handler( &irq_data[mp->port_num] );<br>
+               assert( InstallISRSuccessful );<br>
+       }<br>
+<br>
+       /* mark as used */<br>
+       ifp->if_init = (void*)(-1);<br>
+<br>
+       if ( rx_ring_size < 0 )<br>
+               irq_mask &= ~ MV643XX_ETH_IRQ_RX_DONE;<br>
+       if ( tx_ring_size < 0 )<br>
+               irq_mask &= ~ MV643XX_ETH_EXT_IRQ_TX_DONE;<br>
+<br>
+       mp->irq_mask = (irq_mask & MV643XX_ETH_IRQ_RX_DONE);<br>
+       if ( (irq_mask &= (MV643XX_ETH_EXT_IRQ_TX_DONE | MV643XX_ETH_EXT_IRQ_LINK_CHG)) ) {<br>
+               mp->irq_mask |= MV643XX_ETH_IRQ_EXT_ENA;<br>
+               mp->xirq_mask = irq_mask;<br>
+       } else {<br>
+               mp->xirq_mask = 0;<br>
+       }<br>
+<br>
+       return mp;<br>
+}<br>
+<br>
+struct mveth_private *<br>
+BSP_mve_setup(<br>
+       int              unit,<br>
+       rtems_id tid,<br>
+       void (*cleanup_txbuf)(void *user_buf, void *closure, int error_on_tx_occurred), <br>
+       void *cleanup_txbuf_arg,<br>
+       void *(*alloc_rxbuf)(int *p_size, uintptr_t *p_data_addr),<br>
+       void (*consume_rxbuf)(void *user_buf, void *closure, int len),<br>
+       void *consume_rxbuf_arg,<br>
+       int             rx_ring_size,<br>
+       int             tx_ring_size,<br>
+       int             irq_mask<br>
+)<br>
+{<br>
+       if ( irq_mask && 0 == tid ) {<br>
+               printk(DRVNAME": must supply a TID if irq_mask is not zero\n");<br>
+               return 0;       <br>
+       }<br>
+<br>
+       return mve_setup_internal(<br>
+                               unit,<br>
+                               tid,<br>
+                               0, 0,<br>
+                               cleanup_txbuf, cleanup_txbuf_arg,<br>
+                               alloc_rxbuf,<br>
+                               consume_rxbuf, consume_rxbuf_arg,<br>
+                               rx_ring_size, tx_ring_size,<br>
+                               irq_mask);<br>
+}<br>
+<br>
+struct mveth_private *<br>
+BSP_mve_setup_1(<br>
+       int              unit,<br>
+       void     (*isr)(void *isr_arg),<br>
+       void     *isr_arg,<br>
+       void (*cleanup_txbuf)(void *user_buf, void *closure, int error_on_tx_occurred), <br>
+       void *cleanup_txbuf_arg,<br>
+       void *(*alloc_rxbuf)(int *p_size, uintptr_t *p_data_addr),<br>
+       void (*consume_rxbuf)(void *user_buf, void *closure, int len),<br>
+       void *consume_rxbuf_arg,<br>
+       int             rx_ring_size,<br>
+       int             tx_ring_size,<br>
+       int             irq_mask<br>
+)<br>
+{<br>
+       if ( irq_mask && 0 == isr ) {<br>
+               printk(DRVNAME": must supply an ISR if irq_mask is not zero\n");<br>
+               return 0;       <br>
+       }<br>
+<br>
+       return mve_setup_internal(<br>
+                               unit,<br>
+                               0,<br>
+                               isr, isr_arg,<br>
+                               cleanup_txbuf, cleanup_txbuf_arg,<br>
+                               alloc_rxbuf,<br>
+                               consume_rxbuf, consume_rxbuf_arg,<br>
+                               rx_ring_size, tx_ring_size,<br>
+                               irq_mask);<br>
+}<br>
+<br>
+rtems_id<br>
+BSP_mve_get_tid(struct mveth_private *mp)<br>
+{<br>
+    return mp->tid;<br>
+}<br>
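+<br>
+/* Illustration only (hypothetical): a non-BSD client could bring up a<br>
+ * purely polled instance roughly as sketched below. 'my_cleanup_txbuf',<br>
+ * 'my_alloc_rxbuf' and 'my_consume_rxbuf' are placeholder callbacks the<br>
+ * client would have to supply.<br>
+ */<br>
+#if 0<br>
+void example_polled_attach(void)<br>
+{<br>
+struct mveth_private *mp;<br>
+       /* unit 1; no ISR and irq_mask == 0 -> polled operation */<br>
+       mp = BSP_mve_setup_1( 1,<br>
+                             0, 0,<br>
+                             my_cleanup_txbuf, 0,<br>
+                             my_alloc_rxbuf,<br>
+                             my_consume_rxbuf, 0,<br>
+                             0, 0,   /* default ring sizes */<br>
+                             0 );    /* no interrupts      */<br>
+       if ( mp ) {<br>
+               /* not promiscuous; keep the firmware-set MAC address */<br>
+               BSP_mve_init_hw(mp, 0, 0);<br>
+               for ( ;; ) {<br>
+                       BSP_mve_swipe_rx(mp); /* passes buffers to my_consume_rxbuf */<br>
+                       BSP_mve_swipe_tx(mp); /* releases sent buffers              */<br>
+               }<br>
+       }<br>
+}<br>
+#endif<br>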
+<br>
+int<br>
+BSP_mve_detach(struct mveth_private *mp)<br>
+{<br>
+int unit = mp->port_num;<br>
+       BSP_mve_stop_hw(mp);<br>
+       if ( mp->irq_mask || mp->xirq_mask ) {<br>
+               if ( !BSP_remove_rtems_irq_handler( &irq_data[mp->port_num] ) )<br>
+                       return -1;<br>
+       }<br>
+       free( (void*)mp->ring_area, M_DEVBUF );<br>
+       memset(mp, 0, sizeof(*mp));<br>
+       __asm__ __volatile__("":::"memory");<br>
+       /* mark as unused */<br>
+       theMvEths[unit].arpcom.ac_if.if_init = 0;<br>
+       return 0;<br>
+}<br>
+<br>
+/* MAIN RX-TX ROUTINES<br>
+ *<br>
+ * BSP_mve_swipe_tx():  descriptor scavenger; releases mbufs<br>
+ * BSP_mve_send_buf():  xfer mbufs from IF to chip<br>
+ * BSP_mve_swipe_rx():  enqueue received mbufs to the interface;<br>
+ *                      allocate new ones and yield them to the<br>
+ *                      chip.<br>
+ */<br>
+<br>
+/* clean up the TX ring freeing up buffers */<br>
+int<br>
+BSP_mve_swipe_tx(struct mveth_private *mp)<br>
+{<br>
+int                                            rval = 0;<br>
+register MvEthTxDesc   d;<br>
+<br>
+       for ( d = mp->d_tx_t; d->buf_ptr; d = NEXT_TXD(d) ) {<br>
+<br>
+               INVAL_DESC(d);<br>
+<br>
+               if (    (TDESC_DMA_OWNED & d->cmd_sts)<br>
+                        &&     (uint32_t)d == MV_READ(MV643XX_ETH_CURRENT_SERVED_TX_DESC(mp->port_num)) )<br>
+                       break;<br>
+<br>
+               /* d->u_buf is only set on the last descriptor in a chain;<br>
+                * we only count errors in the last descriptor;<br>
+                */<br>
+               if ( d->u_buf ) {<br>
+                       mp->cleanup_txbuf(d->u_buf, mp->cleanup_txbuf_arg, (d->cmd_sts & TDESC_ERROR) ? 1 : 0);<br>
+                       d->u_buf = 0;<br>
+               }<br>
+<br>
+               d->buf_ptr = 0;<br>
+<br>
+               rval++;<br>
+       }<br>
+       mp->d_tx_t = d;<br>
+       mp->avail += rval;<br>
+<br>
+       return rval;<br>
+}<br>
+<br>
+/* allocate a new cluster and copy an existing chain there;<br>
+ * old chain is released...<br>
+ */<br>
+static struct mbuf *<br>
+repackage_chain(struct mbuf *m_head)<br>
+{<br>
+struct mbuf *m;<br>
+       MGETHDR(m, M_DONTWAIT, MT_DATA);<br>
+<br>
+       if ( !m ) {<br>
+               goto bail;<br>
+       }<br>
+<br>
+       MCLGET(m, M_DONTWAIT);<br>
+<br>
+       if ( !(M_EXT & m->m_flags) ) {<br>
+               m_freem(m);<br>
+               m = 0;<br>
+               goto bail;<br>
+       }<br>
+<br>
+       m_copydata(m_head, 0, MCLBYTES, mtod(m, caddr_t));<br>
+       m->m_pkthdr.len = m->m_len = m_head->m_pkthdr.len;<br>
+<br>
+bail:<br>
+       m_freem(m_head);<br>
+       return m;<br>
+}<br>
+<br>
+/* Enqueue a mbuf chain or a raw data buffer for transmission;<br>
+ * RETURN: #bytes sent or -1 if there are not enough free descriptors.<br>
+ *<br>
+ * If 'len' is <=0 then 'm_head' is assumed to point to a mbuf chain.<br>
+ * OTOH, a raw data packet may be sent (non-BSD driver) by passing the<br>
+ * start of the data in 'data_p' together with a 'len' > 0; 'm_head' is<br>
+ * then treated as an opaque handle which is handed back to the<br>
+ * cleanup_txbuf() callback when transmission is complete.<br>
+ *<br>
+ * Comments: software cache-flushing incurs a penalty if the<br>
+ *           packet cannot be queued since it is flushed anyway.<br>
+ *           The algorithm is slightly more efficient in the normal<br>
+ *           case, though.<br>
+ */<br>
+int<br>
+BSP_mve_send_buf(struct mveth_private *mp, void *m_head, void *data_p, int len)<br>
+{<br>
+int                                            rval;<br>
+register MvEthTxDesc   l,d,h;<br>
+register struct mbuf   *m1;<br>
+int                                            nmbs;<br>
+int                                            ismbuf = (len <= 0);<br>
+<br>
+/* Only way to get here is when we discover that the mbuf chain<br>
+ * is too long for the tx ring<br>
+ */<br>
+startover:<br>
+<br>
+       rval = 0;<br>
+<br>
+#ifdef MVETH_TESTING <br>
+       assert(m_head);<br>
+#endif<br>
+<br>
+       /* if no descriptor is available; try to wipe the queue */<br>
+       if ( (mp->avail < 1) && MVETH_CLEAN_ON_SEND(mp)<=0 ) {<br>
+               /* Maybe TX is stalled and needs to be restarted */<br>
+               mveth_start_tx(mp);<br>
+               return -1;<br>
+       }<br>
+<br>
+       h = mp->d_tx_h;<br>
+<br>
+#ifdef MVETH_TESTING <br>
+       assert( !h->buf_ptr );<br>
+       assert( !h->u_buf   );<br>
+#endif<br>
+<br>
+       if ( ! (m1 = m_head) )<br>
+               return 0;<br>
+<br>
+       if ( ismbuf ) {<br>
+               /* find first mbuf with actual data */<br>
+               while ( 0 == m1->m_len ) {<br>
+                       if ( ! (m1 = m1->m_next) ) {<br>
+                               /* end reached and still no data to send ?? */<br>
+                               m_freem(m_head);<br>
+                               return 0;<br>
+                       }<br>
+               }<br>
+       }<br>
+<br>
+       /* Don't use the first descriptor yet because BSP_mve_swipe_tx()<br>
+        * needs mp->d_tx_h->buf_ptr == NULL as a marker. Hence, we<br>
+        * start with the second mbuf and fill the first descriptor<br>
+        * last.<br>
+        */<br>
+<br>
+       l = h;<br>
+       d = NEXT_TXD(h);<br>
+<br>
+       mp->avail--;<br>
+<br>
+       nmbs = 1;<br>
+       if ( ismbuf ) {<br>
+                       register struct mbuf *m;<br>
+                       for ( m=m1->m_next; m; m=m->m_next ) {<br>
+                                       if ( 0 == m->m_len )<br>
+                                                       continue;       /* skip empty mbufs */<br>
+<br>
+                                       nmbs++;<br>
+<br>
+                                       if ( mp->avail < 1 && MVETH_CLEAN_ON_SEND(mp)<=0 ) {<br>
+                                                       /* Maybe TX was stalled - try to restart */<br>
+                                                       mveth_start_tx(mp);<br>
+<br>
+                                                       /* not enough descriptors; cleanup...<br>
+                                                        * the first slot was never used, so we start<br>
+                                                        * at mp->d_tx_h->next;<br>
+                                                        */<br>
+                                                       for ( l = NEXT_TXD(h); l!=d; l=NEXT_TXD(l) ) {<br>
+#ifdef MVETH_TESTING<br>
+                                                                       assert( l->u_buf == 0 );<br>
+#endif<br>
+                                                                       l->buf_ptr  = 0;<br>
+                                                                       l->cmd_sts  = 0;<br>
+                                                                       mp->avail++;<br>
+                                                       }<br>
+                                                       mp->avail++;<br>
+                                                       if ( nmbs > TX_AVAILABLE_RING_SIZE(mp) ) {<br>
+                                                                       /* this chain will never fit into the ring */<br>
+                                                                       if ( nmbs > mp->stats.maxchain )<br>
+                                                                                       mp->stats.maxchain = nmbs;<br>
+                                                                       mp->stats.repack++;<br>
+                                                                       if ( ! (m_head = repackage_chain(m_head)) ) {<br>
+                                                                                       /* no cluster available */<br>
+                                                                                       mp->stats.odrops++;<br>
+                                                                                       return 0;<br>
+                                                                       }<br>
+                                                                       goto startover;<br>
+                                                       }<br>
+                                                       return -1;<br>
+                                       }<br>
+<br>
+                                       mp->avail--;<br>
+<br>
+#ifdef MVETH_TESTING<br>
+                                       assert( d != h      );<br>
+                                       assert( !d->buf_ptr );<br>
+#endif<br>
+<br>
+                                       /* fill this slot */<br>
+                                       rval += mveth_assign_desc(d, m, TDESC_DMA_OWNED);<br>
+<br>
+                                       FLUSH_BUF(mtod(m, uint32_t), m->m_len);<br>
+<br>
+                                       l = d;<br>
+                                       d = NEXT_TXD(d);<br>
+<br>
+                                       FLUSH_DESC(l);<br>
+                       }<br>
+<br>
+               /* fill first slot - don't release to DMA yet */<br>
+               rval += mveth_assign_desc(h, m1, TDESC_FRST);<br>
+<br>
+<br>
+               FLUSH_BUF(mtod(m1, uint32_t), m1->m_len);<br>
+<br>
+       } else {<br>
+               /* fill first slot with raw buffer - don't release to DMA yet */<br>
+               rval += mveth_assign_desc_raw(h, data_p, len, TDESC_FRST);<br>
+<br>
+               FLUSH_BUF( (uint32_t)data_p, len);<br>
+       }<br>
+<br>
+       /* tag last slot; this covers the case where 1st==last */<br>
+       l->cmd_sts      |= TDESC_LAST | TDESC_INT_ENA;<br>
+       /* mbuf goes into last desc */<br>
+       l->u_buf         = m_head;<br>
+<br>
+<br>
+       FLUSH_DESC(l);<br>
+<br>
+       /* Tag end; make sure chip doesn't try to read ahead of here! */<br>
+       l->next->cmd_sts = 0;<br>
+       FLUSH_DESC(l->next);<br>
+<br>
+#ifdef MVETH_DEBUG_TX_DUMP<br>
+       if ( (mveth_tx_dump & (1<<mp->port_num)) ) {<br>
+               int ll,kk;<br>
+               if ( ismbuf ) {<br>
+                       struct mbuf *m;<br>
+                       for ( kk=0, m=m_head; m; m=m->m_next) {<br>
+                               for ( ll=0; ll<m->m_len; ll++ ) {<br>
+                                       printf("%02X ",*(mtod(m,char*) + ll));<br>
+                                       if ( ((++kk)&0xf) == 0 )<br>
+                                               printf("\n");<br>
+                               }<br>
+                       }<br>
+               } else {<br>
+                       for ( ll=0; ll<len; ) {<br>
+                               printf("%02X ",*((char*)data_p + ll));<br>
+                               if ( ((++ll)&0xf) == 0 )<br>
+                                       printf("\n");<br>
+                       }       <br>
+               }<br>
+               printf("\n");<br>
+       }<br>
+#endif<br>
+<br>
+       membarrier();<br>
+<br>
+       /* turn over the whole chain by flipping ownership of the first desc */<br>
+       h->cmd_sts |= TDESC_DMA_OWNED;<br>
+<br>
+       FLUSH_DESC(h);<br>
+<br>
+       membarrier();<br>
+<br>
+       /* notify the device */<br>
+       MV_WRITE(MV643XX_ETH_TRANSMIT_QUEUE_COMMAND_R(mp->port_num), MV643XX_ETH_TX_START(0));<br>
+<br>
+       /* Update softc */<br>
+       mp->stats.packet++;<br>
+       if ( nmbs > mp->stats.maxchain )<br>
+               mp->stats.maxchain = nmbs;<br>
+<br>
+       /* remember new head */<br>
+       mp->d_tx_h = d;<br>
+<br>
+       return rval; /* #bytes sent */<br>
+}<br>
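+<br>
+/* Illustration only: with the semantics above, a non-BSD client could<br>
+ * transmit a raw frame by passing an opaque handle (later handed to the<br>
+ * cleanup_txbuf() callback) plus the data pointer and a positive length,<br>
+ * e.g.<br>
+ *<br>
+ *     if ( BSP_mve_send_buf(mp, my_handle, my_frame, my_frame_len) < 0 ) {<br>
+ *             -- no descriptor available; retry after BSP_mve_swipe_tx() --<br>
+ *     }<br>
+ *<br>
+ * ('my_handle', 'my_frame' and 'my_frame_len' are placeholders.)<br>
+ */<br>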
+<br>
+int<br>
+BSP_mve_send_buf_raw(<br>
+       struct mveth_private *mp,<br>
+       void                 *head_p,<br>
+       int                   h_len,<br>
+       void                 *data_p,<br>
+    int                   d_len)<br>
+{<br>
+int                                            rval;<br>
+register MvEthTxDesc   l,d,h;<br>
+int                                            needed;<br>
+void                    *frst_buf;<br>
+int                     frst_len;<br>
+<br>
+       rval = 0;<br>
+<br>
+#ifdef MVETH_TESTING <br>
+       assert(head_p || data_p);<br>
+#endif<br>
+<br>
+       needed = head_p && data_p ? 2 : 1;<br>
+<br>
+       /* if no descriptor is available; try to wipe the queue */<br>
+       if (   ( mp->avail < needed )<br>
+        && ( MVETH_CLEAN_ON_SEND(mp) <= 0 || mp->avail < needed ) ) {<br>
+               /* Maybe TX was stalled and needs a restart */<br>
+               mveth_start_tx(mp);<br>
+               return -1;<br>
+       }<br>
+<br>
+       h = mp->d_tx_h;<br>
+<br>
+#ifdef MVETH_TESTING <br>
+       assert( !h->buf_ptr );<br>
+       assert( !h->u_buf   );<br>
+#endif<br>
+<br>
+       /* find the 'first' user buffer */<br>
+       if ( (frst_buf = head_p) ) {<br>
+               frst_len = h_len;<br>
+       } else {<br>
+               frst_buf = data_p;<br>
+               frst_len = d_len;<br>
+       }<br>
+<br>
+       /* Don't use the first descriptor yet because BSP_mve_swipe_tx()<br>
+        * needs mp->d_tx_h->buf_ptr == NULL as a marker. Hence, we<br>
+        * start with the second (optional) slot and fill the first<br>
+        * descriptor last.<br>
+        */<br>
+<br>
+       l = h;<br>
+       d = NEXT_TXD(h);<br>
+<br>
+       mp->avail--;<br>
+<br>
+       if ( needed > 1 ) {<br>
+               mp->avail--;<br>
+#ifdef MVETH_TESTING<br>
+               assert( d != h      );<br>
+               assert( !d->buf_ptr );<br>
+#endif<br>
+               rval += mveth_assign_desc_raw(d, data_p, d_len, TDESC_DMA_OWNED);<br>
+               FLUSH_BUF( (uint32_t)data_p, d_len );<br>
+               d->u_buf = data_p;<br>
+<br>
+               l = d;<br>
+               d = NEXT_TXD(d);<br>
+<br>
+               FLUSH_DESC(l);<br>
+       }<br>
+<br>
+       /* fill first slot with raw buffer - don't release to DMA yet */<br>
+       rval       += mveth_assign_desc_raw(h, frst_buf, frst_len, TDESC_FRST);<br>
+<br>
+       FLUSH_BUF( (uint32_t)frst_buf, frst_len);<br>
+<br>
+       /* tag last slot; this covers the case where 1st==last */<br>
+       l->cmd_sts |= TDESC_LAST | TDESC_INT_ENA;<br>
+<br>
+       /* first buffer of 'chain' goes into last desc */<br>
+       l->u_buf    = frst_buf;<br>
+<br>
+       FLUSH_DESC(l);<br>
+<br>
+       /* Tag end; make sure chip doesn't try to read ahead of here! */<br>
+       l->next->cmd_sts = 0;<br>
+       FLUSH_DESC(l->next);<br>
+<br>
+       membarrier();<br>
+<br>
+       /* turn over the whole chain by flipping ownership of the first desc */<br>
+       h->cmd_sts |= TDESC_DMA_OWNED;<br>
+<br>
+       FLUSH_DESC(h);<br>
+<br>
+       membarrier();<br>
+<br>
+       /* notify the device */<br>
+       MV_WRITE(MV643XX_ETH_TRANSMIT_QUEUE_COMMAND_R(mp->port_num), MV643XX_ETH_TX_START(0));<br>
+<br>
+       /* Update softc */<br>
+       mp->stats.packet++;<br>
+       if ( needed > mp->stats.maxchain )<br>
+               mp->stats.maxchain = needed;<br>
+<br>
+       /* remember new head */<br>
+       mp->d_tx_h = d;<br>
+<br>
+       return rval; /* #bytes sent */<br>
+}<br>
+<br>
+/* send received buffers upwards and replace them<br>
+ * with freshly allocated ones;<br>
+ * ASSUMPTION: buffer length NEVER changes and is set<br>
+ *                             when the ring is initialized.<br>
+ * TS 20060727: not sure if this assumption is still necessary - I believe it isn't.<br>
+ */<br>
+<br>
+int<br>
+BSP_mve_swipe_rx(struct mveth_private *mp)<br>
+{<br>
+int                                            rval = 0, err;<br>
+register MvEthRxDesc   d;<br>
+void                                   *newbuf;<br>
+int                                            sz;<br>
+uintptr_t                              baddr;<br>
+<br>
+       for ( d = mp->d_rx_t; ! (INVAL_DESC(d), (RDESC_DMA_OWNED & d->cmd_sts)); d=NEXT_RXD(d) ) {<br>
+<br>
+#ifdef MVETH_TESTING <br>
+               assert(d->u_buf);<br>
+#endif<br>
+<br>
+               err = (RDESC_ERROR & d->cmd_sts);<br>
+<br>
+               if ( err || !(newbuf = mp->alloc_rxbuf(&sz, &baddr)) ) {<br>
+                       /* drop packet and recycle buffer */<br>
+                       newbuf = d->u_buf;<br>
+                       mp->consume_rxbuf(0, mp->consume_rxbuf_arg, err ? -1 : 0);<br>
+               } else {<br>
+#ifdef MVETH_TESTING<br>
+                       assert( d->byte_cnt > 0 );<br>
+#endif<br>
+                       mp->consume_rxbuf(d->u_buf, mp->consume_rxbuf_arg, d->byte_cnt);<br>
+<br>
+#ifndef ENABLE_HW_SNOOPING<br>
+                       /* could reduce the area to max. ethernet packet size */<br>
+                       INVAL_BUF(baddr, sz);<br>
+#endif<br>
+                       d->u_buf    = newbuf;<br>
+                       d->buf_ptr  = CPUADDR2ENET(baddr);<br>
+                       d->buf_size = sz;<br>
+                       FLUSH_DESC(d);<br>
+               }<br>
+<br>
+               membarrier();<br>
+<br>
+               d->cmd_sts = RDESC_DMA_OWNED | RDESC_INT_ENA;<br>
+<br>
+               FLUSH_DESC(d);<br>
+<br>
+               rval++;<br>
+       }<br>
+       MV_WRITE(MV643XX_ETH_RECEIVE_QUEUE_COMMAND_R(mp->port_num), MV643XX_ETH_RX_START(0));<br>
+       mp->d_rx_t = d;<br>
+       return rval;<br>
+}<br>
+<br>
+/* Stop hardware and clean out the rings */<br>
+void<br>
+BSP_mve_stop_hw(struct mveth_private *mp)<br>
+{<br>
+MvEthTxDesc    d;<br>
+MvEthRxDesc    r;<br>
+int                    i;<br>
+<br>
+       mveth_disable_irqs(mp, -1);<br>
+<br>
+       mveth_stop_tx(mp->port_num);<br>
+<br>
+       /* cleanup TX rings */<br>
+       if (mp->d_tx_t) { /* maybe ring isn't initialized yet */<br>
+               for ( i=0, d=mp->tx_ring; i<mp->xbuf_count; i++, d++ ) {<br>
+                       /* should be safe to clear ownership */<br>
+                       d->cmd_sts &= ~TDESC_DMA_OWNED;<br>
+                       FLUSH_DESC(d);<br>
+               }<br>
+               FLUSH_BARRIER();<br>
+<br>
+               BSP_mve_swipe_tx(mp);<br>
+<br>
+#ifdef MVETH_TESTING <br>
+               assert( mp->d_tx_h == mp->d_tx_t );<br>
+               for ( i=0, d=mp->tx_ring; i<mp->xbuf_count; i++, d++ ) {<br>
+                       assert( !d->buf_ptr );<br>
+               }<br>
+#endif<br>
+       }<br>
+<br>
+       MV_WRITE(MV643XX_ETH_RECEIVE_QUEUE_COMMAND_R(mp->port_num), MV643XX_ETH_RX_STOP_ALL);<br>
+       while ( MV643XX_ETH_RX_ANY_RUNNING & MV_READ(MV643XX_ETH_RECEIVE_QUEUE_COMMAND_R(mp->port_num)) )<br>
+               /* poll-wait */;<br>
+<br>
+       /* stop serial port */<br>
+       MV_WRITE(MV643XX_ETH_SERIAL_CONTROL_R(mp->port_num),<br>
+               MV_READ(MV643XX_ETH_SERIAL_CONTROL_R(mp->port_num))<br>
+               & ~( MV643XX_ETH_SERIAL_PORT_ENBL | MV643XX_ETH_FORCE_LINK_FAIL_DISABLE | MV643XX_ETH_FORCE_LINK_PASS)<br>
+               );<br>
+<br>
+       /* clear pending interrupts */<br>
+       MV_WRITE(MV643XX_ETH_INTERRUPT_CAUSE_R(mp->port_num), 0);<br>
+       MV_WRITE(MV643XX_ETH_INTERRUPT_EXTEND_CAUSE_R(mp->port_num), 0);<br>
+<br>
+       /* cleanup RX rings */<br>
+       if ( mp->rx_ring ) {<br>
+               for ( i=0, r=mp->rx_ring; i<mp->rbuf_count; i++, r++ ) {<br>
+                       /* should be OK to clear ownership flag */<br>
+                       r->cmd_sts = 0;<br>
+                       FLUSH_DESC(r);<br>
+                       mp->consume_rxbuf(r->u_buf, mp->consume_rxbuf_arg, 0);<br>
+                       r->u_buf = 0;<br>
+               }<br>
+               FLUSH_BARRIER();<br>
+       }<br>
+<br>
+<br>
+}<br>
+<br>
+uint32_t mveth_serial_ctrl_config_val = MVETH_SERIAL_CTRL_CONFIG_VAL;<br>
+<br>
+/* Fire up the low-level driver<br>
+ *<br>
+ * - make sure hardware is halted<br>
+ * - enable cache snooping<br>
+ * - clear address filters<br>
+ * - clear mib counters<br>
+ * - reset phy<br>
+ * - initialize (or reinitialize) descriptor rings<br>
+ * - check that the firmware has set up a reasonable mac address.<br>
+ * - generate unicast filter entry for our mac address<br>
+ * - write register config values to the chip<br>
+ * - start hardware (serial port and SDMA)<br>
+ */<br>
+<br>
+void<br>
+BSP_mve_init_hw(struct mveth_private *mp, int promisc, unsigned char *enaddr)<br>
+{<br>
+int                                    i;<br>
+uint32_t                       v;<br>
+static int                     inited = 0;<br>
+<br>
+#ifdef MVETH_DEBUG<br>
+       printk(DRVNAME"%i: Entering BSP_mve_init_hw()\n", mp->port_num+1);<br>
+#endif<br>
+<br>
+       /* since enable/disable IRQ routine only operate on select bitsets<br>
+        * we must make sure everything is masked initially.<br>
+        */<br>
+       MV_WRITE(MV643XX_ETH_INTERRUPT_ENBL_R(mp->port_num),        0);<br>
+       MV_WRITE(MV643XX_ETH_INTERRUPT_EXTEND_ENBL_R(mp->port_num), 0);<br>
+<br>
+       BSP_mve_stop_hw(mp);<br>
+<br>
+       memset(&mp->stats, 0, sizeof(mp->stats));<br>
+<br>
+       mp->promisc = promisc;<br>
+<br>
+       /* MotLoad has cache snooping disabled on the ENET2MEM windows.<br>
+        * Some comments in the linux driver indicate that there are errata<br>
+        * which cause problems; that would be a real bummer.<br>
+        * We try it anyway...<br>
+        */<br>
+       if ( !inited ) {<br>
+       unsigned long disbl, bar;<br>
+               inited = 1;     /* FIXME: non-thread safe lazy init */<br>
+               disbl = MV_READ(MV643XX_ETH_BAR_ENBL_R);<br>
+                       /* disable all 6 windows */<br>
+                       MV_WRITE(MV643XX_ETH_BAR_ENBL_R, MV643XX_ETH_BAR_DISBL_ALL);<br>
+                       /* set WB snooping on enabled bars */<br>
+                       for ( i=0; i<MV643XX_ETH_NUM_BARS*8; i+=8 ) {<br>
+                               if ( (bar = MV_READ(MV643XX_ETH_BAR_0 + i)) && MV_READ(MV643XX_ETH_SIZE_R_0 + i) ) {<br>
+#ifdef ENABLE_HW_SNOOPING<br>
+                                       MV_WRITE(MV643XX_ETH_BAR_0 + i, bar | MV64360_ENET2MEM_SNOOP_WB);<br>
+#else<br>
+                                       MV_WRITE(MV643XX_ETH_BAR_0 + i, bar & ~MV64360_ENET2MEM_SNOOP_MSK);<br>
+#endif<br>
+                                       /* read back to flush fifo [linux comment] */<br>
+                                       (void)MV_READ(MV643XX_ETH_BAR_0 + i);<br>
+                               }<br>
+                       }<br>
+                       /* restore/re-enable */<br>
+               MV_WRITE(MV643XX_ETH_BAR_ENBL_R, disbl);<br>
+       }<br>
+<br>
+       mveth_clear_mib_counters(mp);<br>
+       mveth_clear_addr_filters(mp);<br>
+<br>
+/*     Just leave it alone...<br>
+       reset_phy();<br>
+*/<br>
+<br>
+       if ( mp->rbuf_count > 0 ) {<br>
+               mp->rx_ring = (MvEthRxDesc)MV643XX_ALIGN(mp->ring_area, RING_ALIGNMENT);<br>
+               mveth_init_rx_desc_ring(mp);<br>
+       }<br>
+<br>
+       if ( mp->xbuf_count > 0 ) {<br>
+               mp->tx_ring = (MvEthTxDesc)mp->rx_ring + mp->rbuf_count;<br>
+               mveth_init_tx_desc_ring(mp);<br>
+       }<br>
+<br>
+       if ( enaddr ) {<br>
+               /* set ethernet address from arpcom struct */<br>
+#ifdef MVETH_DEBUG<br>
+               printk(DRVNAME"%i: Writing MAC addr ", mp->port_num+1);<br>
+               for (i=5; i>=0; i--) {<br>
+                       printk("%02X%c", enaddr[i], i?':':'\n');<br>
+               }<br>
+#endif<br>
+               mveth_write_eaddr(mp, enaddr);<br>
+       }<br>
+<br>
+       /* set mac address and unicast filter */<br>
+<br>
+       {<br>
+       uint32_t machi, maclo;<br>
+               maclo = MV_READ(MV643XX_ETH_MAC_ADDR_LO(mp->port_num));<br>
+               machi = MV_READ(MV643XX_ETH_MAC_ADDR_HI(mp->port_num));<br>
+               /* ASSUME: firmware has set the mac address for us<br>
+                *         - if assertion fails, we have to do more work...<br>
+                */<br>
+               assert( maclo && machi && maclo != 0xffffffff && machi != 0xffffffff );<br>
+               mveth_ucfilter(mp, maclo&0xff, 1/* accept */);<br>
+       }<br>
+       <br>
+       /* port, serial and sdma configuration */<br>
+       v = MVETH_PORT_CONFIG_VAL;<br>
+       if ( promisc ) {<br>
+               /* multicast filters were already set up to<br>
+                * accept everything (mveth_clear_addr_filters())<br>
+                */<br>
+               v |= MV643XX_ETH_UNICAST_PROMISC_MODE;<br>
+       } else {<br>
+               v &= ~MV643XX_ETH_UNICAST_PROMISC_MODE;<br>
+       }<br>
+       MV_WRITE(MV643XX_ETH_PORT_CONFIG_R(mp->port_num),<br>
+                               v);<br>
+       MV_WRITE(MV643XX_ETH_PORT_CONFIG_XTEND_R(mp->port_num),<br>
+                               MVETH_PORT_XTEND_CONFIG_VAL);<br>
+<br>
+       v  = MV_READ(MV643XX_ETH_SERIAL_CONTROL_R(mp->port_num));<br>
+       v &= ~(MVETH_SERIAL_CTRL_CONFIG_MSK);<br>
+       v |= mveth_serial_ctrl_config_val;<br>
+       MV_WRITE(MV643XX_ETH_SERIAL_CONTROL_R(mp->port_num), v);<br>
+<br>
+       i = IFM_MAKEWORD(0, 0, 0, 0);<br>
+       if ( 0 == BSP_mve_media_ioctl(mp, SIOCGIFMEDIA, &i) ) {<br>
+           if ( (IFM_LINK_OK & i) ) {<br>
+                       mveth_update_serial_port(mp, i);<br>
+               }<br>
+       }<br>
+<br>
+       /* enable serial port */<br>
+       v  = MV_READ(MV643XX_ETH_SERIAL_CONTROL_R(mp->port_num));<br>
+       MV_WRITE(MV643XX_ETH_SERIAL_CONTROL_R(mp->port_num),<br>
+                               v | MV643XX_ETH_SERIAL_PORT_ENBL);<br>
+<br>
+#ifndef __BIG_ENDIAN__<br>
+#error "byte swapping needs to be disabled for little endian machines"<br>
+#endif<br>
+       MV_WRITE(MV643XX_ETH_SDMA_CONFIG_R(mp->port_num), MVETH_SDMA_CONFIG_VAL);<br>
+<br>
+       /* allow short frames */<br>
+       MV_WRITE(MV643XX_ETH_RX_MIN_FRAME_SIZE_R(mp->port_num), MVETH_MIN_FRAMSZ_CONFIG_VAL);<br>
+<br>
+       MV_WRITE(MV643XX_ETH_INTERRUPT_CAUSE_R(mp->port_num), 0);<br>
+       MV_WRITE(MV643XX_ETH_INTERRUPT_EXTEND_CAUSE_R(mp->port_num), 0);<br>
+       /* TODO: set irq coalescing */<br>
+<br>
+       /* enable Rx */<br>
+       if ( mp->rbuf_count > 0 ) {<br>
+               MV_WRITE(MV643XX_ETH_RECEIVE_QUEUE_COMMAND_R(mp->port_num), MV643XX_ETH_RX_START(0));<br>
+       }<br>
+<br>
+       mveth_enable_irqs(mp, -1);<br>
+<br>
+#ifdef MVETH_DEBUG<br>
+       printk(DRVNAME"%i: Leaving BSP_mve_init_hw()\n", mp->port_num+1);<br>
+#endif<br>
+}<br>
+<br>
+/* read ethernet address from hw to buffer */<br>
+void<br>
+BSP_mve_read_eaddr(struct mveth_private *mp, unsigned char *oeaddr)<br>
+{<br>
+int                            i;<br>
+uint32_t               x;<br>
+unsigned char  buf[6], *eaddr;<br>
+<br>
+       eaddr = oeaddr ? oeaddr : buf;<br>
+<br>
+       eaddr += 5;<br>
+       x = MV_READ(MV643XX_ETH_MAC_ADDR_LO(mp->port_num));<br>
+<br>
+       /* lo word */<br>
+       for (i=2; i; i--, eaddr--) {<br>
+               *eaddr = (unsigned char)(x & 0xff);<br>
+               x>>=8;<br>
+       }<br>
+<br>
+       x = MV_READ(MV643XX_ETH_MAC_ADDR_HI(mp->port_num));<br>
+       /* hi word */<br>
+       for (i=4; i; i--, eaddr--) {<br>
+               *eaddr = (unsigned char)(x & 0xff);<br>
+               x>>=8;<br>
+       }<br>
+<br>
+       if ( !oeaddr ) {<br>
+               printf("%02X",buf[0]);<br>
+               for (i=1; i<sizeof(buf); i++)<br>
+                       printf(":%02X",buf[i]);<br>
+               printf("\n");<br>
+       }<br>
+}<br>
+<br>
+int<br>
+BSP_mve_media_ioctl(struct mveth_private *mp, int cmd, int *parg)<br>
+{<br>
+int rval;<br>
+       /* alias cmd == 0,1 */<br>
+       switch ( cmd ) {<br>
+               case 0: cmd = SIOCGIFMEDIA;<br>
+                       break;<br>
+               case 1: cmd = SIOCSIFMEDIA;<br>
+                       /* fall through */<br>
+               case SIOCGIFMEDIA:<br>
+               case SIOCSIFMEDIA:<br>
+                       break;<br>
+               default: return -1;<br>
+       }<br>
+       REGLOCK();<br>
+       rval = rtems_mii_ioctl(&mveth_mdio, mp, cmd, parg);<br>
+       REGUNLOCK();<br>
+       return rval;<br>
+}<br>
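+<br>
+/* Illustration only (hypothetical): the 0/1 aliases above map to<br>
+ * SIOCGIFMEDIA/SIOCSIFMEDIA, so a simple link check could look like this:<br>
+ */<br>
+#if 0<br>
+void example_print_link_state(struct mveth_private *mp)<br>
+{<br>
+int media = IFM_MAKEWORD(0, 0, 0, 0);<br>
+       if ( 0 == BSP_mve_media_ioctl(mp, 0 /* get */, &media) ) {<br>
+               printf("link is %s\n", (IFM_LINK_OK & media) ? "up" : "down");<br>
+       }<br>
+}<br>
+#endif<br>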
+<br>
+void<br>
+BSP_mve_enable_irqs(struct mveth_private *mp)<br>
+{<br>
+       mveth_enable_irqs(mp, -1);<br>
+}<br>
+<br>
+void<br>
+BSP_mve_disable_irqs(struct mveth_private *mp)<br>
+{<br>
+       mveth_disable_irqs(mp, -1);<br>
+}<br>
+<br>
+uint32_t<br>
+BSP_mve_ack_irqs(struct mveth_private *mp)<br>
+{<br>
+       return mveth_ack_irqs(mp, -1);<br>
+}<br>
+<br>
+<br>
+void<br>
+BSP_mve_enable_irq_mask(struct mveth_private *mp, uint32_t mask)<br>
+{<br>
+       mveth_enable_irqs(mp, mask);<br>
+}<br>
+<br>
+uint32_t<br>
+BSP_mve_disable_irq_mask(struct mveth_private *mp, uint32_t mask)<br>
+{<br>
+       return mveth_disable_irqs(mp, mask);<br>
+}<br>
+<br>
+uint32_t<br>
+BSP_mve_ack_irq_mask(struct mveth_private *mp, uint32_t mask)<br>
+{<br>
+       return mveth_ack_irqs(mp, mask);<br>
+}<br>
+<br>
+int<br>
+BSP_mve_ack_link_chg(struct mveth_private *mp, int *pmedia)<br>
+{<br>
+int media = IFM_MAKEWORD(0,0,0,0);<br>
+<br>
+       if ( 0 == BSP_mve_media_ioctl(mp, SIOCGIFMEDIA, &media)) {<br>
+               if ( IFM_LINK_OK & media ) {<br>
+                       mveth_update_serial_port(mp, media);<br>
+                       /* If TX stalled because there was no buffer then whack it */<br>
+                       mveth_start_tx(mp);<br>
+               }<br>
+               if ( pmedia )<br>
+                       *pmedia = media;<br>
+               return 0;<br>
+       }<br>
+       return -1;<br>
+}<br>
+<br>
+/* BSDNET SUPPORT/GLUE ROUTINES */<br>
+<br>
+static void<br>
+mveth_set_filters(struct ifnet *ifp);<br>
+<br>
+STATIC void<br>
+mveth_stop(struct mveth_softc *sc)<br>
+{<br>
+       BSP_mve_stop_hw(&sc->pvt);<br>
+       sc->arpcom.ac_if.if_timer = 0;<br>
+}<br>
+<br>
+/* allocate a mbuf for RX with a properly aligned data buffer<br>
+ * RETURNS 0 if allocation fails<br>
+ */<br>
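+/*<br>
+ * The cluster's data pointer is rounded up to RX_BUF_ALIGNMENT and the<br>
+ * usable length is reduced by the same amount, so the chip always sees an<br>
+ * aligned receive buffer; the aligned address/size pair is returned via<br>
+ * 'paddr'/'psz'.<br>
+ */<br>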
+static void *<br>
+alloc_mbuf_rx(int *psz, uintptr_t *paddr)<br>
+{<br>
+struct mbuf            *m;<br>
+unsigned long  l,o;<br>
+<br>
+       MGETHDR(m, M_DONTWAIT, MT_DATA);<br>
+       if ( !m )<br>
+               return 0;<br>
+       MCLGET(m, M_DONTWAIT);<br>
+       if ( ! (m->m_flags & M_EXT) ) {<br>
+               m_freem(m);<br>
+               return 0;<br>
+       }<br>
+<br>
+       o = mtod(m, unsigned long);<br>
+       l = MV643XX_ALIGN(o, RX_BUF_ALIGNMENT) - o;<br>
+<br>
+       /* align start of buffer */<br>
+       m->m_data += l;<br>
+<br>
+       /* reduced length */<br>
+       l = MCLBYTES - l;<br>
+<br>
+       m->m_len   = m->m_pkthdr.len = l;<br>
+       *psz       = m->m_len;<br>
+       *paddr     = mtod(m, uintptr_t); <br>
+<br>
+       return (void*) m;<br>
+}<br>
+<br>
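+/* RX callback: a non-positive 'len' indicates a dropped or errored frame;<br>
+ * otherwise the ethernet header, the ETH_RX_OFFSET padding and the trailing<br>
+ * CRC are stripped before the mbuf is handed to ether_input().<br>
+ */<br>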
+static void consume_rx_mbuf(void *buf, void *arg, int len)<br>
+{<br>
+struct ifnet *ifp = arg;<br>
+struct mbuf    *m = buf;<br>
+<br>
+       if ( len <= 0 ) {<br>
+               ifp->if_iqdrops++;<br>
+               if ( len < 0 ) {<br>
+                       ifp->if_ierrors++;<br>
+               }<br>
+               if ( m )<br>
+                       m_freem(m);<br>
+       } else {<br>
+               struct ether_header *eh;<br>
+<br>
+                       eh                      = (struct ether_header *)(mtod(m, unsigned long) + ETH_RX_OFFSET);<br>
+                       m->m_len        = m->m_pkthdr.len = len - sizeof(struct ether_header) - ETH_RX_OFFSET - ETH_CRC_LEN;<br>
+                       m->m_data  += sizeof(struct ether_header) + ETH_RX_OFFSET;<br>
+                       m->m_pkthdr.rcvif = ifp;<br>
+<br>
+                       ifp->if_ipackets++;<br>
+                       ifp->if_ibytes  += m->m_pkthdr.len;<br>
+                       <br>
+                       if (0) {<br>
+                               /* Low-level debugging */<br>
+                               int i;<br>
+                               for (i=0; i<13; i++) {<br>
+                                       printf("%02X:",((unsigned char*)eh)[i]);<br>
+                               }<br>
+                               printf("%02X\n",((unsigned char*)eh)[i]);<br>
+                               for (i=0; i<m->m_len; i++) {<br>
+                                       if ( !(i&15) )<br>
+                                               printf("\n");<br>
+                                       printf("0x%02x ",mtod(m,unsigned char*)[i]);<br>
+                               }<br>
+                               printf("\n");<br>
+                       }<br>
+<br>
+                       if (0) {<br>
+                               /* Low-level debugging/testing without bsd stack */<br>
+                               m_freem(m);<br>
+                       } else {<br>
+                               /* send buffer upwards */<br>
+                               ether_input(ifp, eh, m);<br>
+                       }<br>
+       }<br>
+}<br>
+<br>
+static void release_tx_mbuf(void *buf, void *arg, int err)<br>
+{<br>
+struct ifnet *ifp = arg;<br>
+struct mbuf  *mb  = buf;<br>
+<br>
+       if ( err ) {<br>
+               ifp->if_oerrors++;<br>
+       } else {<br>
+               ifp->if_opackets++;<br>
+       }<br>
+       ifp->if_obytes += mb->m_pkthdr.len;<br>
+       m_freem(mb);<br>
+}<br>
+<br>
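+/* Accumulate the hardware MIB counters into the software stats and print<br>
+ * them; the GOOD_OCTS_RCVD/SENT counters are 64-bit LO/HI register pairs,<br>
+ * hence read_long_mib_counter() and the extra idx++ that skips the HI slot.<br>
+ */<br>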
+static void<br>
+dump_update_stats(struct mveth_private *mp, FILE *f)<br>
+{<br>
+int      p = mp->port_num;<br>
+int      idx;<br>
+uint32_t v;<br>
+<br>
+       if ( !f )<br>
+               f = stdout;<br>
+<br>
+       fprintf(f, DRVNAME"%i Statistics:\n",        mp->port_num + 1);<br>
+       fprintf(f, "  # IRQS:                 %i\n", mp->stats.irqs);<br>
+       fprintf(f, "  Max. mbuf chain length: %i\n", mp->stats.maxchain);<br>
+       fprintf(f, "  # repacketed:           %i\n", mp->stats.repack);<br>
+       fprintf(f, "  # packets:              %i\n", mp->stats.packet);<br>
+       fprintf(f, "MIB Counters:\n");<br>
+       for ( idx = MV643XX_ETH_MIB_GOOD_OCTS_RCVD_LO>>2;<br>
+                       idx < MV643XX_ETH_NUM_MIB_COUNTERS;<br>
+                       idx++ ) {<br>
+               switch ( idx ) {<br>
+                       case MV643XX_ETH_MIB_GOOD_OCTS_RCVD_LO>>2:<br>
+                               mp->stats.mib.good_octs_rcvd += read_long_mib_counter(p, idx);<br>
+                               fprintf(f, mibfmt[idx], mp->stats.mib.good_octs_rcvd);<br>
+                               idx++;<br>
+                               break;<br>
+<br>
+                       case MV643XX_ETH_MIB_GOOD_OCTS_SENT_LO>>2:<br>
+                               mp->stats.mib.good_octs_sent += read_long_mib_counter(p, idx);<br>
+                               fprintf(f, mibfmt[idx], mp->stats.mib.good_octs_sent);<br>
+                               idx++;<br>
+                               break;<br>
+<br>
+                       default:<br>
+                               v = ((uint32_t*)&mp->stats.mib)[idx] += read_mib_counter(p, idx);<br>
+                               fprintf(f, mibfmt[idx], v);<br>
+                               break;<br>
+               }<br>
+       }<br>
+       fprintf(f, "\n");<br>
+}<br>
+<br>
+void<br>
+BSP_mve_dump_stats(struct mveth_private *mp, FILE *f)<br>
+{<br>
+       dump_update_stats(mp, f);<br>
+}<br>
+<br>
+/* BSDNET DRIVER CALLBACKS */<br>
+<br>
+static void<br>
+mveth_init(void *arg)<br>
+{<br>
+struct mveth_softc     *sc  = arg;<br>
+struct ifnet           *ifp = &sc->arpcom.ac_if;<br>
+int                 media;<br>
+<br>
+       BSP_mve_init_hw(&sc->pvt, ifp->if_flags & IFF_PROMISC, sc->arpcom.ac_enaddr);<br>
+<br>
+       media = IFM_MAKEWORD(0, 0, 0, 0);<br>
+       if ( 0 == BSP_mve_media_ioctl(&sc->pvt, SIOCGIFMEDIA, &media) ) {<br>
+           if ( (IFM_LINK_OK & media) ) {<br>
+                       ifp->if_flags &= ~IFF_OACTIVE;<br>
+               } else {<br>
+                       ifp->if_flags |= IFF_OACTIVE;<br>
+               }<br>
+       }<br>
+<br>
+       /* if promiscuous then there is no need to change */<br>
+       if ( ! (ifp->if_flags & IFF_PROMISC) )<br>
+               mveth_set_filters(ifp);<br>
+<br>
+       ifp->if_flags |= IFF_RUNNING;<br>
+       sc->arpcom.ac_if.if_timer = 0;<br>
+}<br>
+<br>
+/* bsdnet driver entry to start transmission */<br>
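+/* If the low-level send fails, the packet is prepended back onto the send<br>
+ * queue and IFF_OACTIVE is set; the daemon clears the flag (and restarts<br>
+ * transmission) once TX descriptors have been reclaimed, see mveth_daemon().<br>
+ */<br>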
+static void<br>
+mveth_start(struct ifnet *ifp)<br>
+{<br>
+struct mveth_softc     *sc = ifp->if_softc;<br>
+struct mbuf                    *m  = 0;<br>
+<br>
+       while ( ifp->if_snd.ifq_head ) {<br>
+               IF_DEQUEUE( &ifp->if_snd, m );<br>
+               if ( BSP_mve_send_buf(&sc->pvt, m, 0, 0) < 0 ) {<br>
+                       IF_PREPEND( &ifp->if_snd, m);<br>
+                       ifp->if_flags |= IFF_OACTIVE;<br>
+                       break;<br>
+               }<br>
+               /* strictly, this needs to be done only once per burst<br>
+                * of packets, but re-arming the watchdog for every packet<br>
+                * is cheaper than checking whether it is already armed.<br>
+                */<br>
+               ifp->if_timer = 2*IFNET_SLOWHZ;<br>
+       }<br>
+}<br>
+<br>
+/* bsdnet driver watchdog entry */<br>
+static void<br>
+mveth_watchdog(struct ifnet *ifp)<br>
+{<br>
+struct mveth_softc     *sc = ifp->if_softc;<br>
+<br>
+       ifp->if_oerrors++;<br>
+       printk(DRVNAME"%i: watchdog timeout; resetting\n", ifp->if_unit);<br>
+<br>
+       mveth_init(sc);<br>
+       mveth_start(ifp);<br>
+}<br>
+<br>
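+/* Program unicast promiscuous mode and the multicast filter from the<br>
+ * interface flags; only perfect (single-address) multicast matches are<br>
+ * supported, so address ranges (addrlo != addrhi) trigger the assertion<br>
+ * below; the stack is expected to set IFF_ALLMULTI for ranges, which is<br>
+ * handled by the accept-all branch above the loop.<br>
+ */<br>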
+static void<br>
+mveth_set_filters(struct ifnet *ifp)<br>
+{<br>
+struct mveth_softc  *sc = ifp->if_softc;<br>
+uint32_t              v;<br>
+<br>
+       v = MV_READ(MV643XX_ETH_PORT_CONFIG_R(sc->pvt.port_num));<br>
+       if ( ifp->if_flags & IFF_PROMISC )<br>
+               v |= MV643XX_ETH_UNICAST_PROMISC_MODE;<br>
+       else<br>
+               v &= ~MV643XX_ETH_UNICAST_PROMISC_MODE;<br>
+       MV_WRITE(MV643XX_ETH_PORT_CONFIG_R(sc->pvt.port_num), v);<br>
+<br>
+       if ( ifp->if_flags & (IFF_PROMISC | IFF_ALLMULTI) ) {<br>
+               BSP_mve_mcast_filter_accept_all(&sc->pvt);<br>
+       } else {<br>
+               struct ether_multi     *enm;<br>
+               struct ether_multistep step;<br>
+<br>
+               BSP_mve_mcast_filter_clear( &sc->pvt );<br>
+               <br>
+               ETHER_FIRST_MULTI(step, (struct arpcom *)ifp, enm);<br>
+<br>
+               while ( enm ) {<br>
+                       if ( memcmp(enm->enm_addrlo, enm->enm_addrhi, ETHER_ADDR_LEN) )<br>
+                               assert( !"Should never get here; IFF_ALLMULTI should be set!" );<br>
+<br>
+                       BSP_mve_mcast_filter_accept_add(&sc->pvt, enm->enm_addrlo);<br>
+<br>
+                       ETHER_NEXT_MULTI(step, enm);<br>
+               }<br>
+       }<br>
+}<br>
+<br>
+/* bsdnet driver ioctl entry */<br>
+static int<br>
+mveth_ioctl(struct ifnet *ifp, ioctl_command_t cmd, caddr_t data)<br>
+{<br>
+struct mveth_softc     *sc   = ifp->if_softc;<br>
+struct ifreq           *ifr  = (struct ifreq *)data;<br>
+int                                    error = 0;<br>
+int                                    f;<br>
+<br>
+       switch ( cmd ) {<br>
+               case SIOCSIFFLAGS:<br>
+                       f = ifp->if_flags;<br>
+                       if ( f & IFF_UP ) {<br>
+                               if ( ! ( f & IFF_RUNNING ) ) {<br>
+                                       mveth_init(sc);<br>
+                               } else {<br>
+                                       if ( (f & IFF_PROMISC) != (sc->bsd.oif_flags & IFF_PROMISC) ) {<br>
+                                               /* Note: in all other scenarios the 'promisc' flag<br>
+                                                * in the low-level driver [which affects the way<br>
+                                                * the multicast filter is set up: accept none vs.<br>
+                                                * accept all in promiscuous mode] is eventually<br>
+                                                * set when the IF is brought up...<br>
+                                                */<br>
+                                               sc->pvt.promisc = (f & IFF_PROMISC);<br>
+<br>
+                                               mveth_set_filters(ifp);<br>
+                                       }<br>
+                                       /* FIXME: other flag changes are ignored/unimplemented */<br>
+                               }<br>
+                       } else {<br>
+                               if ( f & IFF_RUNNING ) {<br>
+                                       mveth_stop(sc);<br>
+                                       ifp->if_flags  &= ~(IFF_RUNNING | IFF_OACTIVE);<br>
+                               }<br>
+                       }<br>
+                       sc->bsd.oif_flags = ifp->if_flags;<br>
+               break;<br>
+<br>
+               case SIOCGIFMEDIA:<br>
+               case SIOCSIFMEDIA:<br>
+                       error = BSP_mve_media_ioctl(&sc->pvt, cmd, &ifr->ifr_media);<br>
+               break;<br>
+<br>
+               case SIOCADDMULTI:<br>
+               case SIOCDELMULTI:<br>
+                       error = (cmd == SIOCADDMULTI)<br>
+                               ? ether_addmulti(ifr, &sc->arpcom)<br>
+                                   : ether_delmulti(ifr, &sc->arpcom);<br>
+<br>
+                       if (error == ENETRESET) {<br>
+                               if (ifp->if_flags & IFF_RUNNING) {<br>
+                                       mveth_set_filters(ifp);<br>
+                               }<br>
+                               error = 0;<br>
+                       }<br>
+               break;<br>
+<br>
+               case SIO_RTEMS_SHOW_STATS:<br>
+                       dump_update_stats(&sc->pvt, stdout);<br>
+               break;<br>
+<br>
+               default:<br>
+                       error = ether_ioctl(ifp, cmd, data);<br>
+               break;<br>
+       }<br>
+<br>
+       return error;<br>
+}<br>
+<br>
+/* DRIVER TASK */<br>
+<br>
+/* Daemon task does all the 'interrupt' work */<br>
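+/* Event bit i of the receive mask (7 = up to three units) selects<br>
+ * theMvEths[i]; the task handle registered via BSP_mve_setup() is presumably<br>
+ * the one the low-level interrupt code sends these events to.<br>
+ */<br>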
+static void mveth_daemon(void *arg)<br>
+{<br>
+struct mveth_softc     *sc;<br>
+struct ifnet           *ifp;<br>
+rtems_event_set                evs;<br>
+       for (;;) {<br>
+               rtems_bsdnet_event_receive( 7, RTEMS_WAIT | RTEMS_EVENT_ANY, RTEMS_NO_TIMEOUT, &evs );<br>
+               evs &= 7;<br>
+               for ( sc = theMvEths; evs; evs>>=1, sc++ ) {<br>
+                       if ( (evs & 1) ) {<br>
+                               register uint32_t x;<br>
+<br>
+                               ifp = &sc->arpcom.ac_if;<br>
+<br>
+                               if ( !(ifp->if_flags & IFF_UP) ) {<br>
+                                       mveth_stop(sc);<br>
+                                       ifp->if_flags &= ~(IFF_UP|IFF_RUNNING);<br>
+                                       continue;<br>
+                               }<br>
+<br>
+                               if ( !(ifp->if_flags & IFF_RUNNING) ) {<br>
+                                       /* event could have been pending at the time hw was stopped;<br>
+                                        * just ignore...<br>
+                                        */<br>
+                                       continue;<br>
+                               }<br>
+<br>
+                               x = mveth_ack_irqs(&sc->pvt, -1);<br>
+<br>
+                               if ( MV643XX_ETH_EXT_IRQ_LINK_CHG & x ) {<br>
+                                       /* phy status changed */<br>
+                                       int media;<br>
+<br>
+                                       if ( 0 == BSP_mve_ack_link_chg(&sc->pvt, &media) ) {<br>
+                                               if ( IFM_LINK_OK & media ) {<br>
+                                                       ifp->if_flags &= ~IFF_OACTIVE;<br>
+                                                       mveth_start(ifp);<br>
+                                               } else {<br>
+                                                       /* stop sending */<br>
+                                                       ifp->if_flags |= IFF_OACTIVE;<br>
+                                               }<br>
+                                       }<br>
+                               }<br>
+                               /* free tx chain */<br>
+                               if ( (MV643XX_ETH_EXT_IRQ_TX_DONE & x) && BSP_mve_swipe_tx(&sc->pvt) ) {<br>
+                                       ifp->if_flags &= ~IFF_OACTIVE;<br>
+                                       if ( TX_AVAILABLE_RING_SIZE(&sc->pvt) == sc->pvt.avail )<br>
+                                               ifp->if_timer = 0;<br>
+                                       mveth_start(ifp);<br>
+                               }<br>
+                               if ( (MV643XX_ETH_IRQ_RX_DONE & x) )<br>
+                                       BSP_mve_swipe_rx(&sc->pvt);<br>
+<br>
+                               mveth_enable_irqs(&sc->pvt, -1);<br>
+                       }<br>
+               }<br>
+       }<br>
+}<br>
+<br>
+#ifdef  MVETH_DETACH_HACK<br>
+static int mveth_detach(struct mveth_softc *sc);<br>
+#endif<br>
+<br>
+<br>
+/* PUBLIC RTEMS BSDNET ATTACH FUNCTION */<br>
+int<br>
+rtems_mve_attach(struct rtems_bsdnet_ifconfig *ifcfg, int attaching)<br>
+{<br>
+char                           *unitName;<br>
+int                                    unit,i,cfgUnits;<br>
+struct mveth_softc *sc;<br>
+struct ifnet           *ifp;<br>
+<br>
+       unit = rtems_bsdnet_parse_driver_name(ifcfg, &unitName);<br>
+       if ( unit <= 0 || unit > MV643XXETH_NUM_DRIVER_SLOTS ) {<br>
+               printk(DRVNAME": Bad unit number %i; must be 1..%i\n", unit, MV643XXETH_NUM_DRIVER_SLOTS);<br>
+               return 1;<br>
+       }<br>
+<br>
+       sc  = &theMvEths[unit-1];<br>
+       ifp = &sc->arpcom.ac_if;<br>
+       sc->pvt.port_num = unit-1;<br>
+       sc->pvt.phy      = (MV_READ(MV643XX_ETH_PHY_ADDR_R) >> (5*sc->pvt.port_num)) & 0x1f;<br>
+<br>
+       if ( attaching ) {<br>
+               if ( ifp->if_init ) {<br>
+                       printk(DRVNAME": instance %i already attached.\n", unit);<br>
+                       return -1;<br>
+               }<br>
+<br>
+               for ( i=cfgUnits = 0; i<MV643XXETH_NUM_DRIVER_SLOTS; i++ ) {<br>
+                       if ( theMvEths[i].arpcom.ac_if.if_init )<br>
+                               cfgUnits++;<br>
+               }<br>
+               cfgUnits++; /* this new one */<br>
+<br>
+               /* lazy init of TID should still be thread-safe because we are protected<br>
+                * by the global networking semaphore..<br>
+                */<br>
+               if ( !mveth_tid ) {<br>
+                       /* newproc uses the 1st 4 chars of name string to build an rtems name */<br>
+                       mveth_tid = rtems_bsdnet_newproc("MVEd", 4096, mveth_daemon, 0);<br>
+               }<br>
+<br>
+               if ( !BSP_mve_setup( unit,<br>
+                                                    mveth_tid,<br>
+                                                    release_tx_mbuf, ifp,<br>
+                                                    alloc_mbuf_rx,<br>
+                                                    consume_rx_mbuf, ifp,<br>
+                                                    ifcfg->rbuf_count,<br>
+                                                    ifcfg->xbuf_count,<br>
+                                        BSP_MVE_IRQ_TX | BSP_MVE_IRQ_RX | BSP_MVE_IRQ_LINK) ) {<br>
+                       return -1;<br>
+               }<br>
+<br>
+               if ( nmbclusters < sc->pvt.rbuf_count * cfgUnits + 60 /* arbitrary */ )  {<br>
+                       printk(DRVNAME"%i: (mv643xx ethernet) Your application does not have enough mbuf clusters\n", unit);<br>
+                       printk(     "                         configured for this driver.\n");<br>
+                       return -1;<br>
+               }<br>
+<br>
+               if ( ifcfg->hardware_address ) {<br>
+                       memcpy(sc->arpcom.ac_enaddr, ifcfg->hardware_address, ETHER_ADDR_LEN);<br>
+               } else {<br>
+                       /* read back from hardware assuming that MotLoad already had set it up */<br>
+                       BSP_mve_read_eaddr(&sc->pvt, sc->arpcom.ac_enaddr);<br>
+               }<br>
+<br>
+               ifp->if_softc                   = sc;<br>
+               ifp->if_unit                    = unit;<br>
+               ifp->if_name                    = unitName;<br>
+<br>
+               ifp->if_mtu                             = ifcfg->mtu ? ifcfg->mtu : ETHERMTU;<br>
+<br>
+               ifp->if_init                    = mveth_init;<br>
+               ifp->if_ioctl                   = mveth_ioctl;<br>
+               ifp->if_start                   = mveth_start;<br>
+               ifp->if_output                  = ether_output;<br>
+               /*<br>
+                * While nonzero, 'ifp->if_timer' is decremented (by the<br>
+                * networking code) at a rate of IFNET_SLOWHZ (1 Hz) and 'if_watchdog'<br>
+                * is called when it expires.<br>
+                * If either of those fields is 0 the feature is disabled.<br>
+                */<br>
+               ifp->if_watchdog                = mveth_watchdog;<br>
+               ifp->if_timer                   = 0;<br>
+<br>
+               sc->bsd.oif_flags               = /* ... */<br>
+               ifp->if_flags                   = IFF_BROADCAST | IFF_MULTICAST | IFF_SIMPLEX;<br>
+<br>
+               /*<br>
+                * if unset, this is set to 10Mbps by ether_ifattach; it seems to be unused by the bsdnet stack;<br>
+                * it could be updated along with the phy speed, though...<br>
+               ifp->if_baudrate                = 10000000;<br>
+               */<br>
+<br>
+               /* NOTE: ether_output drops packets if ifq_len >= ifq_maxlen<br>
+                *       but this is the packet count, not the fragment count!<br>
+               ifp->if_snd.ifq_maxlen  = sc->pvt.xbuf_count;<br>
+               */<br>
+               ifp->if_snd.ifq_maxlen  = ifqmaxlen;<br>
+<br>
+#ifdef  MVETH_DETACH_HACK<br>
+               if ( !ifp->if_addrlist ) /* do only the first time [reattach hack] */<br>
+#endif<br>
+               {<br>
+                       if_attach(ifp);<br>
+                       ether_ifattach(ifp);<br>
+               }<br>
+<br>
+       } else {<br>
+#ifdef  MVETH_DETACH_HACK<br>
+               if ( !ifp->if_init ) {<br>
+                       printk(DRVNAME": instance %i not attached.\n", unit);<br>
+                       return -1;<br>
+               }<br>
+               return mveth_detach(sc);<br>
+#else<br>
+               printk(DRVNAME": interface detaching not implemented\n");<br>
+               return -1;<br>
+#endif<br>
+       }<br>
+<br>
+       return 0;<br>
+}<br>
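+<br>
+/*<br>
+ * Usage sketch (illustrative only): the attach function is normally entered<br>
+ * into the application's rtems_bsdnet_ifconfig table, e.g.<br>
+ *<br>
+ *     static struct rtems_bsdnet_ifconfig mve_cfg = {<br>
+ *             name:   DRVNAME"1",<br>
+ *             attach: rtems_mve_attach,<br>
+ *     };<br>
+ *<br>
+ * (see also mveth_dbg_config at the end of this file); unit numbers are<br>
+ * 1-based, matching the range check at the top of rtems_mve_attach().<br>
+ */<br>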
+<br>
+/* EARLY PHY ACCESS */<br>
+static int<br>
+mveth_early_init(int idx)<br>
+{<br>
+       if ( idx < 0 || idx >= MV643XXETH_NUM_DRIVER_SLOTS )<br>
+               return -1;<br>
+<br>
+       /* determine the phy */<br>
+       theMvEths[idx].pvt.phy = (MV_READ(MV643XX_ETH_PHY_ADDR_R) >> (5*idx)) & 0x1f;<br>
+       return 0;<br>
+}<br>
+<br>
+static int<br>
+mveth_early_read_phy(int idx, unsigned reg)<br>
+{<br>
+int rval;<br>
+<br>
+       if ( idx < 0 || idx >= MV643XXETH_NUM_DRIVER_SLOTS )<br>
+               return -1;<br>
+<br>
+       rval = mveth_mii_read(&theMvEths[idx].pvt, reg);<br>
+       return rval < 0 ? rval : rval & 0xffff;<br>
+}<br>
+<br>
+static int<br>
+mveth_early_write_phy(int idx, unsigned reg, unsigned val)<br>
+{<br>
+       if ( idx < 0 || idx >= MV643XXETH_NUM_DRIVER_SLOTS )<br>
+               return -1;<br>
+<br>
+       mveth_mii_write(&theMvEths[idx].pvt, reg, val);<br>
+       return 0;<br>
+}<br>
+<br>
+rtems_bsdnet_early_link_check_ops<br>
+rtems_mve_early_link_check_ops = {<br>
+       init:           mveth_early_init,<br>
+       read_phy:       mveth_early_read_phy,<br>
+       write_phy:      mveth_early_write_phy,<br>
+       name:           DRVNAME,<br>
+       num_slots:      MAX_NUM_SLOTS<br>
+};<br>
+<br>
+/* DEBUGGING */<br>
+<br>
+#ifdef MVETH_DEBUG<br>
+/* Display/dump descriptor rings */<br>
+<br>
+int<br>
+mveth_dring(struct mveth_softc *sc)<br>
+{<br>
+int i;<br>
+if (1) {<br>
+MvEthRxDesc pr;<br>
+printf("RX:\n");<br>
+<br>
+       for (i=0, pr=sc->pvt.rx_ring; i<sc->pvt.rbuf_count; i++, pr++) {<br>
+#ifndef ENABLE_HW_SNOOPING<br>
+               /* can't just invalidate the descriptor - if it contains<br>
+                * data that hasn't been flushed yet, we create an inconsistency...<br>
+                */<br>
+               rtems_bsdnet_semaphore_obtain();<br>
+               INVAL_DESC(pr);<br>
+#endif<br>
+               printf("cnt: 0x%04x, size: 0x%04x, stat: 0x%08x, next: 0x%08x, buf: 0x%08x\n",<br>
+                       pr->byte_cnt, pr->buf_size, pr->cmd_sts, (uint32_t)pr->next_desc_ptr, pr->buf_ptr);<br>
+<br>
+#ifndef ENABLE_HW_SNOOPING<br>
+               rtems_bsdnet_semaphore_release();<br>
+#endif<br>
+       }<br>
+}<br>
+if (1) {<br>
+MvEthTxDesc pt;<br>
+printf("TX:\n");<br>
+       for (i=0, pt=sc->pvt.tx_ring; i<sc->pvt.xbuf_count; i++, pt++) {<br>
+#ifndef ENABLE_HW_SNOOPING<br>
+               rtems_bsdnet_semaphore_obtain();<br>
+               INVAL_DESC(pt);<br>
+#endif<br>
+               printf("cnt: 0x%04x, stat: 0x%08x, next: 0x%08x, buf: 0x%08x, mb: 0x%08x\n",<br>
+                       pt->byte_cnt, pt->cmd_sts, (uint32_t)pt->next_desc_ptr, pt->buf_ptr,<br>
+                       (uint32_t)pt->mb);<br>
+<br>
+#ifndef ENABLE_HW_SNOOPING<br>
+               rtems_bsdnet_semaphore_release();<br>
+#endif<br>
+       }<br>
+}<br>
+       return 0;<br>
+}<br>
+<br>
+#endif<br>
+<br>
+/* DETACH HACK DETAILS */<br>
+<br>
+#ifdef  MVETH_DETACH_HACK<br>
+int<br>
+_cexpModuleFinalize(void *mh)<br>
+{<br>
+int i;<br>
+       for ( i=0; i<MV643XXETH_NUM_DRIVER_SLOTS; i++ ) {<br>
+               if ( theMvEths[i].arpcom.ac_if.if_init ) {<br>
+                       printf("Interface %i still attached; refuse to unload\n", i+1);<br>
+                       return -1;<br>
+               }<br>
+       }<br>
+       /* delete task; since there are no attached interfaces, it should block<br>
+        * for events and hence not hold the semaphore or other resources...<br>
+        */<br>
+       rtems_task_delete(mveth_tid);<br>
+       return 0;<br>
+}<br>
+<br>
+/* Ugly hack to allow unloading/reloading the driver core.<br>
+ * It is needed because the RTEMS bsdnet release doesn't implement<br>
+ * if_detach(). Therefore, we bring the interface down but<br>
+ * keep the device record alive...<br>
+ */<br>
+static void<br>
+ether_ifdetach_pvt(struct ifnet *ifp)<br>
+{<br>
+        ifp->if_flags = 0;<br>
+        ifp->if_ioctl = 0;<br>
+        ifp->if_start = 0;<br>
+        ifp->if_watchdog = 0;<br>
+        ifp->if_init  = 0;<br>
+}<br>
+<br>
+static int<br>
+mveth_detach(struct mveth_softc *sc)<br>
+{<br>
+struct ifnet   *ifp = &sc->arpcom.ac_if;<br>
+       if ( ifp->if_init ) {<br>
+               if ( ifp->if_flags & (IFF_UP | IFF_RUNNING) ) {<br>
+                       printf(DRVNAME"%i: refuse to detach; interface still up\n",sc->pvt.port_num+1);<br>
+                       return -1;<br>
+               }<br>
+               mveth_stop(sc);<br>
+/* not implemented in BSDnet/RTEMS (yet) but declared in header */<br>
+#define ether_ifdetach ether_ifdetach_pvt<br>
+               ether_ifdetach(ifp);<br>
+       }<br>
+       free( (void*)sc->pvt.ring_area, M_DEVBUF );<br>
+       sc->pvt.ring_area = 0;<br>
+       sc->pvt.tx_ring   = 0;<br>
+       sc->pvt.rx_ring   = 0;<br>
+       sc->pvt.d_tx_t    = sc->pvt.d_tx_h   = 0;<br>
+       sc->pvt.d_rx_t    = 0;<br>
+       sc->pvt.avail     = 0;<br>
+       /* may fail if ISR was not installed yet */<br>
+       BSP_remove_rtems_irq_handler( &irq_data[sc->pvt.port_num] );<br>
+       return 0;<br>
+}<br>
+<br>
+#ifdef MVETH_DEBUG<br>
+struct rtems_bsdnet_ifconfig mveth_dbg_config = {<br>
+       name:                           DRVNAME"1",<br>
+       attach:                         rtems_mve_attach,<br>
+       ip_address:                     "192.168.2.10",         /* not used by rtems_bsdnet_attach */<br>
+       ip_netmask:                     "255.255.255.0",        /* not used by rtems_bsdnet_attach */<br>
+       hardware_address:       0, /* (void *) */<br>
+       ignore_broadcast:       0,                                      /* TODO driver should honour this  */<br>
+       mtu:                            0,<br>
+       rbuf_count:                     0,                                      /* TODO driver should honour this  */<br>
+       xbuf_count:                     0,                                      /* TODO driver should honour this  */<br>
+};<br>
+#endif<br>
+#endif<br>
-- <br>
2.39.3<br>
<br>
</blockquote></div>