gregory.menke at gsfc.nasa.gov gregory.menke at gsfc.nasa.gov
Thu Mar 3 04:21:04 UTC 2005

I think you have arranged a solution that only you will use: one more
bsp-specific PCI kludge instead of a proper unified API.  Best of luck.

Kate Feng writes:
 > Conclusion:
 > Done!  Everyone will be happy.  Thanks to all for clearing the
 > confusion.  I will add:
 > #define  MaxPCIbridgePerBandwidth  8      /* Did not figure out a better name yet */
 > to the bsp.h of mvme5500.  Thus I can use the original PCI API.

 > However, in your previous e-mail, you wrote:
 > >That may be acceptable for one particular current hardware implementation, but is completely unacceptable for
 > >general use.
 > >Once you transition to PCI-X it is common to have one bus segment per connector so that you can run at the
 > >maximum data rate (133M transfers/s, or 266M if you use the dual rate variant).  PCI Express systems will have
 > >similar issues, because that architecture is inherently point to point and relies on switches.  The switches
 > >create multiple bus segments and look to the software like multiple PCI-PCI bridges.
 > >Some of the large servers I work on have 64 or more bus segments.  Those systems typically run a general purpose
 > >OS rather than a real time OS, but it seems unnecessarily limiting to design an API into the RTOS or even into
 > > the BSP which is known to be far below the architectural limitations of the bus.
 > Thus, you can tune your own value of MaxPCIbridgePerBandwidth
 > in your bsp.h.
 > How does that sound?

All I can say is: please keep your modifications out of the shared
components of the PPC PCI API.
