PCI API

gregory.menke at gsfc.nasa.gov
Wed Mar 2 19:21:49 UTC 2005


Kate Feng writes:
 > 
 > I did have concerns about limiting the number of buses, which is
 > why I was reluctant to go with the alternatives either.  Eight
 > seems to be enough for my application, but I was not sure it would
 > be enough for other, larger applications and larger frames.

Are you sure you want to impose your architecture's design &
preconceptions on everyone else?  What makes you think that 8 or 16 or
32 is sufficient?  Have you also figured out how you're going to
partition the pci address spaces?


 > I still think that adding the PCI bus number to the PCI API is a
 > more versatile and long-term solution that accommodates any form
 > of bus architecture and any scale of application, large or small.
 > See more below.

What makes you so sure of that?  And have you evaluated the impact
that adding a parameter to all the pci calls will have on code
generation for the affected architectures?
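To make the cost concrete, here is a rough sketch of the two shapes the
config accessors could take.  These are illustrative stubs, not the
literal RTEMS prototypes, and the _tree variant is a name I made up:

#include <stdint.h>

/* Current shape: bus/slot/function already identify the device.  The
   bodies are stubs; only the signatures matter for this argument. */
int pci_read_config_dword(uint8_t bus, uint8_t slot, uint8_t fun,
                          uint8_t offset, uint32_t *val)
{
    (void)bus; (void)slot; (void)fun; (void)offset;
    *val = 0;
    return 0;
}

/* Proposed shape: one extra argument on every accessor, threaded
   through every call site in every driver on every architecture. */
int pci_read_config_dword_tree(int tree, uint8_t bus, uint8_t slot,
                               uint8_t fun, uint8_t offset, uint32_t *val)
{
    (void)tree; (void)bus; (void)slot; (void)fun; (void)offset;
    *val = 0;
    return 0;
}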


 > On the mvme5500, PCI0 allows seven more PCI devices via the PMC
 > slot and PMCspan.  PCI1 is already occupied by the 1 GHz network and
 > has one more PMC slot (no PMCspan allowed).  Imagine that an
 > application has five devices on PCI0.  One day it needs to
 > hot-add two more PCI devices after the system has
 > been in service for three months.  However, the application cannot
 > disturb any other devices on PCI1 - at least not the built-in 1 GHz
 > network.  What is the new bus number for the two added devices
 > under the original proposal?

Assign a range of bus numbers to each tree so the bridges can be
reconfigured as necessary.  Once you start assuming things about how
busses will be deployed, you start breaking things.
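To make "a range of bus numbers per tree" concrete, here's a rough
sketch.  The table contents are invented for the mvme5500 example, not
taken from any BSP; the point is that the existing single bus number
still identifies the device and each tree can renumber its bridges
inside its own slice:

#include <stdio.h>

struct pci_tree_range {
    const char *name;       /* host bridge, e.g. PCI0 on the mvme5500 */
    unsigned    first_bus;  /* first bus number owned by this tree    */
    unsigned    last_bus;   /* last bus number owned by this tree     */
};

static const struct pci_tree_range trees[] = {
    { "PCI0",   0, 127 },   /* room to renumber hot-added bridges     */
    { "PCI1", 128, 255 },   /* the 1 GHz network stays undisturbed    */
};

/* Map a flat bus number back to its tree; returns -1 if out of range. */
static int pci_tree_of_bus(unsigned bus)
{
    for (unsigned i = 0; i < sizeof(trees) / sizeof(trees[0]); i++)
        if (bus >= trees[i].first_bus && bus <= trees[i].last_bus)
            return (int)i;
    return -1;
}

int main(void)
{
    printf("bus 5 lives on %s\n", trees[pci_tree_of_bus(5)].name);
    printf("bus 130 lives on %s\n", trees[pci_tree_of_bus(130)].name);
    return 0;
}

With that layout, hot-adding two devices behind PCI0 only consumes bus
numbers from PCI0's slice, so nothing on PCI1 has to move.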

If you're going to hot-plug pci bus trees, then you'd better have
support for it all the way up from the OS to the application layer; it's
not something you can conveniently bury in the pci api.  If nothing
else, you still have to manage the dynamic memory/IO mappings on a
per-tree basis, not to mention interrupt vectors and driver loading,
initialization and shutdown, so you're already going to be doing a lot
of work building infrastructure.  To get those things right, you'd
better have a well-conceived and well-refined pci api or you'll end up
with a mess.  This is a real-time OS, not Windows; we're supposed to
do things as right as we're able.
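A rough sketch of the per-tree state such infrastructure has to carry;
every name and field here is an assumption for illustration, not an
existing RTEMS interface:

#include <stdint.h>
#include <stddef.h>

struct pci_tree_resources {
    uintptr_t mem_base, mem_size;  /* dynamic memory window for the tree */
    uintptr_t io_base,  io_size;   /* dynamic I/O window for the tree    */
    int       irq_base, irq_count; /* interrupt vectors routed to it     */
};

/* Hooks a hot-plug layer would need per tree.  All hypothetical; the
   point is that none of this fits inside a config-space read/write
   call, which is why the pci api alone can't carry hot-plug. */
struct pci_hotplug_ops {
    int (*allocate_windows)(struct pci_tree_resources *r,
                            size_t mem_bytes, size_t io_bytes);
    int (*route_interrupts)(struct pci_tree_resources *r, int slot);
    int (*load_driver)(unsigned bus, unsigned slot, unsigned fun);
    int (*shutdown_driver)(unsigned bus, unsigned slot, unsigned fun);
};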

What happens when the next mvme generation comes around with 15 pci
bridges on different trees, the ethernet device moves from pci tree
0 to pci tree 12, and a few more ethernet ports are added to other
trees?  This is a bizarre example of course, but the pci interface had
better handle it just as easily as something straightforward.
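The way to survive that is for callers to locate hardware by identity
rather than by a hard-coded tree or bus number.  A toy sketch, with a
fabricated device table standing in for real enumeration results:

#include <stdio.h>
#include <stdint.h>

struct pci_dev { uint16_t vendor, device; unsigned bus, slot, fun; };

/* Pretend enumeration results; the bus numbers already encode which
   tree a device landed on, and the IDs are made up. */
static const struct pci_dev devs[] = {
    { 0x8086, 0x100e, 130, 4, 0 },  /* an ethernet, wherever it ended up */
    { 0x10e3, 0x0148,   0, 6, 0 },  /* some other device                 */
};

/* Find the nth instance of a vendor/device pair, whatever tree it's on. */
static const struct pci_dev *pci_find(uint16_t vendor, uint16_t device, int nth)
{
    for (unsigned i = 0; i < sizeof(devs) / sizeof(devs[0]); i++)
        if (devs[i].vendor == vendor && devs[i].device == device && nth-- == 0)
            return &devs[i];
    return NULL;
}

int main(void)
{
    const struct pci_dev *eth = pci_find(0x8086, 0x100e, 0);
    if (eth)
        printf("ethernet found at bus %u slot %u\n", eth->bus, eth->slot);
    return 0;
}

An application written this way doesn't care whether the ethernet sits
on tree 0 or tree 12.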

Gregm



