PCI API (was: Add support for DEC/Intel 21150 PCI-PCI bridge)

gregory.menke at gsfc.nasa.gov
Wed Mar 2 15:55:27 UTC 2005


Kate Feng writes:
 > Till Straumann wrote:
 > 
 > > gregory.menke at gsfc.nasa.gov wrote:
 > >
 > > >Till Straumann writes:
 > > >
 > > > > OTOH, I suggested to collapse the individual ranges of
 > > > > sub-busses behind peer hostbridges into one single range in
 > > > > order to make the presence of peers transparent to the API.
 > > >
 > > >This is the right way to go in my opinion.
 > 
 > Pre-assigning a range of bus numbers was what I had in mind as an
 > alternative if one insists on not changing the PCI API of previously
 > written drivers. I never meant to hardcode the PCI bus number or to
 > write 255 copies of drivers. I meant that BSP_pciFindDevice() should
 > return the PCI number as well, in addition to the bus number, device
 > number and function number, to reflect the
The only way I can see this working would be to add a "pci #" to all
the pci functions, in addition to the bus, slot and function numbers.
I don't think it's the right approach.
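
To make that concrete: a hypothetical sketch (not an existing API; the
name pcin_read_config_byte is made up) of what every config-access
prototype would turn into, assuming the usual (bus, slot, function)
style of pci_read_config_byte():

    /* Hypothetical: every prototype and every caller grows a host
     * bridge index in addition to the b/d/f tuple.
     */
    int pcin_read_config_byte(unsigned char pci,      /* host bridge # */
                              unsigned char bus,      /* bus within it */
                              unsigned char slot,
                              unsigned char function,
                              unsigned char offset,
                              unsigned char *val);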


 > hardware architecture. It would be less confusing if the PCI number
 > were used as one of the parameters in pci_write_config_xxx() and
 > pci_read_config_xxx(). After all, PCI0 and PCI1 each have 66 MHz of
 > PCI bandwidth for their devices. Without the PCI number, one would
 > have to keep remembering that bus numbers 0..7 share 66 MHz of
 > bandwidth, and bus numbers 8..15 share another 66 MHz of bandwidth.
 > This might not be a big deal anyway if it is really painful to modify
 > the previously written drivers.
 
All this is highly idiosyncratic to the hardware you're working with.
It might make sense there, but it doesn't on other machines.

  
 > I thought pre-assigning eight buses on each PCI is enough. There
 > should be no need to set it to 16. Right?
 
No, each pci bus tree should consume whatever number of busses it
needs, 8, 99, 200, whatever.  The pci api should have a routing table
identifying what ranges of busses are associated with each pci bus
tree.  The find device function should know about this routing table
and search all trees.  Presumably the whole routing table thing can be
#ifdef'ed out when compiling for single pci tree architectures.
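
As a rough illustration of the routing-table idea (the names, the
MULTI_PCI_TREE conditional and the ranges are all invented here; this
is a sketch, not existing code), using Till's 8-bus/5-bus example
quoted further down:

    #include <stdint.h>

    /* Each peer host bridge claims a contiguous range of global bus
     * numbers; config access translates global bus -> (tree, local bus).
     */
    struct pci_bus_range {
        uint8_t first_bus;   /* first global bus owned by this tree */
        uint8_t last_bus;    /* last global bus owned by this tree  */
    };

    #ifdef MULTI_PCI_TREE
    static struct pci_bus_range pci_routes[] = {
        { 0, 7 },            /* host bridge 0: busses 0..7       */
        { 8, 12 },           /* host bridge 1: local busses 0..4 */
    };
    #define PCI_NTREES (sizeof(pci_routes)/sizeof(pci_routes[0]))

    /* Return the owning tree index (or -1) and the bus number local
     * to that tree's host bridge.
     */
    static int pci_route(uint8_t bus, uint8_t *local_bus)
    {
        unsigned i;
        for (i = 0; i < PCI_NTREES; i++) {
            if (bus >= pci_routes[i].first_bus &&
                bus <= pci_routes[i].last_bus) {
                *local_bus = (uint8_t)(bus - pci_routes[i].first_bus);
                return (int)i;
            }
        }
        return -1;
    }
    #else
    /* Single-tree BSPs compile the routing away: identity mapping. */
    static int pci_route(uint8_t bus, uint8_t *local_bus)
    {
        *local_bus = bus;
        return 0;
    }
    #endif

A find-device routine can then walk the global bus numbers and let
pci_route() steer each config cycle to the right host bridge, so
drivers keep seeing a single flat (bus, slot, function) space.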

If there is concern about dynamically extending the number of busses
via hot-plugging or whatever, then each tree should still consume its
necessary quantity of bus numbers and the pci setup routine should add
some #defined constant to offset the starting bus number of the next
tree.
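
Building on the pci_routes table sketched above (enumerate_tree() and
PCI_BUS_HEADROOM are placeholders invented for this sketch, and
pci_routes is assumed to have room for ntrees entries):

    /* Placeholder for the BSP's per-bridge enumeration; returns the
     * last bus number it assigned within that tree.
     */
    extern uint8_t enumerate_tree(unsigned tree, uint8_t base_bus);

    #define PCI_BUS_HEADROOM 8   /* spare bus numbers per tree */

    static void pci_assign_ranges(unsigned ntrees)
    {
        unsigned tree;
        uint8_t next_base = 0;

        for (tree = 0; tree < ntrees; tree++) {
            uint8_t last = enumerate_tree(tree, next_base);
            pci_routes[tree].first_bus = next_base;
            pci_routes[tree].last_bus  = last;
            /* leave a gap so hot-plugged busses don't renumber
             * the next tree
             */
            next_base = (uint8_t)(last + 1 + PCI_BUS_HEADROOM);
        }
    }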

I imagine a compromise could be made, dividing the bus space up into
regions, each pci device tree being given one region - pretty much
what you're suggesting, if I understand you.  I'd rather have that
than a "pci #" parameter added to all the api functions.

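If one went the fixed-region route, the translation degenerates to
arithmetic; a sketch with a made-up region size:

    /* Carve the 256-bus space into equal regions, one per host
     * bridge; tree and local bus fall out of simple arithmetic.
     */
    #define PCI_REGION_SIZE    32
    #define PCI_TREE_OF(bus)   ((bus) / PCI_REGION_SIZE)
    #define PCI_LOCAL_BUS(bus) ((bus) % PCI_REGION_SIZE)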
 

 > > >
 > > > > E.g., if a given board has 2 peer hostbridges and a given PCI
 > > > > configuration (i.e., native devices + PMC cards and extensions
 > > > > etc.) results in the first hostbridge having 8 sub-busses and
 > > > > the second one 5, then I would collapse the two bus trees into
 > > > > one ranging from bus #0..12. Only the configuration access
 > > > > routines (which are a BSP-dependent piece) would know that
 > > > > bus #0..7 actually reside at the first hostbridge and
 > > > > 8..12 -> bus #0..4 at the second one.
 > > >
 > > >But if the bus numbers are set up properly then it simply doesn't
 > > >matter.
 > > >
 > > Sure - but unless the peer hostbridge hardware has some magic to
 > > handle peers, you have to handle this in software. Maybe the
 > > MVME5500 does not have this magic (I don't have the Discovery
 > > docs), i.e., the peer bridges each have their own individual
 > > 'space' of b/d/f-tuples. In this case, we must handle peer
 > > bridges in software,
 > 
 > The 'magic' I was referring to is another concept. The original
 > proposal would limit future applications that hot-change the PCI
 > system configuration while leaving the other PCI devices (e.g. the
 > 1 GHz network) intact and the system still running.

You are making wild assumptions here about bus architectures and
startup sequences.

Gregm



