PowerPC Burst PCI access

Feng, Shuchen feng at bnl.gov
Fri Apr 8 03:23:36 UTC 2005


Hi Peter,

The whole 512MB of memory on the mvme5500 is cacheable.  The 1 GHz NIC
is on the PCI bus.  Its performance is slightly higher than that of
vxWorks at this point, and I think a patch in RTEMS might raise it
even higher.  If anyone is interested in the data, I will post it.


The mvme5500 has eight DMA channels, more than enough.


I do not think you have updated your PCI code to the unified PCI yet.
Once you have, can you do PCIx_read_config_dword(1,0,10,0, offset, &data)
with offset ranging from 0 to 0x40 in steps of 4?
For example:
PCIx_read_config_dword(1,0,10,0,0, &data)
PCIx_read_config_dword(1,0,10,0,4, &data)
...
...
PCIx_read_config_dword(1,0,10,0,0x40, &data)


What are all the returned values?
Also, what do inl(0xc00) and inl(0xc80) return?
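
In loop form, something like this should do it (a sketch only -- it
assumes the PCIx_read_config_dword signature shown above and the
printk and inl that your BSP provides):

#include <rtems.h>
#include <rtems/bspIo.h>  /* printk */
#include <bsp.h>          /* inl() and the PCI config access (assumed) */

void dump_cfg_and_ports(void)
{
    unsigned int offset, data;

    /* dump the first 0x44 bytes of the device's PCI config space */
    for (offset = 0; offset <= 0x40; offset += 4) {
        PCIx_read_config_dword(1, 0, 10, 0, offset, &data);
        printk("cfg[0x%02x] = 0x%08x\n", offset, data);
    }

    /* the two I/O ports asked about above */
    printk("inl(0xc00) = 0x%08x\n", inl(0xc00));
    printk("inl(0xc80) = 0x%08x\n", inl(0xc80));
}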


To digress for a moment, is the code you wrote for
PMCspan shareable with the RTEMS community?


Kate

-----Original Message-----
From: Till Straumann
To: Peter Dufault
Cc: RTEMS Users
Sent: 4/7/2005 9:39 PM
Subject: Re: PowerPC Burst PCI access

Peter Dufault wrote:

> These comments relate to the MVME5500 VME PowerPC port and a PCI data
> acquisition card.
>
> In my control application I noticed a while ago that reading back the
> data from the PCI data acquisition board is extremely slow.  The board
> (Acromag PMC341) collects the data into a pre-fetchable memory buffer
> that it says will transfer data at 40 MByte/s using burst reads.  I'm
> seeing about 4 MByte/s using naive reads and I eventually need to
> solve this problem.

That seems awfully slow. However, I can confirm that this matches what I
observe when I do PIO to my PMC card mentioned below (I can only do
16-bit PIO, though). When I use DMA the throughput is significantly
better: I observe latencies of 25us average and 45us worst case for
transferring 512 bytes [the latency covers setting up the DMA, the
actual data transfer, and dispatching of the 'DMA done' ISR].

> With a 20KHz control loop I'm losing about 1/4 of the CPU using naive 
> reads. 

So the memory is on the PMC card - it's not that the PMC can DMA data
over to memory on the MVME5500, right?

>
>
> As I understand it, if the memory region that the PowerPC fetches from
> is cacheable then the PowerPC will attempt to burst cache-line sized
> reads.  I think the mapping from PowerPC to DAQ card is first through
> the PowerPC BAT registers and then a PCI window.

Hmm - I suspect the PCI memory is mapped non-cacheable. I don't know if
it's safe to make it cacheable (do the PCI bus and the MVME5500's PCI
bridge implement coherency?). You have to be careful to avoid having
stale data in the cache.

Anyway, probably the best solution would be using DMA (I don't know if
the MVME5500 has a DMA controller).

Otherwise, you could try to use e.g. a page table entry (or a spare
BAT) to map that special region cacheable and then use 'dcbi' prior to
each new transfer.
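
Something along these lines -- a minimal sketch only, not tested on the
MVME5500: it assumes a 32-byte cache line (MPC74xx) and a region already
mapped cacheable, and it uses 'dcbf' rather than 'dcbi' since 'dcbi' is
supervisor-only:

#include <stdint.h>
#include <stddef.h>

#define CACHE_LINE 32  /* MPC74xx L1 data cache line size */

/* Flush/invalidate the cache lines covering [addr, addr+len) so that
 * the next reads fetch fresh data from the (cacheable-mapped) window.
 */
static void cache_inval_range(volatile void *addr, size_t len)
{
    uintptr_t p   = (uintptr_t)addr & ~(uintptr_t)(CACHE_LINE - 1);
    uintptr_t end = (uintptr_t)addr + len;

    for (; p < end; p += CACHE_LINE)
        __asm__ volatile ("dcbf 0,%0" : : "r"(p) : "memory");
    __asm__ volatile ("sync" ::: "memory");
}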

Sidenote: look at the PMC16AI64SS by generalstandards.com -- 64 x 16-bit
channels @ 100kHz on a PMC module with a FIFO + DMA controller. A really
nice card and less $/channel, it seems.

>
>
> Is my understanding about right?  Is there some code I should look at
> as an example?
>
> I also tried fetching the memory as a "long double" but I started
> getting exceptions in my tasks.  I assume my ISR started using floating
> point resources during the copy, which caused this, so I also need a
> pointer to a method of doing cache-line sized memory fetches without
> affecting the floating point state.

Bad - you *must not* use the FPU from an ISR. It results in FP register
corruption - I just filed a PR today regarding this...
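
For the copy itself, a loop that uses only integer loads stays clear of
the FPU (a hypothetical helper, not from any existing driver). It won't
burst by itself, but with the region mapped cacheable as described above
the cache hardware does the line-sized fetches on each miss:

#include <stdint.h>
#include <stddef.h>

/* Copy n 32-bit words using integer registers only -- no FP state is
 * touched, so this is safe to call from an ISR.
 */
static void copy_words(uint32_t *dst, volatile const uint32_t *src, size_t n)
{
    while (n--)
        *dst++ = *src++;
}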

HTH

-- Till

>
>
> Thanks,
>
> Peter
>
> Peter Dufault
> HD Associates, Inc.
>



