Dual-Ported Memory & Shared Memory Support
Chris Caudle
chris at chriscaudle.org
Mon Oct 21 22:12:14 UTC 2002
On Friday 18 October 2002 7:12 am, Salman wrote:
> But what I don't understand is the Shared Memory Support. Can someone
> tell me what this Shared Memory Support was designed for, and whether I
> would need it or not for an SCI network.
That probably depends on how you want to use the SCI connections. Do you
want to use them for fast message passing, or to combine the physical memory
of multiple nodes into a single shared address space?
> If porting of SCI drivers is to be very time-consuming (which
> by the looks of the driver source code, it is),
Again, it all depends on how you want to use SCI. SCI has a memory-mapped
mode which should be fairly simple to use. The "driver" would consist
mostly of hardware initialization and memory map setup. Once that is done,
everything is just read/write commands, which could range from something as
simple as issuing a load or store instruction from your processor to setting
up a DMA operation so your memory controller can move big chunks of data
without processor intervention.
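To make that concrete, here is a minimal sketch of what memory-mapped SCI
access could look like once initialization is done. The base address and
names are hypothetical placeholders, not a real SCI mapping:

/*
 * Minimal sketch of memory-mapped SCI access, assuming the driver's
 * init code has already mapped a remote node's memory segment into
 * the local address space.  REMOTE_BASE is a hypothetical placeholder
 * address, not a real SCI mapping.
 */
#include <stddef.h>
#include <stdint.h>

#define REMOTE_BASE ((volatile uint32_t *)0xA0000000) /* hypothetical */

/* Write a word into the remote node's memory: just a store. */
static inline void sci_put_word(size_t offset, uint32_t value)
{
    REMOTE_BASE[offset] = value;
}

/* Read a word back from the remote node's memory: just a load. */
static inline uint32_t sci_get_word(size_t offset)
{
    return REMOTE_BASE[offset];
}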
I get the impression from what you wrote that when you say "SCI driver"
you actually mean "SCI driver plus Linux MPI implementation."
> Aside from this, can I have your opinions on message passing over
> Ethernet? We might consider having
> RTEMS on an Ethernet-based computer cluster (much slower, obviously).
Depends on what kind of latency you need.
Often when someone says "Ethernet" what he really means is "TCP/IP over
Ethernet."
First, if you don't need routing and the ability to go over wide area
networks, skip TCP/IP.
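For a sense of how little framing raw Ethernet needs, here is a sketch of
an Ethernet II header in C. The struct and macro names are just
illustrative, though 0x88B5 really is the IEEE local experimental
EtherType:

/*
 * Sketch of an Ethernet II frame header: with no IP or TCP headers,
 * your payload starts right after these 14 bytes.
 */
#include <stdint.h>

#define ETHERTYPE_CUSTOM 0x88B5  /* IEEE local experimental EtherType */

struct eth_header {
    uint8_t  dest[6];  /* destination MAC address */
    uint8_t  src[6];   /* source MAC address */
    uint16_t type;     /* EtherType, big-endian on the wire */
} __attribute__((packed));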
Second, if you are going to use raw Ethernet, then think very carefully
about how your driver works. In a virtual memory system like Unix, Linux,
Windows NT, etc., your data typically has to cross a boundary between
application address space and kernel address space. That usually involves
copying the data from one memory buffer to another, which wastes time.
Some schemes avoid the memory copy, but then you have to make sure the
memory buffer gets locked into physical memory so that the virtual memory
system won't swap it out, and at some point you still have to translate
between virtual addresses and physical or kernel addresses; all of these
are additional complications that you really don't need here.
In a flat memory model system without memory protection, such as
RTEMS, you have the option of just passing around pointers to the data and
never copying the data more often than needed.
This requires rethinking the driver, but if you design your driver to
wait on addresses placed in a message queue, and then have the
Ethernet hardware go grab the data once it has the pointer, you should be
able to get fairly decent performance.
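As a rough sketch of that scheme using the RTEMS classic message queue API
(the queue ID, task, and DMA hook are hypothetical placeholders, not a
real driver interface):

#include <rtems.h>
#include <stddef.h>

extern rtems_id tx_queue_id;                        /* created elsewhere */
extern void enet_start_dma(void *buf, size_t len);  /* hypothetical hardware hook */

struct tx_request {
    void  *data;  /* pointer to the application's buffer, never copied */
    size_t len;
};

rtems_task enet_tx_task(rtems_task_argument arg)
{
    struct tx_request req;
    size_t size;

    (void) arg;
    for (;;) {
        /* Block until the application posts a pointer; no data copy. */
        rtems_message_queue_receive(tx_queue_id, &req, &size,
                                    RTEMS_WAIT, RTEMS_NO_TIMEOUT);
        /* Hand the buffer address straight to the Ethernet DMA engine. */
        enet_start_dma(req.data, req.len);
    }
}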
With Gigabit Ethernet, you should be aiming for about 100 microseconds of
transmit latency.
With SCI, or any of its more recent follow-on protocols like RapidIO or
InfiniBand, you could have latency in the 2 to 10 microsecond range.
That is a big difference, but there is also a large cost difference, so you
have to evaluate whether you really need, or can take advantage of, the
lower latency offered by SCI/RIO/IB.
--
Chris Caudle