IRQ latency and context switching on mvme5500 (was Re: RTEMS mvme5500 bsp)
Peter Dufault
dufault at hda.com
Thu Jul 28 17:10:48 UTC 2005
On Jul 27, 2005, at 10:26 PM, Kate Feng wrote:
>
> Based on the test results, RTEMS-mvme5500 is more deterministic
> and steadier than vxWorks-mvme5500 for the highest priority task.
> For both the idle and the loaded system, RTEMS-mvme5500
> "GUARANTEED" a two to three times faster response time in a
> steadier state. The "worst case" is a critical factor when
> evaluating a real-time system.
>
Can you also include all the identifying information about the
vxWorks version, and whether you've modified anything from the
distribution?
I've also got a fairly big application running on RTEMS-mvme5500 and
now on vxWorks 5.5. I don't know why Wind River sold my client 5.5;
I wasn't involved in that. Maybe 6.x doesn't run on the MVME5500?
Maybe 6.x is more up to date? If anyone knows, please let me know.
FIRST, THOUGH, RTEMS doesn't do anything with the 64-bit counter
(the Time Base) on the PPC, right? That's where I get my timings.
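(For reference, this is roughly how I read it; a minimal sketch
assuming a GCC-style compiler on a 32-bit PowerPC, not code lifted
from either OS:)

    #include <stdint.h>

    /* Read the 64-bit PowerPC Time Base on a 32-bit CPU.  The upper
     * half is read twice to catch the lower half rolling over between
     * the two reads. */
    static inline uint64_t read_timebase(void)
    {
        uint32_t hi, lo, hi2;

        do {
            __asm__ volatile ("mftbu %0" : "=r" (hi));
            __asm__ volatile ("mftb  %0" : "=r" (lo));
            __asm__ volatile ("mftbu %0" : "=r" (hi2));
        } while (hi != hi2);

        return ((uint64_t)hi << 32) | lo;
    }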
I can't get to the system to provide full details, but here are a few
notes from memory.
1. The vxWorks distribution has the L2 and L3 instruction cache
disabled from the factory, with a note such as "per Wind River
support" in the Motorola config.h header file where it's disabled.
I've enabled the instruction cache without noting any problems.
Timings were particularly bad before that change.
2. The vxWorks header files are ANTIQUE, and my client just bought
it last month! gcc is "2.96". The claimed POSIX compliance is circa
1995, and even then it doesn't come close. The networking headers
are particularly bad. I had to put in a whole lot of workarounds to
get code that runs on Linux, FreeBSD, RTEMS, Cygwin, and OS/X to
work with the vxWorks "POSIX" interface.
3. With cache enabled, the context switch times are still
surprisingly slow on vxWorks. I'll quantify "surprisingly slow"
when I can dig up my notes, but my biggest surprise was the delay
between the semaphore post in my ISR and when I was finally running
in the high-priority task (set at PRIOMAX) activated by that
semaphore. There's a sketch of how I measure that after this list.
4. My networking code needed tweaking to work reliably and quickly
on vxWorks. I never could get receives to complete in a timely
fashion unless the receiving end knew ahead of time how much to
receive; even setting the low watermarks, send and receive buffer
sizes, etc., wouldn't get things working well. I finally gave up
and started sending a fixed minimum-size packet that says what will
follow in the next packet when the data doesn't fit in the first
one (also sketched after this list). The same code runs everywhere,
so that isn't a variable, and it probably improves things a bit on
every platform.
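On item 3, the measurement itself is nothing fancy. Roughly (a
sketch using POSIX semaphores and the read_timebase() helper above;
the names are made up for illustration, not lifted from my
application):

    #include <semaphore.h>
    #include <stdint.h>

    static sem_t irq_sem;
    static volatile uint64_t t_posted;  /* Time Base value at sem_post() */

    /* ISR: copy the samples off the PCI board, stamp the time, post. */
    void data_acq_isr(void)
    {
        /* ... copy data off the board ... */
        t_posted = read_timebase();
        sem_post(&irq_sem);
    }

    /* High-priority task: block on the semaphore, then measure how
     * long it took to get from the post in the ISR to running here. */
    void inner_loop_task(void)
    {
        uint64_t latency;

        for (;;) {
            sem_wait(&irq_sem);
            latency = read_timebase() - t_posted;
            /* ... track the worst case, then do the inner loop ... */
            (void)latency;
        }
    }

The worst case over a long run is the number I watch, not the
average.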
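On item 4, the framing is equally simple. Something along these
lines (a sketch; the header layout is made up to show the idea, not
my actual wire format, and byte order and partial sends are ignored
for brevity):

    #include <stdint.h>
    #include <string.h>
    #include <sys/types.h>
    #include <sys/socket.h>

    /* Every message starts with a fixed-size header packet.  Small
     * payloads travel in the header itself; anything bigger is
     * announced by total_len, so the receiver always knows how much
     * to ask for next. */
    #define HDR_PAYLOAD_MAX 64

    struct msg_hdr {
        uint32_t total_len;                 /* total payload length in bytes */
        uint8_t  payload[HDR_PAYLOAD_MAX];  /* first chunk rides along here */
    };

    static int send_msg(int sock, const void *data, size_t len)
    {
        struct msg_hdr hdr;
        size_t first = len < HDR_PAYLOAD_MAX ? len : HDR_PAYLOAD_MAX;

        memset(&hdr, 0, sizeof hdr);
        hdr.total_len = (uint32_t)len;
        memcpy(hdr.payload, data, first);

        if (send(sock, &hdr, sizeof hdr, 0) < 0)
            return -1;
        if (len > first &&
            send(sock, (const char *)data + first, len - first, 0) < 0)
            return -1;
        return 0;
    }

The receiver does a fixed-size receive of the header, looks at
total_len, and then knows exactly how much more to receive.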
I'm running a multi-axis control application and I've been doing
life testing with both RTEMS and vxWorks at the following rates.
I'm using my own drivers for analog I/O. The first number is RTEMS,
the second is vxWorks. I arrived at these numbers by detecting
analog input overruns, backing off a bit, and making sure the UDP
data is coming up OK (I live with occasional drops).
Data acquisition interrupt (copies data off the PCI board, posts a
semaphore): 22 kHz / 15 kHz
Inner loop (fields the semaphore, does the inner loop, outputs to
the DACs, runs safety checks, periodically posts a semaphore for
the outer loop, assembles data for UDP upload, posts the upload
semaphore): 22 kHz / 15 kHz
Outer loop (fields the semaphore, does the outer loop processing,
profiles motions): 2.2 kHz / 1.875 kHz
IP Command exerciser (receives a new position command, returns
status): 50 Hz / 50 Hz
Command monitor: runs when a command is typed in.
UDP Data Upload task: runs continuously when it can, priority set
below the system network tasks.
System tasks: the network tasks, and whatever else is started up
out-of-the-box.
Finally, let me note that this was a good exercise: the changes I
made to get vxWorks faster (the networking changes, plus custom
versions of semaphores with timeouts, since there is no
sem_timedwait and the POSIX timers are of little use on vxWorks)
also folded back into RTEMS and improved my life-test rate there
from 20 kHz to 22 kHz. The timed-wait wrapper is sketched below.
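For the curious, the timed-wait wrapper amounts to something like
this (a sketch; the function name and the tick conversion are mine,
and the preprocessor guard is whatever your build defines for
vxWorks):

    #include <errno.h>
    #include <time.h>

    #ifdef VXWORKS  /* or however your build identifies vxWorks */
    #include <semLib.h>
    #include <sysLib.h>

    /* vxWorks 5.5 has no sem_timedwait(), so fall back on semTake()
     * with the relative timeout converted to clock ticks. */
    int sem_take_timed(SEM_ID sem, const struct timespec *rel)
    {
        int rate  = sysClkRateGet();
        int ticks = rel->tv_sec * rate
                  + (int)(((long long)rel->tv_nsec * rate) / 1000000000LL);

        if (semTake(sem, ticks) != OK) {
            errno = ETIMEDOUT;
            return -1;
        }
        return 0;
    }
    #else
    #include <semaphore.h>

    /* Everywhere else: turn the relative timeout into an absolute one
     * and use the POSIX call directly. */
    int sem_take_timed(sem_t *sem, const struct timespec *rel)
    {
        struct timespec abs;

        clock_gettime(CLOCK_REALTIME, &abs);
        abs.tv_sec  += rel->tv_sec;
        abs.tv_nsec += rel->tv_nsec;
        if (abs.tv_nsec >= 1000000000L) {
            abs.tv_sec  += 1;
            abs.tv_nsec -= 1000000000L;
        }
        return sem_timedwait(sem, &abs);
    }
    #endif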
Peter