splnet() and friends
Eric Norum
eric.norum at usask.ca
Mon May 6 14:56:47 UTC 2002
On Monday, May 6, 2002, at 08:18 AM, Vyacheslav V. Burdjanadze wrote:
Good day.
I found that the spl*() routines (splx, splnet, and splimp especially)
do nothing in RTEMS, yet they appear to perform synchronization in the
original BSD code. So if we have two receiving interfaces whose
encapsulation is IP, ipintr() will add packets to the IP input queue
without locking/unlocking it, and under high load this MAY lead to
lost mbufs or even a crash. I haven't actually observed this condition,
but it seems possible to me. When I looked at splnet the situation
seemed even worse to me :-/. Can you give some comments on this?
PS:
Using rtems_bsdnet_semaphore to lock everything in the networking code
is not a good idea - it will reduce the maximum bandwidth.
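
For reference, the path you are worried about looks roughly like this
in the original BSD sources (a simplified sketch from memory; the two
helper function names are mine, not the actual function boundaries):

    #include <sys/param.h>
    #include <sys/mbuf.h>
    #include <net/if.h>
    #include <net/netisr.h>

    extern struct ifqueue ipintrq;      /* the IP input queue */

    /* Driver side (interrupt context): queue one IP packet and schedule
     * the IP software interrupt.  splimp()/splx() keep the queue
     * consistent against other interrupts. */
    static void
    enqueue_ip_packet(struct mbuf *m)
    {
        int s = splimp();
        if (IF_QFULL(&ipintrq)) {
            IF_DROP(&ipintrq);
            m_freem(m);
        } else {
            IF_ENQUEUE(&ipintrq, m);
            schednetisr(NETISR_IP);
        }
        splx(s);
    }

    /* ipintr() side (software interrupt): drain the queue with the same
     * protection around each dequeue. */
    static void
    drain_ip_queue(void)
    {
        struct mbuf *m;
        int s;

        for (;;) {
            s = splimp();
            IF_DEQUEUE(&ipintrq, m);
            splx(s);
            if (m == NULL)
                break;
            /* ... IP input processing of m ... */
        }
    }

In BSD all of this runs at interrupt level, which is why the spl*()
protection matters there. The situation in the RTEMS port is different.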
One of the primary design goals of a real-time system is that the system
exhibit predictable response times to external events. To meet this
requirement the system should spend an absolute minimum amount of time
running with interrupts disabled. Time spent in interrupt handlers
should also be kept to a minimum. When I ported the BSD network stack
to RTEMS I wanted to minimize the effect the stack had on system
response time, so I arranged things such that no network stack code
runs in the context of an interrupt handler. I moved all network
device I/O into receive and transmit daemon RTEMS tasks.
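
For example, the receive side of a typical RTEMS network driver ends up
looking something like this (just a sketch: the softc layout, the event
number, and the helpers that talk to the hardware are made up for
illustration):

    #include <rtems.h>
    #include <rtems/rtems_bsdnet.h>
    #include <sys/mbuf.h>
    #include <net/if.h>
    #include <netinet/if_ether.h>

    #define RX_EVENT  RTEMS_EVENT_1   /* event sent by the receive ISR */

    struct my_softc {                 /* illustrative driver state */
        struct arpcom arpcom;         /* contains the ifnet (ac_if) */
        /* ... hardware descriptor rings, statistics, etc. ... */
    };

    /* Hypothetical hardware helpers, just to keep the sketch complete. */
    extern int          my_rx_frame_ready(struct my_softc *sc);
    extern struct mbuf *my_rx_get_mbuf(struct my_softc *sc);

    static void
    rxDaemon(void *arg)
    {
        struct my_softc *sc = arg;
        struct ifnet *ifp = &sc->arpcom.ac_if;
        rtems_event_set events;

        for (;;) {
            /* Block until the receive ISR sends RX_EVENT.  While the
             * daemon is blocked here the network semaphore is released,
             * so other tasks can run network code. */
            rtems_bsdnet_event_receive(RX_EVENT,
                                       RTEMS_WAIT | RTEMS_EVENT_ANY,
                                       RTEMS_NO_TIMEOUT,
                                       &events);

            /* Hand received frames to the stack.  We are in task
             * context holding the network semaphore, so no spl*()
             * protection is needed around ether_input(). */
            while (my_rx_frame_ready(sc)) {
                struct mbuf *m = my_rx_get_mbuf(sc);
                struct ether_header *eh = mtod(m, struct ether_header *);

                m_adj(m, sizeof(struct ether_header));
                ether_input(ifp, eh, m);
            }
        }
    }

The transmit side is structured the same way: the interrupt handler
just wakes the transmit daemon, which does the real work in task
context.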
I also had to deal with the throughput/response tradeoff when porting
the stack. BSD kernel semantics require mutual exclusion of tasks
active in the kernel. I provided these semantics through the use of the
single rtems_bsdnet_semaphore mutex. Given the use of this mutex, and
the fact that all network stack code runs in non-interrupt context,
there is no need for the finer-grained exclusion provided by the
spl*() routines. No race conditions are introduced by the fact that
the spl*() routines do nothing in RTEMS.
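
In other words, the port boils down to something like the following
(paraphrased, not the literal RTEMS sources; the exact form of the
macros differs, and the socket body shown is a placeholder):

    /* The spl*() primitives compile away to (almost) nothing ... */
    #define splnet()     0
    #define splimp()     0
    #define splx(_level) do { (void)(_level); } while (0)

    /* ... and instead every entry into the stack from application code
     * is bracketed by the one network mutex.  Roughly: */
    int
    socket(int domain, int type, int protocol)
    {
        int fd;

        rtems_bsdnet_semaphore_obtain();    /* take the big lock */
        fd = bsd_socket_internals(domain, type, protocol); /* placeholder */
        rtems_bsdnet_semaphore_release();   /* give it back      */
        return fd;
    }

The receive/transmit daemons and the network timer task obtain and
release the same semaphore, so exactly one task is ever inside the BSD
code at a time.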
Use of a single mutex to protect the BSD code may indeed reduce the
maximum throughput of the network stack. In practice though I don't
believe this is a major problem. My test system during the port was
(and is) a 25 MHz MC68360. This is a pretty slow processor by modern
standards, yet it can saturate its 10baseT Ethernet connection when
running the ttcp network timing program.
To summarize, when porting the stack I settled throughput/response
design decisions on the side of response. The effect of these decisions
was to make spl*() unnecessary. The single kernel mutex does not seem
to cause problems in real applications.
--
Eric Norum <eric.norum at usask.ca>
Department of Electrical Engineering
University of Saskatchewan
Saskatoon, Canada.
Phone: (306) 966-5394 FAX: (306) 966-5407