libblock performance question
Eugeny S. Mints
Eugeny.Mints at oktet.ru
Tue Oct 15 08:19:11 UTC 2002
On Mon, 14 Oct 2002, Till Straumann wrote:
> Joel Sherrill wrote:
>
> >
> >Till Straumann wrote:
> >
> >>Eugeny S. Mints wrote:
> >>
> >>>On Mon, 14 Oct 2002, Joel Sherrill wrote:
> >>>
> >>>>Till Straumann wrote:
> >>>>
> >>>>>Hi.
> >>>>>
> >>>>>I am thinking about using libblock for implementing an NFS client.
> >>>>>However, I am a little bit concerned about libblock using task
> >>>>>preemption disabling. Code that is executed with preemption
> >>>>>disabled is not trivially short - therefore, I am a bit concerned that
> >>>>>using libblock might degrade dispatching latency for high priority tasks
> >>>>>(who possibly are not using a FS or libblock whatsoever).
> >>>>>
> >>>>>Can anybody resolve my doubts?
> >>>>>
> >>>>I didn't do any extensive algorithmic analysis but I don't see anything
> >>>>particularly alarming. There were a handful of DISABLE_PREEMPTION
> >>>>macro invocations in bdbuf.c. Some protected AVL tree operations which
> >>>>should not be O(n) but more like O(log2 n) so that isn't very bad.
> >>>>You can have 255 blocks to manage and still only do 8 "operations."
> >>>>
> >>>>The other path seems to be freeing buffers and that is no worse
> >>>>than any RTEMS internal critical section -- chain append + semaphore
> >>>>release.
> >>>>
> >>>>Do you see any O(n) loops?
> >>>>
> >>Hmmm - it's just that protecting stuff by disabling preemption makes me
> >>nervous (Joel: remember the malloc info incident?). IMHO, task preemption
> >>should be avoided and only used for limited stretches of code. Is there
> >>any good reason for _not_ using semaphore/mutex protection in libblock?
Unfortunately I think there is :(. I can't give clear details
right now, but I'm afraid the deadlock problem can't be
solved easily with semaphore/mutex protection. Again, it
seems to us that the current preemption disabling is not
dangerous, but of course we also understand that preemption
disabling should be used ONLY in the cases where it is really
needed, to avoid unnecessary latencies.
> >>
> >
> >I asked that in an earlier response. My impression was that the disable
> >of preemption did not appear to be too long but using a mutex would be
> >better.
> >
> >>Also, unless I'm overlooking something, it seems to me that just disabling task
> >>preemption for mutual exclusion is not really SMP safe - is it?
> >>
> >
> >It could be but it depends upon the synchronization between the CPUs.
> >In the RTEMS MP model, it would be safe.
> >
> So different CPUs live in different address spaces - only one CPU would
> effectively execute libblock? Sorry - I'm not really familiar with the
> RTEMS MP model, as I just realize...
>
> >
> >>Then: the code sections with task preemption disabled are not limited to
> >>libblock/bdbuf but extend to the disk driver ioctl() -- who knows what the
> >>disk driver implementor does in his ioctl() - all with exclusive access
> >>(except for interrupts) to one CPU?? Of course, I don't understand the
> >>last details of the code, but it seems safe to enable preemption around
> >>the ioctl() calls.
> >>
> >
> >I missed the ioctl() so that is an open hole. Where is that call?
> >
>
> One is in rtems_bdbuf_read(), the other one is done from the swapout task
> (which is in non-preemptible mode).
I don't agree that such ioctls are holes. These ioctls are
only BLKIO_REQUEST ioctls, which SHOULD be very short and
non-blocking - such an ioctl should only put the request into
the queue and return. I agree that this assumption should be
written into the libblock documentation :) and that special
attention should be paid to it.
But it also seems that currently libblock will mainly be used
with the ATA driver, which satisfies this assumption. And
because the ATA driver is independent of any particular IDE
chip driver (the ATA driver manages the request queues by
itself), this assumption will be met for each IDE chip.
>
> BTW: it seems that "find_or_assign_buffer()", when executing the
> 'goto again' branch, repeatedly acquires the bufget_sema...
I've looked through it - everything seems to be OK.
>
> -- Till
>
>
> >
> >
> >>BTW, Eugeny: It seems that the swapout task releases the disk _before_ it
> >>obtains the transfer_sema - is that OK?
Here it is OK - transfer_sema is an attribute of a particular
buffer and is not linked to a particular device (dd in the
case of the swapout task). Even if the buffer is synchronized
after dd is removed, nothing dangerous happens - the buffer
is returned to the LRU chain, which is also not linked to the
internal data structures of any particular device
(disk_device).
But you remind me of a bug in rtems_bdbuf_get: there, the
rtems_disk_release(dd) call right before DISABLE_PREEMPTION
is wrong, because the pdd pointer, which is used later in the
ioctl, may be null if, between the rtems_disk_release(dd)
call and DISABLE_PREEMPTION, some high priority task removes
both devices (pointed to by dd and pdd). This bug can be
fixed by simply moving the rtems_disk_release(dd) call below
the find_or_assign_buffer(pdd, block, &bd_buf) call.
The same bug was in rtems_bdbuf_read; I fixed it in that
routine but forgot about rtems_bdbuf_get :(
> >>
> >>I apologize for stirring up these issues but IMO you can never be careful
> >>enough...
> >>
> >
> >No you can't be. Great performance can be fragile. :)
> >
> >>Regards
> >>-- Till
> >>
> >>>I answered Till exactly the same yesterday, but
> >>>unfortunately it seems the letter didn't reach the rtems-users
> >>>list even though the address was in CC :( (probably because
> >>>I answered from an unusual e-mail address and the list filter
> >>>dropped it)
> >>>
> >>>>Looking at this, I don't see why DISABLE_PREEMPTION couldn't be
> >>>>replaced with a mutex though.
> >>>>
> >>>>>Cheers,
> >>>>>--Till.
> >>>>>
> >
>
>
>
--
Eugeny S. Mints
OKTET Ltd.
1 Ulianovskaya st., Petergof, St.Petersburg, 198904 Russia
Phone: +7(812)428-4384 Fax: +7(812)327-2246
mailto:Eugeny.Mints at oktet.ru