POSIX Async IO (GSoC)

Wei Shen cquark at gmail.com
Wed Apr 2 16:14:12 UTC 2008


Hi,

I have studied the Glibc implementation. Below I list some elementary
implementation thoughts; comments are welcome.

It seems that AIO is not as simple a task as I had thought, especially in
these respects:
(1) AIO requests have different priorities (defined by aiocb.aio_reqprio).
This can be realized by: a) using one task queue per priority level; or b)
inserting each new task into the queue in priority order (O(log N) per
insert if the queue is kept as a heap).
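Option b) could be sketched as a small binary max-heap keyed on the request
priority; the names below (struct task, heap_push, heap_pop) are illustrative,
not an existing Glibc interface:

```c
#include <stdlib.h>

/* Hypothetical task record; only the priority field matters here. */
struct task { int prio; };

/* A growable binary max-heap of task pointers, ordered by prio. */
struct task_heap {
    struct task **slot;
    size_t len, cap;
};

/* Insert in O(log N): append at the end, then sift the new task up. */
static void heap_push(struct task_heap *h, struct task *t)
{
    if (h->len == h->cap) {
        h->cap = h->cap ? h->cap * 2 : 8;
        h->slot = realloc(h->slot, h->cap * sizeof *h->slot);
    }
    size_t i = h->len++;
    while (i > 0 && h->slot[(i - 1) / 2]->prio < t->prio) {
        h->slot[i] = h->slot[(i - 1) / 2];   /* pull parent down */
        i = (i - 1) / 2;
    }
    h->slot[i] = t;
}

/* Remove the highest-priority task in O(log N): sift the last slot down. */
static struct task *heap_pop(struct task_heap *h)
{
    if (h->len == 0)
        return NULL;
    struct task *top = h->slot[0], *last = h->slot[--h->len];
    size_t i = 0;
    for (;;) {
        size_t c = 2 * i + 1;
        if (c >= h->len)
            break;
        if (c + 1 < h->len && h->slot[c + 1]->prio > h->slot[c]->prio)
            c++;                             /* pick the larger child */
        if (h->slot[c]->prio <= last->prio)
            break;
        h->slot[i] = h->slot[c];
        i = c;
    }
    h->slot[i] = last;
    return top;
}
```

A worker thread would then simply heap_pop() the next task to run, and
heap_push() keeps submission cheap even with many pending requests.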

(2) aio_cancel should support cancelling all the AIO tasks on a given FD.
This implies one task queue per FD; otherwise this call would be
time-consuming.
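One way to get a queue per FD is a small hash table keyed on the descriptor,
so that aio_cancel(fd, NULL) finds every pending request for that FD in one
lookup; all names here are hypothetical:

```c
#include <stdlib.h>

#define FD_BUCKETS 64

/* Hypothetical per-FD queue header; the pending request list for this
 * descriptor would hang off this structure. */
struct fd_queue {
    int fd;
    struct fd_queue *next;   /* hash-chain link */
};

static struct fd_queue *bucket[FD_BUCKETS];

/* Find the queue for fd; optionally create it if it does not exist yet. */
static struct fd_queue *fd_queue_lookup(int fd, int create)
{
    struct fd_queue **pp = &bucket[fd % FD_BUCKETS];
    for (; *pp; pp = &(*pp)->next)
        if ((*pp)->fd == fd)
            return *pp;
    if (!create)
        return NULL;
    struct fd_queue *q = calloc(1, sizeof *q);
    q->fd = fd;
    *pp = q;                 /* append to the end of the chain */
    return q;
}
```

Cancelling everything on one FD then reduces to one fd_queue_lookup() plus a
walk of that queue only, instead of scanning all pending requests.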

(3) Do we need to consider optimization when a worker thread selects a task
from a task queue?
For example, if a task belonging to one FD is still being processed, we could
consider tasks of other FDs first. Glibc seems to use one worker thread
per FD.

We also need to support two kinds of notification - signal (SIGEV_SIGNAL)
and callback (SIGEV_THREAD) - as well as immediate status query (aio_error)
and blocking on an AIO task (aio_suspend).

It would be useful to abstract a general task pool that supports various
async tasks besides AIO. Such a task pool should include:

(1) a number of task queues, and an interface for users to create and
locate a task queue based on a specific attribute of tasks (e.g. the FD
attribute of AIO tasks).

(2) a user-defined task processing handler (one handler per pool, per task
queue, or per task?), and perhaps a callback per task (maybe better to leave
this to the handler).
For the case of AIO, sharing one handler across the whole pool is enough.

(3) priority based task scheduling.

(4) task enqueuing, cancellation (matching by specific attributes), status
query, and blocked waiting.

(5) resource management/configuration/statistics
Including:
a) max number of worker threads, thread stack size, thread priority;
b) max number of task queues in the pool;
c) max number of tasks in the pool or in each queue;
d) dynamic adjustment of the number of worker threads.
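Items a)-d) above could be grouped into one attribute block that is passed
when the pool is created; the struct name, field names, and default values
below are all hypothetical:

```c
#include <stddef.h>

/* Hypothetical pool-attribute block, mirroring items a)-d) above. */
struct task_pool_attr {
    int    max_threads;      /* a) upper bound on worker threads      */
    size_t stack_size;       /* a) per-worker stack size, in bytes    */
    int    thread_prio;      /* a) scheduling priority for workers    */
    int    max_queues;       /* b) task queues allowed in the pool    */
    int    max_tasks;        /* c) pending tasks, pool-wide           */
    int    min_idle_threads; /* d) keep this many workers parked so a
                                   burst of requests starts quickly   */
};

/* Illustrative defaults; a real implementation would tune these. */
static struct task_pool_attr task_pool_attr_default(void)
{
    struct task_pool_attr a = {
        .max_threads     = 16,
        .stack_size      = 64 * 1024,
        .thread_prio     = 0,
        .max_queues      = 64,
        .max_tasks       = 1024,
        .min_idle_threads = 1,
    };
    return a;
}
```

Keeping all the limits in one place also makes the statistics side easy:
the pool can report its current counts against these same fields.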

So, quite a lot of work to build a good solution ...

Thanks,
Wei Shen
