[Bug 1848] New: Add driver and file system sync support.

bugzilla-daemon at rtems.org
Sat Jul 23 02:11:49 UTC 2011


https://www.rtems.org/bugzilla/show_bug.cgi?id=1848

           Summary: Add driver and file system sync support.
           Product: RTEMS
           Version: HEAD
          Platform: All
        OS/Version: RTEMS
            Status: NEW
          Severity: enhancement
          Priority: P3
         Component: filesystem
        AssignedTo: chrisj at rtems.org
        ReportedBy: chrisj at rtems.org


Currently there is no API in RTEMS to sync a libblock device given the dev_t
details obtained via a stat of the /dev entry. This PR documents a solution
that allows a user to issue a sync call on a device and have the files open on
that device, the file system, and libblock flushed. The RFS is given as an
example file system.
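
A minimal sketch of how the proposed call might look from user code. The
entry point rtems_device_sync() is a placeholder name for whatever interface
this PR would add; only stat() and the dev_t extraction below are existing,
standard calls.

  #include <sys/types.h>
  #include <sys/stat.h>
  #include <stdio.h>

  /* Proposed entry point; it does not exist in RTEMS today. */
  extern int rtems_device_sync (dev_t dev);

  int
  sync_device (const char* dev_path)
  {
    struct stat sb;

    /* Obtain the dev_t details from the /dev entry. */
    if (stat (dev_path, &sb) < 0)
    {
      perror ("stat");
      return -1;
    }

    /*
     * Proposed call: flush the files open on the device, the file
     * system's meta-data and the libblock buffers, then hand the data
     * to the driver.
     */
    return rtems_device_sync (sb.st_rdev);
  }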

The RFS has a complex layering of meta-data management to help it achieve the
performance it does. The flow of data is:

user------------------[sync]---------------------+    APP
-|-----------------------------------------------|----
 +-> <fd>                                        |    LIBCSUPPORT
       |                                         |
<sync>-|---------+                               |
  |    |         |                               |
  |    +->[iop]<-+                               |
--|--------|-------------------------------------|----
  |        +>[handle]                            |    RFS
  |           |                                  |
  |           +>[shared]                         |
  |              |                               |
  |              +>[inode]                       |
<sync>              |                            |
  |                 +->[buffer handle]           |
  |                     |                        |
  |                     +->[recent list]<-+      |
--|------------------------|--------------|------|----
  |                        |              |      |    LIBBLOCK
  |                        +->[AVL tree]<-+--+   |
  |                            |             |   |
  |                            +->[swap out]-+   |
--|--------------------------------|-------------|----
  |                                |             |    DRIVER
  |                                +->[driver]<--+
  |                                    |
  +------------------------------------+

When a buffer is released it is held in the recent list in the buffering layer
of the RFS until pressure releases it to libblock. Libblock then holds the
buffer for a configured period of time before passing it to the swap out
thread, which passes it to the driver. If the buffer is requested again before
it reaches the driver it is handed back to the file system. I do not think the
timer is reset, so the next time the file system releases the buffer it should
take a shorter time to be queued onto a swap out worker thread. Constant use of
a buffer therefore keeps it away from the driver, as the user takes priority.
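
For reference, a minimal sketch of that buffer path at the libblock level,
assuming the dev_t based rtems_bdbuf signatures of the 4.10 era (HEAD may
differ). The point is that releasing a modified buffer does not write it
immediately; it sits in the recent list and then in libblock until the hold
timer expires and the swap out thread passes it to the driver.

  #include <sys/types.h>
  #include <string.h>
  #include <rtems.h>
  #include <rtems/bdbuf.h>

  rtems_status_code
  update_block (dev_t dev, rtems_blkdev_bnum block,
                const void* data, size_t size)
  {
    rtems_bdbuf_buffer* bd;
    rtems_status_code   sc;

    /*
     * Get the buffer; if it is still cached it comes back from the
     * recent list or the AVL tree rather than the driver.
     */
    sc = rtems_bdbuf_read (dev, block, &bd);
    if (sc != RTEMS_SUCCESSFUL)
      return sc;

    memcpy (bd->buffer, data, size);

    /*
     * Mark the buffer modified and release it; the write to the driver
     * happens later via the swap out thread unless an explicit sync is
     * requested.
     */
    return rtems_bdbuf_release_modified (bd);
  }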

The RFS maintains the directory inode data in the shared structure until you
close the file. The block bit maps that define the file's mapping on the disk
are not held by the RFS, so they should be fine. They are also slow moving
unless you are writing huge amounts of data, and in that case you tend to fill
the bit map quickly.

The conflicting requirements are the need to keep performance high for the
case of many small writes to a file and the ability to snapshot the data.

A rewrite of libcsupport for files would provide reference counting on the iop
type data as well as locking. I have drawn a user sync call to a device in the
diagram above. The driver would call back into the file system for that device,
and the file system would call flush in the libcsupport layer for each file it
has open, flush internally, then sync libblock. The file system would have to
hook the driver directly; I think this can be done in the generic layer of the
driver.
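
A hypothetical sketch of that flow follows. The structure and callback names
are placeholders for the proposed libcsupport and file system hooks, and
rtems_bdbuf_syncdev() is assumed to take a dev_t as in the 4.10 era API.

  #include <sys/types.h>
  #include <rtems.h>
  #include <rtems/bdbuf.h>

  typedef struct proposed_fs_device proposed_fs_device;

  struct proposed_fs_device
  {
    dev_t dev;
    /* Proposed hooks the file system would register with the generic
       driver layer; neither exists today. */
    int (*flush_open_files) (proposed_fs_device* fsd); /* libcsupport flush per iop */
    int (*flush_meta_data)  (proposed_fs_device* fsd); /* e.g. RFS inode and bit map state */
  };

  int
  proposed_device_sync (proposed_fs_device* fsd)
  {
    /* 1. Flush every file open on the device via libcsupport. */
    if (fsd->flush_open_files (fsd) < 0)
      return -1;
    /* 2. Flush the file system's internal meta-data. */
    if (fsd->flush_meta_data (fsd) < 0)
      return -1;
    /* 3. Sync libblock so the swap out thread passes the buffers to
          the driver. */
    return rtems_bdbuf_syncdev (fsd->dev) == RTEMS_SUCCESSFUL ? 0 : -1;
  }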

Another part of the solution is to enable a snapshot feature in libblock for
file systems. Here the file system is asked to snapshot its meta-data state.
The RFS would need to add the ability to snapshot the inode data and commit it
to disk, as well as flush the recent queue. This is not a big change. We would
then need to change the swap out thread in libblock to call the registered
file systems at a configurable rate so each can snapshot its state.
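
A hypothetical registration interface for such a snapshot feature might look
like the following; none of these names exist in libblock today and they are
only meant to show the shape of the proposal.

  #include <sys/types.h>
  #include <rtems.h>

  typedef struct
  {
    dev_t          dev;
    rtems_interval period;      /* how often swap out asks for a snapshot */
    /*
     * Called from the swap out thread; the file system commits its
     * meta-data (for the RFS: the inode data and the recent queue) so
     * the on-disk image is consistent at that point.
     */
    int (*snapshot) (dev_t dev, void* fs_context);
    void*          fs_context;
  } proposed_bdbuf_snapshot;

  /* Proposed libblock entry points (placeholders). */
  int proposed_bdbuf_snapshot_register (const proposed_bdbuf_snapshot* snap);
  int proposed_bdbuf_snapshot_unregister (dev_t dev);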
