GSOC Replies

Joel Sherrill joel.sherrill at OARcorp.com
Mon Mar 24 18:02:44 UTC 2008


Wei Shen wrote:
> Hi Joel, Ray, and all,
>  
> I know you are rather busy to reply all the mails, but if it does not 
> bother too much, can I request for some comments on my previous 
> investigation on FIFOFS (see attachment).
>
> Thanks,
> Wei Shen
>  
> On 3/22/08, *Wei Shen* <cquark at gmail.com <mailto:cquark at gmail.com>> wrote:
>  
>
>     On 3/21/08, *Joel Sherrill* <joel.sherrill at oarcorp.com 
>     <mailto:joel.sherrill at oarcorp.com>> wrote:
>
>
>     I am torn between a dedicated filesystem and adding a filesystem
>     to the IMFS.  The dedicated filesystem makes sense and is cleaner.
>     If you don't want this capability, don't mount it.
>
>     Make sure about the pipe() and mkfifo() calls.  Careful reading of the
>     Open Group standard may force a decision.  I hope we can get away
>     with the dedicated filesystem.  It sounds cleaner and easier to
>     drop out.
>
>  
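To be concrete about what the standard asks of those two calls, here is a
minimal host-side POSIX sketch (plain portable calls, nothing RTEMS-specific;
the function names are mine, and the O_RDWR open of a FIFO is unspecified by
POSIX and used only to keep the demo single-threaded):

```c
#include <fcntl.h>
#include <string.h>
#include <sys/stat.h>
#include <unistd.h>

/* pipe(): two descriptors, no name in any filesystem. */
int demo_pipe(void)
{
    int fds[2];
    char buf[8] = {0};

    if (pipe(fds) != 0)
        return -1;
    if (write(fds[1], "hi", 2) != 2)
        return -1;
    if (read(fds[0], buf, sizeof(buf)) != 2)
        return -1;
    close(fds[0]);
    close(fds[1]);
    return strcmp(buf, "hi") == 0 ? 0 : -1;
}

/* mkfifo(): creates a node that open() must resolve by path, which is
 * what drags the host filesystem into the FIFOFS design question. */
int demo_mkfifo(const char *path)
{
    char buf[8] = {0};

    if (mkfifo(path, 0600) != 0)
        return -1;
    /* O_RDWR on a FIFO is unspecified by POSIX; here it merely keeps
     * this single-threaded demo from blocking in open(). */
    int fd = open(path, O_RDWR);
    if (fd < 0)
        return -1;
    if (write(fd, "ok", 2) != 2 || read(fd, buf, sizeof(buf)) != 2)
        return -1;
    close(fd);
    unlink(path);
    return strcmp(buf, "ok") == 0 ? 0 : -1;
}
```

The asymmetry is the whole issue: pipe() never touches the name space,
while mkfifo() forces some filesystem to carry a node for the FIFO.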
> Today, I studied the RTEMS FS and IMFS implementation. It seems hard to 
> implement FIFO as a virtual FS, due to the lack of a complete VFS 
> layer in RTEMS, especially a VFS inode abstraction.
>  
> If we want to make FIFOFS entirely independent of the host FS (e.g. 
> IMFS) on which a FIFO file resides, we would have to:
> 1) define a new data structure for FIFO file nodes, which duplicates many 
> fields (atime, uid, ...) of the host FS structures (e.g. jnode 
> in IMFS).
> This is required because otherwise the file handlers of FIFOFS cannot 
> understand the node_access structure passed to them from the FS interface.
>  
> 2) reimplement a full set of FS handlers.
> Since the file-node structure is changed, no handler of the host FS can 
> be used any more.
>  
> 3) hook the host fs (in many places) to add special handling for FIFO 
> files.
>  
> An OS with a VFS inode abstraction need not do any of this, because 
> FIFOFS can then share the common inode structure and many FS 
> handlers (i.e. inode operations) with host FSes, and the VFS layer is 
> responsible for maintaining inode attributes. Usually, one only needs to 
> hook a few FS routines like mknod and open.
>  
If you implemented this as a special file system type, do you think it 
is feasible as a GSOC project to extract the common inode processing 
code into shared code?
I don't know if this makes it a real VFS structure or just a shared set 
of methods that makes filesystem implementation easier.
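To illustrate what I mean by a shared set of methods, here is a rough
sketch; every name in it is invented for illustration and none of them
exist in the RTEMS tree:

```c
#include <stddef.h>
#include <time.h>

/* Hypothetical "common inode + per-type handler table" layout. */
struct vnode;

struct vnode_ops {                  /* per-file-type handlers            */
    int (*open)(struct vnode *);
    int (*read)(struct vnode *, char *, size_t);
};

struct vnode {
    time_t atime;                   /* common attributes live here once, */
    unsigned uid;                   /* not duplicated per filesystem     */
    const struct vnode_ops *ops;    /* the only type-specific slot       */
    void *fs_private;               /* e.g. a FIFO's ring buffer         */
};

/* Shared code owns the attribute bookkeeping ... */
int vfs_open(struct vnode *v)
{
    v->atime = time(NULL);
    return v->ops->open(v);
}

/* ... so FIFOFS implements only the handlers it actually cares about. */
int fifo_open(struct vnode *v) { (void)v; return 0; }
int fifo_read(struct vnode *v, char *b, size_t n)
{
    (void)v; (void)b; (void)n;
    return 0;                       /* a real version would drain a buffer */
}

const struct vnode_ops fifo_ops = { fifo_open, fifo_read };
```

Whether that counts as a real VFS or just a shared method table is mostly
a question of how much bookkeeping moves into the shared layer.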
>  
> An alternative implementation is to design a FIFO interface to the FS and 
> implement the required file handlers (open, write, rmnod, ...). 
> FSes should provide a set of handlers specific to FIFO files (as IMFS 
> currently does for device files) that invoke the FIFO interface 
> and maintain file-node (e.g. jnode) state. Those FIFO handlers 
> that do not alter file-node state can be plugged directly into the 
> file handler table of the host FSes.
>  
> This way, FIFO support can still be encapsulated in an independent module, 
> and only requires a little modification to FS code.
>  
This would be easy and could be a first step toward refactoring/extracting 
the common inode processing.
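A rough picture of the split Wei describes, with invented names (the real
jnode and IMFS handler tables look different):

```c
#include <stddef.h>
#include <string.h>

/* --- FIFO core: knows nothing about the host filesystem. --- */
struct pipe_cb {                    /* pipe control block */
    char buf[64];
    size_t len;
};

size_t pipe_write(struct pipe_cb *p, const char *src, size_t n)
{
    size_t room = sizeof(p->buf) - p->len;
    if (n > room) n = room;
    memcpy(p->buf + p->len, src, n);
    p->len += n;
    return n;
}

size_t pipe_read(struct pipe_cb *p, char *dst, size_t n)
{
    if (n > p->len) n = p->len;
    memcpy(dst, p->buf, n);
    memmove(p->buf, p->buf + n, p->len - n);
    p->len -= n;
    return n;
}

/* --- Host-FS side: its FIFO handler updates node state, then
 * delegates to the FIFO core, the way IMFS device handlers delegate
 * to the device driver layer. --- */
struct host_node {                  /* stand-in for an IMFS jnode */
    size_t size;
    struct pipe_cb *pipe;
};

size_t hostfs_fifo_write(struct host_node *n, const char *src, size_t len)
{
    size_t done = pipe_write(n->pipe, src, len);
    n->size += done;                /* only the wrapper touches node state */
    return done;
}
```

Handlers like pipe_read that never touch the jnode could go straight into
the host FS handler table; only the state-changing ones need a wrapper.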
> The open project page suggests: "For unnamed pipes, we could create a 
> named FIFO using a temporary name. ... place them possibly in a 
> special directory". I originally thought this was unnecessary; however, 
> due again to the lack of a VFS inode structure, it seems to become a 
> problem, for we have to create a jnode (in the case of IMFS) to 
> preserve file state, and maybe the easiest way to achieve that is 
> creating an IMFS file.
>  
I agree with you, and after some thought, I think it might be worth pursuing
a FIFOFS ONLY if you undertake to extract IMFS code and arrange to share it.
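For what it's worth, the temporary-name trick from the project page can be
prototyped in plain POSIX; the directory, naming scheme, and function name
below are invented placeholders, not anything the project page specifies:

```c
#include <fcntl.h>
#include <stdio.h>
#include <sys/stat.h>
#include <unistd.h>

/* Sketch of pipe() built on mkfifo(): create a FIFO under a reserved
 * directory with a generated name, open both ends, then unlink the
 * name, which was only a bootstrap handle. */
int pipe_via_fifo(int fds[2])
{
    static unsigned seq;
    char path[64];

    snprintf(path, sizeof(path), "/tmp/.pipe.%ld.%u",
             (long)getpid(), seq++);
    if (mkfifo(path, 0600) != 0)
        return -1;
    /* Open the read end first with O_NONBLOCK so we do not deadlock
     * waiting for a writer; POSIX says O_RDONLY|O_NONBLOCK returns
     * without delay. The later O_WRONLY open then finds a reader. */
    fds[0] = open(path, O_RDONLY | O_NONBLOCK);
    fds[1] = open(path, O_WRONLY);
    unlink(path);
    return (fds[0] >= 0 && fds[1] >= 0) ? 0 : -1;
}
```

In RTEMS the "reserved directory" would presumably live in whatever FS
hosts the FIFO nodes, which is exactly why the jnode question matters.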

--joel
> Comments are always welcome.
>  
> Thanks,
> Wei Shen


-- 
Joel Sherrill, Ph.D.             Director of Research & Development
joel.sherrill at OARcorp.com        On-Line Applications Research
Ask me about RTEMS: a free RTOS  Huntsville AL 35805
   Support Available             (256) 722-9985




