IMFS on ram-disk

Joel Sherrill joel at rtems.org
Tue Jul 24 14:46:22 UTC 2018


On Tue, Jul 24, 2018 at 9:24 AM, Udit agarwal <dev.madaari at gmail.com> wrote:

>
>
> On Jul 24, 2018 7:45 PM, "Gedare Bloom" <gedare at rtems.org> wrote:
>
> On Tue, Jul 24, 2018 at 10:12 AM, Gedare Bloom <gedare at rtems.org> wrote:
> > On Tue, Jul 24, 2018 at 4:48 AM, Udit agarwal <dev.madaari at gmail.com>
> wrote:
> >> Hi all,
> >> I've been trying to use IMFS with a ramdisk on BBB. Here's what i did:
> >>
> >> After creating a RAM disk (block size: 512, block count: 262144), I used
> >> mount_and_make_target_path("/dev/rda", "/mnt", RTEMS_FILESYSTEM_TYPE_IMFS, RTEMS_FILESYSTEM_READ_WRITE, NULL)
> >>
> >> There is no error during bootup, but when allocating a file (in the /mnt
> >> directory) of size 10 MB (or even 1 MB) it gives a "file too large" error.
> >> I've used a similar config for RFS too, and that worked, so there's
> >> probably no issue while setting up the RAM disk.
> >>
> >> I've also tried the testsuite/fstests/imfs_support setup, but a similar
> >> method doesn't seem to work with RAM disks.
> >
> > What did you get back from mount_and_make_target_path()? Did it
> > succeed? I'm pretty sure this method should not be used for imfs, see
> > fstests/imfs_support/fs_support.c
> >
>
> Oh, I take that back. Sebastian's comment is accurate. You can
> probably make your approach work by replacing "/dev/rda" with NULL, I
> would guess. The IMFS doesn't need a source, since it is backed by the
> heap.
>
> Yes, Sebastian's approach worked. I'm able to generate some benchmarking
> stats on IMFS too.
>
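For the archives, the working call from the thread above looks roughly like
this (a sketch, not tested on the BBB; the NULL source argument is the key
difference from the block-device case):

```c
#include <assert.h>
#include <rtems/libio.h>

/* Mount a second IMFS instance at /mnt.  The IMFS is backed by the
 * heap, so it takes no source device: pass NULL where the ramdisk
 * path "/dev/rda" would go for a block-device-backed filesystem. */
static void mount_imfs_at_mnt(void)
{
  int rv = mount_and_make_target_path(
    NULL,                          /* no backing device for the IMFS */
    "/mnt",
    RTEMS_FILESYSTEM_TYPE_IMFS,
    RTEMS_FILESYSTEM_READ_WRITE,
    NULL                           /* no filesystem-specific data */
  );
  assert(rv == 0);
}
```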

I would expect the tuning parameter CONFIGURE_IMFS_MEMFILE_BYTES_PER_BLOCK
to have a significant impact on the performance of the IMFS.  The default
size is 128.  There is a block comment in cpukit/libfs/src/imfs/imfs.h which
shows the possible values.  Since the file blocks are malloc'ed, the default
block size is small-ish to avoid wasting memory for small files.
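A minimal way to change it, assuming the usual confdefs.h application
configuration scheme (the driver and task settings here are placeholders,
not from the thread):

```c
/* In the application's configuration (e.g. init.c), before the
 * <rtems/confdefs.h> include.  512 is the largest value currently
 * honored; see the note below about invalid values being silently
 * replaced by the 128-byte default. */
#define CONFIGURE_IMFS_MEMFILE_BYTES_PER_BLOCK 512

#define CONFIGURE_APPLICATION_NEEDS_CONSOLE_DRIVER
#define CONFIGURE_APPLICATION_NEEDS_CLOCK_DRIVER
#define CONFIGURE_RTEMS_INIT_TASKS_TABLE
#define CONFIGURE_MAXIMUM_TASKS 4
#define CONFIGURE_INIT
#include <rtems/confdefs.h>
```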

The IMFS uses an inode structure based on that of the original UNIX
filesystem.  So as the block size goes up, the maximum file size grows,
because each indirect block can hold more and more block pointers.

 *  The data structure for the in-memory "memfiles" is based on classic UNIX.
 *
 *  block_ptr is a pointer to a block of IMFS_MEMFILE_BYTES_PER_BLOCK in
 *  length which could be data or a table of pointers to blocks.
 *
 *  Setting IMFS_MEMFILE_BYTES_PER_BLOCK to different values has a significant
 *  impact on the maximum file size supported as well as the amount of
 *  memory wasted due to internal file fragmentation.  The following
 *  is a list of maximum file sizes based on various settings
 *
 *  @code
 *    max_filesize with blocks of   16 is         1,328
 *    max_filesize with blocks of   32 is        18,656
 *    max_filesize with blocks of   64 is       279,488
 *    max_filesize with blocks of  128 is     4,329,344
 *    max_filesize with blocks of  256 is    68,173,568
 *    max_filesize with blocks of  512 is 1,082,195,456
 *  @endcode


If a system has lots of memory and wants to store larger files with lower
malloc overhead, then the default size should be increased.

NOTE: The IMFS silently ignores requested values which are not a power of 2,
are < 16, or are > 512.  See IMFS_determine_bytes_per_block() and its use in
imfs_initsupp.c.

For the purposes of benchmarking, and since the 512-byte upper
block size (plus the implied maximum file size) was chosen so long
ago that a 1 GB RAM file seemed impossible, it probably makes sense
to let people configure up to 4K or 8K as the IMFS bytes per block.

And we should consider whether misconfiguring the bytes per block
in the IMFS should result in silent defaulting or a fatal error. I know
I was surprised the last time I ran into this.


>
> >> --
> >> Regards,
> >> Udit kumar agarwal
> >> http://uditagarwal.in/
> >>
> >> _______________________________________________
> >> users mailing list
> >> users at rtems.org
> >> http://lists.rtems.org/mailman/listinfo/users
>
>
>