IMFS on ram-disk
Gedare Bloom
gedare at rtems.org
Tue Jul 24 16:34:44 UTC 2018
On Tue, Jul 24, 2018 at 10:46 AM, Joel Sherrill <joel at rtems.org> wrote:
>
>
> On Tue, Jul 24, 2018 at 9:24 AM, Udit agarwal <dev.madaari at gmail.com> wrote:
>>
>>
>>
>> On Jul 24, 2018 7:45 PM, "Gedare Bloom" <gedare at rtems.org> wrote:
>>
>> On Tue, Jul 24, 2018 at 10:12 AM, Gedare Bloom <gedare at rtems.org> wrote:
>> > On Tue, Jul 24, 2018 at 4:48 AM, Udit agarwal <dev.madaari at gmail.com>
>> > wrote:
>> >> Hi all,
>> >> I've been trying to use IMFS with a ramdisk on BBB. Here's what I did:
>> >>
>> >> After creating a RAM disk (block size: 512, block count: 262144), I used
>> >>
>> >> mount_and_make_target_path("/dev/rda","/mnt",RTEMS_FILESYSTEM_TYPE_IMFS,RTEMS_FILESYSTEM_READ_WRITE,NULL)
>> >>
>> >> There is no error during bootup, but when allocating a file (in the
>> >> /mnt directory) of size 10 MB (or even 1 MB) it gives a file-too-large
>> >> error.
>> >> I've used a similar config for RFS too and that did work, so probably
>> >> there's no issue with the RAM disk setup.
>> >>
>> >> I've also tried the testsuite/fstests/imfs_support setup, but it looks
>> >> like a similar method doesn't work with ram disks
>> >
>> > What did you get back from mount_and_make_target_path()? Did it
>> > succeed? I'm pretty sure this method should not be used for imfs, see
>> > fstests/imfs_support/fs_support.c
>> >
>>
>> Oh, I take that back. Sebastian's comment is accurate. You can
>> probably make your approach work by replacing "/dev/rda" with NULL, I
>> would guess. The IMFS doesn't need a source, since it is backed by the
>> heap.
>>
>> Yes, Sebastian's approach worked. I'm able to generate some benchmarking
>> stats on IMFS too.
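[For concreteness, a minimal sketch of the NULL-source mount discussed above. This is RTEMS-specific and assumes the usual mount_and_make_target_path() signature; error handling is reduced to an assert for brevity.]

```c
/* Sketch: mount an IMFS instance at /mnt with no backing device.
 * The source argument is NULL because the IMFS allocates its file
 * blocks from the heap rather than a block device. */
#include <assert.h>
#include <rtems/libio.h>

static void mount_imfs_example(void)
{
  int rv = mount_and_make_target_path(
    NULL,                          /* no source device needed */
    "/mnt",
    RTEMS_FILESYSTEM_TYPE_IMFS,
    RTEMS_FILESYSTEM_READ_WRITE,
    NULL                           /* no filesystem-specific data */
  );
  assert(rv == 0);
}
```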
>
>
> I would expect the tuning parameter CONFIGURE_IMFS_MEMFILE_BYTES_PER_BLOCK
> to have a significant impact on the performance of the IMFS. The default
> size is 128 bytes.
> There is a block comment in cpukit/libfs/src/imfs/imfs.h which shows the
> possible values.
> Since the file blocks are malloc'ed, the blocks default small-ish to avoid
> wasting memory for small files.
>
> The IMFS uses an inode structure based on that of the original UNIX
> filesystem.
> So as the block size goes up, you can have more and more blocks in the
> maximum file size.
>
> * The data structure for the in-memory "memfiles" is based on classic UNIX.
> *
> * block_ptr is a pointer to a block of IMFS_MEMFILE_BYTES_PER_BLOCK in
> * length which could be data or a table of pointers to blocks.
> *
> * Setting IMFS_MEMFILE_BYTES_PER_BLOCK to different values has a significant
> * impact on the maximum file size supported as well as the amount of
> * memory wasted due to internal file fragmentation. The following
> * is a list of maximum file sizes based on various settings
> *
> * @code
> * max_filesize with blocks of 16 is 1,328
> * max_filesize with blocks of 32 is 18,656
> * max_filesize with blocks of 64 is 279,488
> * max_filesize with blocks of 128 is 4,329,344
> * max_filesize with blocks of 256 is 68,173,568
> * max_filesize with blocks of 512 is 1,082,195,456
>
>
> If a system has lots of memory and wants to store larger files with lower
> malloc overhead, then the default size should be increased.
>
> NOTE: The IMFS silently ignores requests which are not a power of 2,
> or are less than 16 or greater than 512. See IMFS_determine_bytes_per_block()
> and its use in imfs_initsupp.c.
>
> For the purposes of benchmarking, and since the 512-byte upper
> block size (plus the implied maximum file size) was determined so long
> ago that a 1 GB RAM file seemed impossible, it probably makes sense
> to let people configure up to 4K or 8K as the IMFS bytes per block.
>
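[As a concrete illustration of the tuning Joel describes, the macro would go in the application's system configuration before including confdefs.h. A hedged sketch; only the macro name itself is taken from the discussion above.]

```c
/* Application configuration sketch: request 512-byte IMFS memfile
 * blocks instead of the 128-byte default.  As noted above, values
 * that are not a power of 2, or fall outside 16..512, are silently
 * replaced with the default. */
#define CONFIGURE_IMFS_MEMFILE_BYTES_PER_BLOCK 512

/* ... other CONFIGURE_* options ... */
#include <rtems/confdefs.h>
```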
A 1 GiB RAM file is still not that practical in an IMFS.
What would be the maximum file sizes if we did increase it? From the
above, the trend appears to be an increase by a factor of 2^4 each
time, so I'd guess the max file size with blocks of 1024 would be
about 16 GB, 2048 about 256 GB, and 4096 about 4 TB.
There is for sure no practical reason to consider an IMFS to support
file sizes larger than 16 GB for the near future...
> And we should consider whether mis-configuring the bytes per block
> in the IMFS should result in a silent defaulting or a fatal error. I know
> I was surprised the last time I ran into this.
>
Probably.
>>
>>
>> >> --
>> >> Regards,
>> >> Udit kumar agarwal
>> >> http://uditagarwal.in/
>> >>
>> >> _______________________________________________
>> >> users mailing list
>> >> users at rtems.org
>> >> http://lists.rtems.org/mailman/listinfo/users
>>
>>
>>
>
>