libblock and FAT

Chris Johns chrisj at rtems.org
Mon Aug 26 01:33:52 UTC 2013


Claus, Ric wrote:
> I've been writing an SDHCI/MMC driver for use with the Xilinx ZYNQ.  Specifically, it is for use with SD cards and I am using 4 GB cards from Samsung for debugging and testing.  These advertise a read rate ability of up to 24 MB/s.  The average rate that I'm seeing is nowhere near that and I've traced the cause down to several points:
>
> 1) The ZYNQ's SD/SDIO host controller takes on the order of 650 us to respond with DATA_AVAILABLE after being sent a read command.  The cause of this may, of course, be the SD card and not the host controller.  However, the same card can be read and written by a Mac much quicker than I can achieve with the ZYNQ and RTEMS.

Read speed and latency are different things. There are also speed 
settings related to the card that may need to be configured in the 
driver. By default cards start in a slow mode and the driver can 
speed up access once the type of card is known.

> 2) I gather that the block size given to rtems_disk_create_phys() should be 512, which is the same as that of the card.  This appears to cause all read and write ioctl sg requests to have a length of 512, thus preventing a device driver from being able to take advantage of a device's ability to handle larger amounts of contiguous data in one go.  In the SD card's case, I'm referring to the read- and write-multiple-block commands.

The media block size is 512 bytes and it is a setting that does not 
change for this media. Multi-block operations are a function of the 
file system and possibly the cache set-up (I cannot remember). For 
example, the block size at the file system level may not be the same 
as the media block size, and the file system should be issuing 
requests using its own block size. What is the file system block 
size?
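
If it helps, a quick way to see what the mounted file system reports 
is statvfs() on the mount point. A minimal sketch; the /mnt/sd path 
is just the one from your 'cp' example:

  #include <stdio.h>
  #include <sys/statvfs.h>

  /*
   * Print the block sizes the mounted file system reports. The mount
   * point is taken from the 'cp' example earlier in this thread.
   */
  static void print_fs_block_size(void)
  {
    struct statvfs sb;

    if (statvfs("/mnt/sd", &sb) == 0) {
      printf("f_bsize  = %lu\n", (unsigned long) sb.f_bsize);
      printf("f_frsize = %lu\n", (unsigned long) sb.f_frsize);
    } else {
      perror("statvfs");
    }
  }

Comparing f_bsize with the 512 byte media block size will tell you 
whether the file system is already working in larger units.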

> 3) A read request generated with the shell command 'cp /mnt/sd/<file> ./t.tmp', with <file> being less than 512 bytes long, generates at least 3 reads of the FAT block and the directory (64 blocks in my case).  I have seen the number of these reads as high as 7, but generally it seems to be 3 or 4.

The cache set-up affects what happens here.

>
> Since I haven't found much documentation describing how libblock and libfs should be used, I've been using nvdisk and others in the libblock directory as examples, along with libchip/i2c/spi-sd-card.c.  If I've missed some reading material, could someone please point it out?
>

Does this help?

http://www.rtems.org/onlinedocs/doxygen/cpukit/html/group__rtems__bdbuf.html

> It looks to me to be a difficult job to figure out how libblock and libfs/FAT work.  Could someone please give me some guidance on how to approach this?

There are a few settings in confdefs.h with some documentation.

> I would like to understand why the FAT and directory need to be read multiple times, and why, if it does, the buffering feature of libblock doesn't seem to be taken advantage of.  Is there perhaps a cache buffer size that's not configured properly somewhere?

Please take a look at confdefs.h.

http://git.rtems.org/rtems/tree/cpukit/sapi/include/confdefs.h#n1234
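
For example, the bdbuf cache is sized from the application 
configuration. A sketch, assuming the CONFIGURE_BDBUF_* options 
documented in confdefs.h; these sit next to your other CONFIGURE_* 
settings before <rtems/confdefs.h> is included (check the exact names 
and defaults against the version you are building):

  /* The application uses libblock. */
  #define CONFIGURE_APPLICATION_NEEDS_LIBBLOCK

  /* Total memory given to the block device buffer cache. */
  #define CONFIGURE_BDBUF_CACHE_MEMORY_SIZE   (128 * 1024)

  /* Smallest and largest buffer sizes the cache will hand out. */
  #define CONFIGURE_BDBUF_BUFFER_MIN_SIZE     512
  #define CONFIGURE_BDBUF_BUFFER_MAX_SIZE     (32 * 1024)

  #include <rtems/confdefs.h>

If I remember correctly, a buffer size larger than the media block 
size is what lets the cache hand the driver requests that span more 
than one media block.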

>
> Since the directory is made up of 64 contiguous blocks, it would be more efficient to read it with one read command.  What can be done to get that to happen?
>

We always welcome performance improvements.

What if you only need to read the first block to find what you want? 
Reading more can put pressure on the cache elsewhere.

How do you make this general, i.e. with a very large directory?

Read-ahead can be configured ...

http://git.rtems.org/rtems/tree/cpukit/libblock/include/rtems/bdbuf.h#n392
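
The read-ahead settings are also available from confdefs.h. A sketch, 
assuming the read-ahead options that go with the code linked above 
(again, please confirm the names against the confdefs.h in your 
tree):

  /* Number of blocks the cache may fetch beyond the one requested
     when it detects sequential access. */
  #define CONFIGURE_BDBUF_MAX_READ_AHEAD_BLOCKS     32

  /* Priority of the task that performs the read-ahead transfers. */
  #define CONFIGURE_BDBUF_READ_AHEAD_TASK_PRIORITY  15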

> A number of block device drivers divide the device's block size into the sg length (as length / blocksize rather than (length + blocksize - 1) / blocksize, surprisingly), suggesting that the result could be larger than one.  Why perform this calculation if that's never the case?

I am not sure about the specifics of what you raise. All I can add 
is that dosfs is not the only file system in RTEMS.
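
For reference, the two expressions in your question are just 
truncating versus rounding-up division of the transfer length into 
blocks. A minimal illustration (hypothetical helper names):

  #include <stdint.h>

  /* Truncating: a partial block at the end is dropped. */
  static uint32_t blocks_truncated(uint32_t length, uint32_t block_size)
  {
    return length / block_size;
  }

  /* Rounding up: a partial block still counts as one whole block. */
  static uint32_t blocks_rounded_up(uint32_t length, uint32_t block_size)
  {
    return (length + block_size - 1) / block_size;
  }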

Chris


