RFS data loss for open files and ways to work around it
Chris Johns
chrisj at rtems.org
Wed May 11 00:01:07 UTC 2016
On 09/05/2016 15:32, Mohammed Saeed Khoory wrote:
> Hi,
>
> I'm using RFS on a RAMDisk for a LEON3-based system. My program tends to keep files open while writing to them periodically. I've noticed that if the system is reset, the files that were open lose all of the data written since they were last opened. The files still exist in the filesystem, just with a size of 0. Interestingly, however, the blocks used by those writes are not reclaimed, and that free space is gone; the only way to get it back, from what I can tell, is to reformat. An fsck-style tool would probably work too, but one doesn't exist.
>
> The way I work around this is that instead of keeping files open, I open the file, write a little, then close instantly. The changes are then preserved in the filesystem if a reset occurs. However, this method is rather awkward, and I was wondering if there's a better way of doing it. I've tried using fflush() and sync(), but that didn't help. Is there a proper way to "synchronize" the writes to the filesystem?
>
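The open-write-close workaround described above can be sketched in portable POSIX C. This is a minimal illustration, not RTEMS-specific code; the function name `append_record` and the file path are hypothetical:

```c
/* Sketch of the workaround: each periodic update opens the file,
 * appends one record, and closes immediately so the data reaches the
 * filesystem before any reset can occur. */
#include <fcntl.h>
#include <string.h>
#include <unistd.h>

static int append_record(const char *path, const char *record)
{
    int fd = open(path, O_WRONLY | O_CREAT | O_APPEND, 0644);
    if (fd < 0)
        return -1;

    ssize_t len = (ssize_t) strlen(record);
    ssize_t n = write(fd, record, (size_t) len);

    /* Closing straight away is what pushes the update out; a file
     * held open would lose this data on reset, as described above. */
    if (close(fd) != 0 || n != len)
        return -1;
    return 0;
}
```

A caller would invoke `append_record("/ram/log.txt", "sample\n")` on each periodic update instead of keeping the file open.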
I seem to remember that duplicating the fd with a dup2() call (I think)
and then closing the new fd should help; this flushes out the libc
parts. The RFS does not hold any unwritten data itself, it is pushed to
the block cache. You will still need to sync the block cache layer. See
the 'blksync' command in the shell for code showing how to do this.
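The flushing chain Chris describes can be sketched in portable C for a file that stays open across writes: fflush() empties the stdio (libc) buffer and fsync() pushes the data into the filesystem. The remaining RTEMS-specific step, syncing the block cache to the RAM disk, is shown only as a comment since it needs RTEMS headers, and the device name there is an assumption:

```c
/* Hedged sketch: persist an update to a file kept open across writes.
 * On RTEMS the block cache must additionally be flushed to the disk;
 * the 'blksync' shell command does that with roughly:
 *     int dd = open("/dev/rda", O_WRONLY);   -- device name assumed
 *     ioctl(dd, RTEMS_BLKIO_SYNCDEV);        -- needs <rtems/blkdev.h>
 *     close(dd);
 */
#include <stdio.h>
#include <unistd.h>

static int write_and_sync(FILE *fp, const char *record)
{
    if (fputs(record, fp) == EOF)
        return -1;
    if (fflush(fp) != 0)            /* libc buffer -> filesystem */
        return -1;
    if (fsync(fileno(fp)) != 0)     /* filesystem -> storage
                                       (block cache layer on RTEMS) */
        return -1;
    return 0;
}
```

After write_and_sync() returns, the data is visible to other readers of the file even before it is closed; whether it survives a reset on a given RTEMS block device setup still depends on the cache sync step noted in the comment.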
As Sebastian points out, there is a limit to what can be achieved
because the RFS does not have a journal. How this affects your system is
something you will need to determine.
Chris