Unlinking large files in IMFS
Joel Sherrill
joel.sherrill at OARcorp.com
Fri Mar 23 11:44:07 UTC 2001
Andy Dachs wrote:
>
> Thanks, adding if(!p) break; does the trick.
>
> Out of interest, these are the tentative figures I got with a modified version of the iozone benchmark on my eval board (MPC8260, I & D caches off, 40 MHz system clock, 1 Mbyte file):
>
> IOZONE performance measurements:
>      624152 bytes/second for writing the file
>     2330168 bytes/second for reading the file
Is this with the default block size? I would be curious to see the
impact of increasing it. It should lower the relative overhead of
scanning the block lists.
> Andy
>
> --- memfile.c Fri Mar 23 09:54:56 2001
> ***************
> *** 509,514 ****
> --- 509,516 ----
>       if ( info->triply_indirect ) {
>         for ( i=0 ; i<IMFS_MEMFILE_BLOCK_SLOTS ; i++ ) {
>           p = (block_p *) info->triply_indirect[i];
> +         if( !p )
> +           break;
>           for ( j=0 ; j<IMFS_MEMFILE_BLOCK_SLOTS ; j++ ) {
>             if ( p[j] ) {
>               memfile_free_blocks_in_table( (block_p **)&p[j], to_free);
>
> ----- Original Message -----
> From: Joel Sherrill <joel.sherrill at OARcorp.com>
> To: Andy Dachs <iwe at fsmail.net>
> CC: rtems-users at oarcorp.com
> Sent: Thu, 22 Mar 2001 19:56:48 +0000 (GMT+00:00)
> Subject: Re: Unlinking large files in IMFS
>
> > Andy Dachs wrote:
> > >
> > > I just updated my MPC8260 BSP to work with the rtems-ss-20010126 snapshot. It's still a little rough around the edges, but I came across a problem unlinking large files which I don't think is related to my BSP. I saw similar symptoms with the beta3a release but never investigated fully.
> > >
> > > Attached is a small test routine to demonstrate the problem. Modify the NUM_BLOCKS value to get different file sizes (a value of 264 or above should trigger the assertion).
> > >
> > > Writing a 128k file and then calling unlink() is fine:
> > >
> > > *** CRASH TEST, dummy? ***
> > > Writing file of 131584 bytes
> > > Written file
> > > Now trying to unlink it
> > > Success
> > >
> > > Going to 135168 bytes generates an error:
> > >
> > > *** CRASH TEST, dummy? ***
> > > Writing file of 135168 bytes
> > > Written file
> > > Now trying to unlink it
> > > assertion "0" failed: file "../../../../../../rtems-ss-20010126-afd/c/src/lib/libc/malloc.c", line 323
> > >
> > > Just below that number is fine again:
> > >
> > > *** CRASH TEST, dummy? ***
> > > Writing file of 134656 bytes
> > > Written file
> > > Now trying to unlink it
> > > Success
> > >
> > > I should have enough heap (4M), so I don't think that's the problem. By my calculations, 135168 bytes is the magic number where triply indirect blocks would be required (for a filesystem using 128-byte blocks), so could it be related to that?
> >
> > Definitely. There is a different loop for singly, doubly and
> > triply indirect blocks. I think it is quite simple but you
> > should test this and let me know.
> >
> >
> >
> > bash$ cvs diff -c memfile.c
> > Index: memfile.c
> > ===================================================================
> > RCS file: /usr1/CVS/rtems/c/src/libfs/src/imfs/memfile.c,v
> > retrieving revision 1.13
> > diff -c -r1.13 memfile.c
> > *** memfile.c 2001/01/22 14:05:14 1.13
> > --- memfile.c 2001/03/22 20:13:50
> > ***************
> > *** 509,514 ****
> > --- 509,516 ----
> >       if ( info->triply_indirect ) {
> >         for ( i=0 ; i<IMFS_MEMFILE_BLOCK_SLOTS ; i++ ) {
> >           p = (block_p *) info->triply_indirect[i];
> > +         if ( !p )
> > +           break;
> >           for ( j=0 ; j<IMFS_MEMFILE_BLOCK_SLOTS ; j++ ) {
> >             if ( p[j] ) {
> >               memfile_free_blocks_in_table( (block_p **)&p[j], to_free);
> >
> >
> > --
> > Joel Sherrill, Ph.D. Director of Research & Development
> > joel at OARcorp.com On-Line Applications Research
> > Ask me about RTEMS: a free RTOS Huntsville AL 35805
> >    Support Available (256) 722-9985
>
--
Joel Sherrill, Ph.D. Director of Research & Development
joel at OARcorp.com On-Line Applications Research
Ask me about RTEMS: a free RTOS Huntsville AL 35805
Support Available (256) 722-9985