problems with RFS filesystem and mkdir

Cudmore, Alan P. (GSFC-5820) alan.p.cudmore at nasa.gov
Wed Mar 20 20:03:20 UTC 2013


It must have something to do with the target.
I ran this example on sparc-sis and m68k-uC5282 using a 32k file system allocation and it worked.

My only 4.11 setup right now is for my Raspberry Pi BSP at home. Eventually I will be able to try it there.

Alan

On Mar 20, 2013, at 9:30 AM, Matthew J Fletcher <amimjf at gmail.com> wrote:

Hi,

I wonder if this is target specific now; if I literally use the sources you posted, I get an error, '-1' returned from rtems_mkdir().

The only change I made was to malloc a smaller chunk, 32k. Of course my malloc() is operating off the same SRAM chip, but in a different address range.
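For reference, the 32k variant I am testing is roughly this (a sketch against your example further down; only the allocation and the descriptor size differ, and SRAM_DISK_SIZE is just shorthand for the sketch):

/* Hypothetical 32k variant: shrink the NV device to 32k and back it
 * with a malloc'd buffer instead of the fixed SRAM window. */
#define SRAM_DISK_SIZE (32 * 1024)

rtems_sram_device_descriptor[0].size = SRAM_DISK_SIZE;
rtems_sram_device_descriptor[0].base = (uint32_t *) malloc (SRAM_DISK_SIZE);

if (cold_boot)
{
    /* erase the whole region before the first format */
    memset (rtems_sram_device_descriptor[0].base, 0xff, SRAM_DISK_SIZE);
}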

Not sure what to do now; I guess I have to work out what's up with RFS on ARM Thumb. Does anyone know what platforms RFS was tested/developed on?



On 19 March 2013 21:36, Cudmore, Alan P. (GSFC-5820) <alan.p.cudmore at nasa.gov> wrote:
No problem. I have a pretty easy-to-use 4.10 setup right now.

I did have to run the memset code for the address space. If you pass a 0 in the cold_boot parameter, it fails.

Alan

On Mar 19, 2013, at 5:21 PM, Matthew J Fletcher <amimjf at gmail.com> wrote:


Alan,

Thanks very much for running this test; it's given me quite a bit of confidence.

Did you have to memset the address space to 0xff before you could register/format the device? I found I had to do that to avoid getting an error code.

I suppose the memset would rapidly throw up problems if I had the addresses wrong. I can't think what the issue might be. I will try a malloc'd area like your example, though, to see if I get a good result.
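Concretely, the erase step I found I needed before registering and formatting is just this (a sketch, using the same descriptor fields as in the code below):

/* Fill the NV region with 0xff before the driver is registered and the
 * volume is formatted, as on a cold boot; without this, the format step
 * returned an error for me. */
memset (rtems_sram_device_descriptor[0].base, 0xff,
        rtems_sram_device_descriptor[0].size);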

On 19 Mar 2013 21:05, "Cudmore, Alan P. (GSFC-5820)" <alan.p.cudmore at nasa.gov> wrote:
I took your code below and created an example that would build and run on my 4.10.2 setup. I ran this on the sparc-sis simulator, and it seems to work fine.

Are you sure you have valid memory for the nvramdisk device?

We rely on RFS on 4.10.2 and are giving it a pretty good stress test right now.

Alan

------------------------------

#include <errno.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/stat.h>
#include <fcntl.h>
#include <string.h>

#include <rtems.h>
#include <rtems/bdbuf.h>
#include <rtems/blkdev.h>
#include <rtems/diskdevs.h>
#include <rtems/error.h>
#include <rtems/flashdisk.h>
#include <rtems/fsmount.h>
#include <rtems/rtems-rfs.h>
#include <rtems/rtems-rfs-format.h>
#include <rtems/nvdisk-sram.h>
#include <rtems/libio.h>   /* mount_and_make_target_path(), rtems_mkdir() */

uint32_t    rtems_nvdisk_configuration_size =1;
const char* sram_driver = "/dev/nvda"; // RTEMS_NVDISK_DEVICE_BASE_NAME, 1st device
const char* sram_path = "/sram";


/*
 * Let the IO system allocate the next available major number.
*/
#define RTEMS_DRIVER_AUTO_MAJOR (0)

/*
 * The SRAM Device setup
*/
rtems_nvdisk_device_desc rtems_sram_device_descriptor[] =
{
    {
        flags:  0,
        base:   0,
        size:   256 * 1024, // 256K (Adjust when ROM.ld, _IMFS_DiskSize is changed)
        nv_ops: &rtems_nvdisk_sram_handlers
    }
};

const rtems_nvdisk_config rtems_nvdisk_configuration[] =
{
    {
        block_size:         512,
        device_count:       1,
        devices:            &rtems_sram_device_descriptor[0],
        flags:              0,
        info_level:         0
    }
};

/**
 * Create the SRAM Disk Driver entry.
 */
rtems_driver_address_table rtems_sram_ops = {
    initialization_entry: rtems_nvdisk_initialize,
    open_entry:           rtems_blkdev_generic_open,
    close_entry:          rtems_blkdev_generic_close,
    read_entry:           rtems_blkdev_generic_read,
    write_entry:          rtems_blkdev_generic_write,
    control_entry:        rtems_blkdev_generic_ioctl
};


void setup_sram_disk (int cold_boot)
{
    rtems_device_major_number major;
    rtems_status_code         sc;

    // settings
    rtems_sram_device_descriptor[0].base = (uint32_t *) malloc (256*1024);

    if (cold_boot)
    {
        memset(rtems_sram_device_descriptor[0].base, 0xff, rtems_sram_device_descriptor[0].size );
    }

    /*
    * Register the NV Disk driver.
    */
    sc = rtems_io_register_driver (RTEMS_DRIVER_AUTO_MAJOR, &rtems_sram_ops, &major);
    if ( sc != RTEMS_SUCCESSFUL )
    {
       printf("RTEMS Register IO driver failed\n");
    }
    else
    {
       printf("RTEMS register IO driver OK\n");
    }
}


/*
 *
 * RFS on SRAM
 *
 */

void mount_rfs_on_sram(int cold_boot)
{
    rtems_rfs_format_config config;
    int                       rc;

    setup_sram_disk (cold_boot);

    // zero is a good set of defaults
    memset (&config, 0, sizeof (rtems_rfs_format_config));

    if (rtems_rfs_format (sram_driver, &config) < 0)
    {
        printf("error: format of %s failed: %s\n", sram_driver, strerror (errno));
    }
    else
    {
       printf("RTEMS RFS format OK\n");
    }

    if (mount_and_make_target_path(
        sram_driver,
        sram_path,
        RTEMS_FILESYSTEM_TYPE_RFS,
        RTEMS_FILESYSTEM_READ_WRITE,
        NULL
      ) != 0)
    {
        printf("error: mount of %s to %s failed: %s\n", sram_driver, sram_path, strerror (errno));
    }
    else
    {
       printf("mount and make target path OK\n");
    }

    rc = rtems_mkdir("sram/DB", S_IRWXU | S_IRGRP | S_IXGRP | S_IROTH | S_IXOTH);
    if ( rc < 0 )
    {
       printf("rtems_mkdir failed\n");
    }
    else
    {
       printf("mkdir sdram/DB OK\n");
    }
}

rtems_task Init(
  rtems_task_argument ignored
)
{
  printf( "\n\n*** Starting NVRAM Disk Test ***\n" );
  mount_rfs_on_sram(1);
  printf( "*** END OF TEST ***\n" );
  exit( 0 );
}

#define CONFIGURE_EXTRA_TASK_STACKS         (CONFIGURE_MAXIMUM_TASKS * RTEMS_MINIMUM_STACK_SIZE)

#define CONFIGURE_MAXIMUM_DRIVERS           6
/* NOTICE: the clock driver is explicitly disabled */
#define CONFIGURE_APPLICATION_DOES_NOT_NEED_CLOCK_DRIVER
#define CONFIGURE_APPLICATION_NEEDS_CONSOLE_DRIVER
#define CONFIGURE_DISABLE_CLASSIC_API_NOTEPADS
#define CONFIGURE_MAXIMUM_TASKS 40                // TX MAX_THREAD
#define CONFIGURE_MAXIMUM_TIMERS 40                // TX MAX_TIMER
#define CONFIGURE_MAXIMUM_SEMAPHORES 40            // TX MAX_MUTEX
#define CONFIGURE_MAXIMUM_MESSAGE_QUEUES 40        // TX MAX_MSG_QUEUE
#define CONFIGURE_MAXIMUM_PARTITIONS 5
#define CONFIGURE_MAXIMUM_REGIONS 5
#define CONFIGURE_MAXIMUM_PORTS 0
#define CONFIGURE_MAXIMUM_PERIODS 0
#define CONFIGURE_MAXIMUM_BARRIERS 0

#define CONFIGURE_MICROSECONDS_PER_TICK   10000 /* 10 milliseconds */
#define CONFIGURE_TICKS_PER_TIMESLICE       50 /* 50 milliseconds */

#define CONFIGURE_APPLICATION_NEEDS_LIBBLOCK
#define CONFIGURE_BDBUF_MAX_READ_AHEAD_BLOCKS  2
#define CONFIGURE_BDBUF_MAX_WRITE_BLOCKS       8
#define CONFIGURE_SWAPOUT_TASK_PRIORITY        15
#define CONFIGURE_USE_IMFS_AS_BASE_FILESYSTEM
#define CONFIGURE_FILESYSTEM_RFS
#define CONFIGURE_LIBIO_MAXIMUM_FILE_DESCRIPTORS 10

#define CONFIGURE_RTEMS_INIT_TASKS_TABLE
#define CONFIGURE_INIT
#include <rtems/confdefs.h>

-------------------------------



On Mar 18, 2013, at 2:35 PM, Matthew J Fletcher <amimjf at gmail.com> wrote:

Hi,


Alas, the mkdir() is the first filesystem operation I am doing. I am not sure if I can send attachments to the list, so I've cut and pasted the relevant functions below. I guess the following would run on any BSP; it's just a 256k block of SRAM. If there is a BSP with enough memory, the base pointer for the filesystem could just be a big malloc().

------------------- snip -------------------------


#define CONFIGURE_IDLE_TASK_BODY rtems_idle
#define CONFIGURE_EXTRA_TASK_STACKS         (CONFIGURE_MAXIMUM_TASKS * RTEMS_MINIMUM_STACK_SIZE)

#define CONFIGURE_MAXIMUM_DRIVERS           6

/* NOTICE: the clock driver is explicitly disabled */
#define CONFIGURE_APPLICATION_DOES_NOT_NEED_CLOCK_DRIVER
#define CONFIGURE_APPLICATION_NEEDS_CONSOLE_DRIVER

#define CONFIGURE_DISABLE_CLASSIC_API_NOTEPADS
#define CONFIGURE_MAXIMUM_TASKS 40                // TX MAX_THREAD
#define CONFIGURE_MAXIMUM_TIMERS 40                // TX MAX_TIMER
#define CONFIGURE_MAXIMUM_SEMAPHORES 40            // TX MAX_MUTEX
#define CONFIGURE_MAXIMUM_MESSAGE_QUEUES 40        // TX MAX_MSG_QUEUE
#define CONFIGURE_MAXIMUM_PARTITIONS 5
#define CONFIGURE_MAXIMUM_REGIONS 5
#define CONFIGURE_MAXIMUM_PORTS 0
#define CONFIGURE_MAXIMUM_PERIODS 0
#define CONFIGURE_MAXIMUM_BARRIERS 0

#define CONFIGURE_MICROSECONDS_PER_TICK   10000 /* 10 milliseconds */
#define CONFIGURE_TICKS_PER_TIMESLICE       50 /* 50 milliseconds */

#define CONFIGURE_APPLICATION_NEEDS_LIBBLOCK
#define CONFIGURE_BDBUF_MAX_READ_AHEAD_BLOCKS  2
#define CONFIGURE_BDBUF_MAX_WRITE_BLOCKS       8
#define CONFIGURE_SWAPOUT_TASK_PRIORITY        15
#define CONFIGURE_USE_IMFS_AS_BASE_FILESYSTEM
#define CONFIGURE_FILESYSTEM_RFS
#define CONFIGURE_LIBIO_MAXIMUM_FILE_DESCRIPTORS 10

#define CONFIGURE_RTEMS_INIT_TASKS_TABLE
#define CONFIGURE_INIT
#include <rtems/confdefs.h>


uint32_t rtems_flashdisk_configuration_size =1;
const char* flash_driver = "/dev/fdda";    // RTEMS_FLASHDISK_DEVICE_BASE_NAME, 1st device
const char* flash_path = "/flash";

uint32_t rtems_nvdisk_configuration_size =1;
const char* sram_driver = "/dev/nvda"; // RTEMS_NVDISK_DEVICE_BASE_NAME, 1st device
const char* sram_path = "/sram";


/**
 * Let the IO system allocate the next available major number.
 */
#define RTEMS_DRIVER_AUTO_MAJOR (0)

#define FLASHDISK_SEGMENT_COUNT 8 /* 128*/    // Spansion S29GL128S, 128Mbit part
#define FLASHDISK_SEGMENT_SIZE (128 * 1024)    // Spansion S29GL128S, 128k sized sectors
#define FLASHDISK_BLOCK_SIZE 512
#define FLASHDISK_BLOCKS_PER_SEGMENT (FLASHDISK_SEGMENT_SIZE / FLASHDISK_BLOCK_SIZE)
#define FLASHDISK_SIZE (FLASHDISK_SEGMENT_COUNT * FLASHDISK_SEGMENT_SIZE)


/**
 * The SRAM Device setup
 */
rtems_nvdisk_device_desc rtems_sram_device_descriptor[] =
{
    {
        flags:  0,
        base:   0,
        size:   256 * 1024, // 256K (Adjust when ROM.ld, _IMFS_DiskSize is changed)
        nv_ops: &rtems_nvdisk_sram_handlers
    }
};

const rtems_nvdisk_config rtems_nvdisk_configuration[] =
{
    {
        block_size:         512,
        device_count:       1,
        devices:            &rtems_sram_device_descriptor[0],
        flags:              0,
        info_level:         0
    }
};




/**
 * Create the SRAM Disk Driver entry.
 */
rtems_driver_address_table rtems_sram_ops = {
    initialization_entry: rtems_nvdisk_initialize,
    open_entry:           rtems_blkdev_generic_open,
    close_entry:          rtems_blkdev_generic_close,
    read_entry:           rtems_blkdev_generic_read,
    write_entry:          rtems_blkdev_generic_write,
    control_entry:        rtems_blkdev_generic_ioctl
};



int setup_sram_disk (int cold_boot)
{
    rtems_device_major_number major;
    rtems_status_code         sc;

    // settings
    rtems_sram_device_descriptor[0].base = (unsigned long)eTalus_IMFS_start;

    if (cold_boot)
    {
        memset(rtems_sram_device_descriptor[0].base, 0xff, rtems_sram_device_descriptor[0].size );
    }

    /*
    * Register the NV Disk driver.
    */
    sc = rtems_io_register_driver (RTEMS_DRIVER_AUTO_MAJOR, &rtems_sram_ops, &major);

    return (sc == RTEMS_SUCCESSFUL) ? 0 : -1;
}


/*
 *
 * RFS on SRAM
 *
 */

void mount_rfs_on_sram(int cold_boot)
{
    rtems_rfs_format_config config;
    int                     rc;

    setup_sram_disk (cold_boot);

    // zero is a good set of defaults
    memset (&config, 0, sizeof (rtems_rfs_format_config));

    if (rtems_rfs_format (sram_driver, &config) < 0)
    {
        sprintf(inital_print_buffer,"error: format of %s failed: %s\n", sram_driver, strerror (errno));
        UART0_SendStr(inital_print_buffer);
    }

    if (mount_and_make_target_path(
        sram_driver,
        sram_path,
        RTEMS_FILESYSTEM_TYPE_RFS,
        RTEMS_FILESYSTEM_READ_WRITE,
        NULL
      ) != 0)
    {
        sprintf(inital_print_buffer,"error: mount of %s to %s failed: %s\n", sram_driver, sram_path, strerror (errno));
        UART0_SendStr(inital_print_buffer);
    }

    rc = rtems_mkdir("sram/DB", S_IRWXU | S_IRGRP | S_IXGRP | S_IROTH | S_IXOTH);
}



------------------- end ------------------------





On 18 March 2013 17:42, Joel Sherrill <joel.sherrill at oarcorp.com> wrote:
Is this reproducible with something you can share?

Looking at the source, I think this is the case where you
have filled the filesystem and there were no free blocks.
But I am looking at the head and am not that familiar
with this filesystem.

--joel

On 3/18/2013 12:20 PM, Matthew J Fletcher wrote:
Hi,

I have an RFS filesystem using the nvram device; it's formatted OK and mount_and_make_target_path() also worked OK. But attempting to mkdir() causes a really low-level issue.

The call stack:

rtems_rfs_block_map_find() at rtems-rfs-block.c:246 0x81198bd8
rtems_rfs_block_map_next_block() at rtems-rfs-block.c:353 0x81198d1e
rtems_rfs_dir_lookup_ino() at rtems-rfs-dir.c:204 0x8119951c
rtems_rfs_rtems_eval_for_make() at rtems-rfs-rtems.c:440 0x8118e142
IMFS_evaluate_for_make() at imfs_eval.c:435 0x81190718
mknod() at mknod.c:64 0x81183d5c
mkdir() at mkdir.c:29 0x81183cf0
build() at rtems_mkdir.c:101 0x811843a0
rtems_mkdir() at rtems_mkdir.c:136 0x811843a0


The rtems_rfs_block_pos_block_past_end macro is failing, causing ENXIO to be returned.

The two arguments to the macro are:

bpos->bno = 1
bpos->boff = 0
bpos->block = 0

map->size->count = 1
map->size->offset = 0

That seems to me to be the cause of ENXIO being returned, but I don't know how my BSP could affect this.
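In other words, as I read it the failing check amounts to something like this (a hypothetical sketch of the past-end test, not the literal macro from rtems-rfs-block.c):

/* Hypothetical sketch only -- not the 4.10.2 macro itself.  With the
 * values above (bno = 1, boff = 0, count = 1) a test of this shape trips,
 * and rtems_rfs_block_map_find() hands back ENXIO. */
static int block_pos_past_end (uint32_t bno, uint32_t boff, uint32_t count)
{
  if (count == 0)
    return (bno != 0) || (boff != 0);  /* empty map: only position 0/0 fits */
  return bno >= count;                 /* 1 >= 1 -> treated as past the end */
}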

Do people use RFS in production systems? I am using the 4.10.2 sources.


regards
---
Matthew J Fletcher



--
Joel Sherrill, Ph.D.             Director of Research & Development
joel.sherrill at OARcorp.com        On-Line Applications Research
Ask me about RTEMS: a free RTOS  Huntsville AL 35805
Support Available                (256) 722-9985




--

regards
---
Matthew J Fletcher






--

regards
---
Matthew J Fletcher

