Optimization issue in RISC-V BSP

Joel Sherrill joel at rtems.org
Fri Jul 28 20:36:44 UTC 2017


Can you check the memory immediately after a download?

Then after the loop that copies initialized data into place?
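
A rough sketch of that check (assuming the usual flow where the BSP's
start code copies .data into place and then calls boot_card(); adjust
the symbol names if the HiFive1 startup differs):

  (gdb) load
  (gdb) print Configuration_RTEMS_API.maximum_tasks
  (gdb) tbreak boot_card
  (gdb) continue
  (gdb) print Configuration_RTEMS_API.maximum_tasks

The first print reads the variable at its RAM address right after
download, so if .data is loaded to flash it may legitimately show
garbage until the copy runs; the interesting question is whether the
value is correct once boot_card is reached, and whether it stays
correct.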

I suspect something is off there. It could be a linker script issue or
the copy loop going wrong.
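
In case it helps, the pattern the copy loop normally relies on looks
roughly like this in linkcmds (region and symbol names vary per BSP,
so treat this as a sketch, not your actual script):

  .data : ALIGN(4) {
    _data_start = .;
    *(.data .data.*)
    _data_end = .;
  } >ram AT>flash
  _data_load_start = LOADADDR(.data);

If the symbols the startup loop uses don't bracket exactly that
region, or the VMA/LMA pair for .data is off, the copy lands on the
wrong addresses and can corrupt a nearby global. That would also fit
the -O0 vs -Os difference, since changing the optimization level
changes section sizes and layout.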

--joel

On Fri, Jul 28, 2017 at 3:20 PM, Denis Obrezkov <denisobrezkov at gmail.com>
wrote:

> 2017-07-28 22:16 GMT+02:00 Joel Sherrill <joel at rtems.org>:
>
>>
>>
>> On Fri, Jul 28, 2017 at 2:50 PM, Denis Obrezkov <denisobrezkov at gmail.com>
>> wrote:
>>
>>>
>>>>> I can see that during task initialization I have a call:
>>>>>  _Thread_Initialize_information (information=information@entry=0x80000ad4
>>>>> <_RTEMS_tasks_Information>, the_api=the_api@entry=OBJECTS_CLASSIC_API,
>>>>> the_class=the_class@entry=1, maximum=124,
>>>>>     is_string=is_string@entry=false,
>>>>> maximum_name_length=maximum_name_length@entry=4)
>>>>>
>>>>> And maximum is 124, but I have a configuration parameter:
>>>>> #define CONFIGURE_MAXIMUM_TASKS             4
>>>>>
>>>>
>>>> I can't imagine any standard RTEMS test configuring that many tasks.
>>>> Is there a data corruption issue?
>>>>
>>>> 124 = 0x7c which doesn't ring any bells for me on odd memory issues.
>>>>
>>>> What are the contents of "Configuration_RTEMS_API"?
>>>>
>>> Oh, I changed my configuration options a bit; they are:
>>>   #define CONFIGURE_APPLICATION_NEEDS_CLOCK_DRIVER
>>>   #define CONFIGURE_APPLICATION_DISABLE_FILESYSTEM
>>>   #define CONFIGURE_DISABLE_NEWLIB_REENTRANCY
>>>   #define CONFIGURE_TERMIOS_DISABLED
>>>   #define CONFIGURE_LIBIO_MAXIMUM_FILE_DESCRIPTORS 0
>>>   #define CONFIGURE_MINIMUM_TASK_STACK_SIZE 512
>>>   #define CONFIGURE_MAXIMUM_PRIORITY 3
>>>   #define CONFIGURE_DISABLE_CLASSIC_API_NOTEPADS
>>>   #define CONFIGURE_IDLE_TASK_BODY Init
>>>   #define CONFIGURE_IDLE_TASK_INITIALIZES_APPLICATION
>>>   #define CONFIGURE_TASKS 4
>>>
>>>   #define CONFIGURE_MAXIMUM_TASKS             4
>>>
>>>   #define CONFIGURE_UNIFIED_WORK_AREAS
>>>
>>> Also, it is the test from a lower ticker example.
>>> Configuration_RTEMS_API with the -O0 option:
>>> {maximum_tasks = 5, maximum_timers = 0, maximum_semaphores = 7,
>>> maximum_message_queues = 0, maximum_partitions = 0, maximum_regions = 0,
>>> maximum_ports = 0, maximum_periods = 0,
>>>   maximum_barriers = 0, number_of_initialization_tasks = 0,
>>> User_initialization_tasks_table = 0x0}
>>>
>>> and with the -Os option:
>>> {maximum_tasks = 124, maximum_timers = 0, maximum_semaphores = 7,
>>> maximum_message_queues = 0, maximum_partitions = 0, maximum_regions = 0,
>>> maximum_ports = 0, maximum_periods = 0,
>>>   maximum_barriers = 0, number_of_initialization_tasks = 0,
>>> User_initialization_tasks_table = 0x0}
>>>
>>
>> Hmmm... if you look at this structure in gdb without attaching to the
>> target, what is maximum_tasks?
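>>
>> Something like this, on the ELF alone (the toolchain prefix and
>> binary name here are just examples; use your cross gdb and your
>> executable):
>>
>>   $ riscv32-rtems4.12-gdb ticker.exe
>>   (gdb) print Configuration_RTEMS_API.maximum_tasks
>>
>> With no target attached, gdb reads the value straight out of the
>> file's .data, i.e. what the compiler and linker actually emitted.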
>>
>> --joel
>>
>>>
>>>
>>>
>>>>
>>>>>
>>>>> It seems that the other tasks are LIBBLOCK tasks.
>>>>>
>>>>> Also, this is my Configuration during run:
>>>>> (gdb) p Configuration.stack_space_size
>>>>> $1 = 2648
>>>>> (gdb) p Configuration.work_space_size
>>>>> $2 = 4216
>>>>> (gdb) p Configuration.interrupt_stack_size
>>>>> $3 = 512
>>>>> (gdb) p Configuration.idle_task_stack_size
>>>>> $4 = 512
>>>>>
>>>>
>>>> That looks reasonable. Add CONFIGURE_MAXIMUM_PRIORITY and set it to 4.
>>>> That should reduce the workspace.
>>>>
>>>> Long term, we might want to consider lowering it permanently, like one
>>>> of the ColdFires had to, or changing the default scheduler to the
>>>> Simple one to save memory.
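>>>>
>>>> As a sketch of both ideas (option names as in confdefs.h; if I
>>>> remember right, confdefs.h wants the priority limit to be one less
>>>> than a power of two, so "4 levels" means a value of 3):
>>>>
>>>>   #define CONFIGURE_MAXIMUM_PRIORITY 3   /* priorities 0-3 */
>>>>   #define CONFIGURE_SCHEDULER_SIMPLE     /* single ready chain */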
>>>>
>>>>
>>> I haven't dealt with the scheduler option yet.
>>>
>>>
>>>
>>> --
>>> Regards, Denis Obrezkov
>>>
>>
> maximum_tasks = 4
> So, is it a linker file issue?
>
> This is it:
> https://github.com/embeddedden/rtems-riscv/blob/hifive1/c/src/lib/libbsp/riscv32/hifive1/startup/linkcmds
>
> --
> Regards, Denis Obrezkov
>