Optimization issue in RISC-V BSP

Denis Obrezkov denisobrezkov at gmail.com
Fri Jul 28 23:39:17 UTC 2017


2017-07-29 1:28 GMT+02:00 Joel Sherrill <joel at rtems.org>:

>
>
> On Jul 28, 2017 6:14 PM, "Denis Obrezkov" <denisobrezkov at gmail.com> wrote:
>
> 2017-07-29 0:57 GMT+02:00 Joel Sherrill <joel at rtems.org>:
>
>>
>>
>> On Jul 28, 2017 5:55 PM, "Denis Obrezkov" <denisobrezkov at gmail.com>
>> wrote:
>>
>> 2017-07-28 22:36 GMT+02:00 Joel Sherrill <joel at rtems.org>:
>>
>>> Can you check the memory immediately after a download?
>>>
>>> Then after the loop that copies initialized data into place?
>>>
>>> I suspect something is off there. It could be a linker script issue or the
>>> copy gone crazy.
>>>
>>> --joel
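
One way to do that check from gdb is to print the structure twice, once right
after the image is loaded and once after the startup code has run its copy
loop, for example (a sketch; breaking on boot_card assumes the BSP's start.S
has finished copying .data before it calls boot_card, which is the usual RTEMS
convention):

  (gdb) load
  (gdb) p Configuration_RTEMS_API
  (gdb) break boot_card
  (gdb) continue
  (gdb) p Configuration_RTEMS_API

The two prints correspond to the "after download" and "right after data
copying" values Denis reports further down in this thread.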
>>>
>>> On Fri, Jul 28, 2017 at 3:20 PM, Denis Obrezkov <denisobrezkov at gmail.com
>>> > wrote:
>>>
>>>> 2017-07-28 22:16 GMT+02:00 Joel Sherrill <joel at rtems.org>:
>>>>
>>>>>
>>>>>
>>>>> On Fri, Jul 28, 2017 at 2:50 PM, Denis Obrezkov <
>>>>> denisobrezkov at gmail.com> wrote:
>>>>>
>>>>>>
>>>>>>>> I can see that during task initialization I have a call:
>>>>>>>>  _Thread_Initialize_information (information=information@entry=0x80000ad4
>>>>>>>> <_RTEMS_tasks_Information>, the_api=the_api@entry=OBJECTS_CLASSIC_API,
>>>>>>>> the_class=the_class@entry=1, maximum=124,
>>>>>>>>     is_string=is_string@entry=false,
>>>>>>>> maximum_name_length=maximum_name_length@entry=4)
>>>>>>>>
>>>>>>>> And maximum is 124, but I have a configuration parameter:
>>>>>>>> #define CONFIGURE_MAXIMUM_TASKS             4
>>>>>>>>
>>>>>>>
>>>>>>> I can't imagine any standard RTEMS test configuring that many tasks.
>>>>>>> Is there a data corruption issue?
>>>>>>>
>>>>>>> 124 = 0x7c, which doesn't ring any bells for me on odd memory issues.
>>>>>>>
>>>>>>> What is the contents of "Configuration_RTEMS_API"?
>>>>>>>
>>>>>> Oh, I changed my configuration options a bit; they are:
>>>>>>   #define CONFIGURE_APPLICATION_NEEDS_CLOCK_DRIVER
>>>>>>   #define CONFIGURE_APPLICATION_DISABLE_FILESYSTEM
>>>>>>   #define CONFIGURE_DISABLE_NEWLIB_REENTRANCY
>>>>>>   #define CONFIGURE_TERMIOS_DISABLED
>>>>>>   #define CONFIGURE_LIBIO_MAXIMUM_FILE_DESCRIPTORS 0
>>>>>>   #define CONFIGURE_MINIMUM_TASK_STACK_SIZE 512
>>>>>>   #define CONFIGURE_MAXIMUM_PRIORITY 3
>>>>>>   #define CONFIGURE_DISABLE_CLASSIC_API_NOTEPADS
>>>>>>   #define CONFIGURE_IDLE_TASK_BODY Init
>>>>>>   #define CONFIGURE_IDLE_TASK_INITIALIZES_APPLICATION
>>>>>>   #define CONFIGURE_TASKS 4
>>>>>>
>>>>>>   #define CONFIGURE_MAXIMUM_TASKS             4
>>>>>>
>>>>>>   #define CONFIGURE_UNIFIED_WORK_AREAS
>>>>>>
>>>>>> Also, it is the test from the lower ticker example.
>>>>>> Configuration_RTEMS_API with the -O0 option:
>>>>>> {maximum_tasks = 5, maximum_timers = 0, maximum_semaphores = 7,
>>>>>> maximum_message_queues = 0, maximum_partitions = 0, maximum_regions = 0,
>>>>>> maximum_ports = 0, maximum_periods = 0,
>>>>>>   maximum_barriers = 0, number_of_initialization_tasks = 0,
>>>>>> User_initialization_tasks_table = 0x0}
>>>>>>
>>>>>> with -Os option:
>>>>>> {maximum_tasks = 124, maximum_timers = 0, maximum_semaphores = 7,
>>>>>> maximum_message_queues = 0, maximum_partitions = 0, maximum_regions = 0,
>>>>>> maximum_ports = 0, maximum_periods = 0,
>>>>>>   maximum_barriers = 0, number_of_initialization_tasks = 0,
>>>>>> User_initialization_tasks_table = 0x0}
>>>>>>
>>>>>
>>>>> Hmmm.. If you look at this structure in gdb without attaching to the
>>>>> target, what
>>>>> is maximum_tasks?
>>>>>
>>>>> --joel
>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>>>
>>>>>>>>
>>>>>>>> It seems that other tasks are LIBBLOCK tasks.
>>>>>>>>
>>>>>>>> Also, this is my Configuration at run time:
>>>>>>>> (gdb) p Configuration.stack_space_size
>>>>>>>> $1 = 2648
>>>>>>>> (gdb) p Configuration.work_space_size
>>>>>>>> $2 = 4216
>>>>>>>> (gdb) p Configuration.interrupt_stack_size
>>>>>>>> $3 = 512
>>>>>>>> (gdb) p Configuration.idle_task_stack_size
>>>>>>>> $4 = 512
>>>>>>>>
>>>>>>>
>>>>>>> That looks reasonable. Add CONFIGURE_MAXIMUM_PRIORITY and set it to
>>>>>>> 4. That should
>>>>>>> reduce the workspace.
>>>>>>>
>>>>>>> Long term, we might want to consider lowering it permanently, like
>>>>>>> one of the Coldfires
>>>>>>> had to. Or change the default scheduler to the Simple one to save
>>>>>>> memory.
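
For reference, selecting the Simple scheduler from the application
configuration is a one-line confdefs change, roughly like this (a sketch;
CONFIGURE_SCHEDULER_SIMPLE is the standard confdefs.h option, but the exact
name should be checked against the RTEMS version being used):

  /* Use the Simple (single linked list) scheduler instead of the default
     Deterministic Priority scheduler; it avoids the per-priority ready
     chains at the cost of O(n) thread insertion. */
  #define CONFIGURE_SCHEDULER_SIMPLE

  #define CONFIGURE_INIT
  #include <rtems/confdefs.h>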
>>>>>>>
>>>>>>>
>>>>>> I haven't dealt with the Scheduler option yet.
>>>>>>
>>>>>>
>>>>>>
>>>>>> --
>>>>>> Regards, Denis Obrezkov
>>>>>>
>>>>>
>>>>> maximum_tasks = 4
>>>> So, is it a linker file issue?
>>>>
>>>> This is it:
>>>> https://github.com/embeddedden/rtems-riscv/blob/hifive1/c/src/lib/libbsp/riscv32/hifive1/startup/linkcmds
>>>>
>>>> --
>>>> Regards, Denis Obrezkov
>>>>
>>>
>> After download:
>> {maximum_tasks = 938162044, maximum_timers = 1270834941,
>> maximum_semaphores = 2534801264,
>> maximum_message_queues = 425684620, maximum_partitions = 1496738036,
>>   maximum_regions = 3085560870, maximum_ports =
>> 4269782132, maximum_periods = 2362012542,
>> maximum_barriers = 1138223297, number_of_initialization_tasks = 4224313421,
>>   User_initialization_tasks_table = 0x43bd1bd3}
>>
>> right after data copying:
>> {maximum_tasks = 124, maximum_timers = 0, maximum_semaphores = 1,
>> maximum_message_queues = 0, maximum_partitions = 0, maximum_regions = 0,
>> maximum_ports = 0, maximum_periods = 0,
>>   maximum_barriers = 0, number_of_initialization_tasks = 0,
>> User_initialization_tasks_table = 0x0}
>>
>> But I found the mistake: I made it in the .data initialization code
>> (https://github.com/embeddedden/rtems-riscv/blob/hifive1/c/src/lib/libbsp/riscv32/hifive1/start/start.S#L116 - the
>> first byte in the loop was uninitialized).
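
For reference, the copy that start.S performs is equivalent to the following C
loop over linker-provided symbols (a minimal sketch; the symbol names _sidata,
_sdata and _edata are illustrative and have to match what the BSP's linkcmds
actually defines):

  #include <stdint.h>

  extern uint8_t _sidata[]; /* load address of .data in flash          */
  extern uint8_t _sdata[];  /* first byte of .data in RAM              */
  extern uint8_t _edata[];  /* one past the last byte of .data in RAM  */

  /* Copy the initialized-data image from flash to RAM.  The destination
     pointer has to start at _sdata itself; starting one byte later is
     exactly the kind of off-by-one that leaves the first byte of .data
     holding whatever happened to be in RAM. */
  static void copy_data_section(void)
  {
    const uint8_t *src = _sidata;
    uint8_t *dst = _sdata;

    while (dst < _edata) {
      *dst++ = *src++;
    }
  }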
>>
>>
>> Awesome! Does that mean it is running?
>>
>>
>>
>> --
>> Regards, Denis Obrezkov
>>
>>
>> Yes, it is running now. Not far, but running.
> Now I am having an exception during atexit( Clock_exit )
>
>
> Does it get to bsp_cleanup and bsp_reset? Are you seeing the Terminate?
>
> I think those are the names. Basically some BSPs deliberately throw an
> exception as the way to end.
>
>
>
>
>
> --
> Regards, Denis Obrezkov
>
>
Unfortunately, I have an exception at the beginning, during clock driver
initialization, around this line:
0x204053a0      80      in ../../../../../gcc-7.1.0/newlib/libc/stdlib/__atexit.c




-- 
Regards, Denis Obrezkov