Optimization issue in RISC-V BSP

Joel Sherrill joel at rtems.org
Sun Jul 30 00:34:25 UTC 2017


Sorry to top post, but this thread is too deep to answer on a phone.

Try looking at the same code on the erc32 bsp and see how it is done.

Also you could disable the atexit() call and see how much further you get.
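
For instance, a minimal sketch of that experiment, assuming the clock driver registers its shutdown handler with a plain atexit( Clock_exit ) call as described further down in this thread; the function and driver names below are illustrative stand-ins, not the hifive1 code:

    #include <stdlib.h>

    /* hypothetical stand-in for Clock_exit */
    static void example_clock_exit( void )
    {
      /* shut the timer hardware back off */
    }

    /* hypothetical stand-in for the clock driver initialization */
    static void example_clock_initialize( void )
    {
      /* ... hardware setup would happen here ... */

    #if 0  /* temporarily skipped to see how much further startup gets */
      atexit( example_clock_exit );
    #endif
    }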

On Jul 29, 2017 7:03 PM, "Denis Obrezkov" <denisobrezkov at gmail.com> wrote:

2017-07-30 1:35 GMT+02:00 Denis Obrezkov <denisobrezkov at gmail.com>:

> 2017-07-29 19:14 GMT+02:00 Joel Sherrill <joel at rtems.org>:
>
>>
>>
>> On Jul 29, 2017 4:04 AM, "Denis Obrezkov" <denisobrezkov at gmail.com>
>> wrote:
>>
>> 2017-07-29 3:45 GMT+02:00 Joel Sherrill <joel at rtems.org>:
>>
>>>
>>>
>>> On Jul 28, 2017 7:11 PM, "Denis Obrezkov" <denisobrezkov at gmail.com>
>>> wrote:
>>>
>>> 2017-07-29 1:41 GMT+02:00 Joel Sherrill <joel at rtems.org>:
>>>
>>>>
>>>>
>>>> On Jul 28, 2017 6:39 PM, "Denis Obrezkov" <denisobrezkov at gmail.com>
>>>> wrote:
>>>>
>>>> 2017-07-29 1:28 GMT+02:00 Joel Sherrill <joel at rtems.org>:
>>>>
>>>>>
>>>>>
>>>>> On Jul 28, 2017 6:14 PM, "Denis Obrezkov" <denisobrezkov at gmail.com>
>>>>> wrote:
>>>>>
>>>>> 2017-07-29 0:57 GMT+02:00 Joel Sherrill <joel at rtems.org>:
>>>>>
>>>>>>
>>>>>>
>>>>>> On Jul 28, 2017 5:55 PM, "Denis Obrezkov" <denisobrezkov at gmail.com>
>>>>>> wrote:
>>>>>>
>>>>>> 2017-07-28 22:36 GMT+02:00 Joel Sherrill <joel at rtems.org>:
>>>>>>
>>>>>>> Can you check the memory immediately after a download?
>>>>>>>
>>>>>>> Then after the loop that copies initialized data into place?
>>>>>>>
>>>>>>> I suspect something off there. Could be a linker script issue or the
>>>>>>> copy gone crazy.
>>>>>>>
>>>>>>> --joel
>>>>>>>
>>>>>>> On Fri, Jul 28, 2017 at 3:20 PM, Denis Obrezkov <
>>>>>>> denisobrezkov at gmail.com> wrote:
>>>>>>>
>>>>>>>> 2017-07-28 22:16 GMT+02:00 Joel Sherrill <joel at rtems.org>:
>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> On Fri, Jul 28, 2017 at 2:50 PM, Denis Obrezkov <
>>>>>>>>> denisobrezkov at gmail.com> wrote:
>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>>> I can see that during task initialization I have a call:
>>>>>>>>>>>>  _Thread_Initialize_information (information=information@entry=0x80000ad4
>>>>>>>>>>>> <_RTEMS_tasks_Information>, the_api=the_api@entry=OBJECTS_CLASSIC_API,
>>>>>>>>>>>> the_class=the_class@entry=1, maximum=124,
>>>>>>>>>>>>     is_string=is_string@entry=false,
>>>>>>>>>>>> maximum_name_length=maximum_name_length@entry=4)
>>>>>>>>>>>>
>>>>>>>>>>>> And maximum is 124, but I have a configuration parameter:
>>>>>>>>>>>> #define CONFIGURE_MAXIMUM_TASKS             4
>>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>> I can't imagine any standard RTEMS test configuring that many
>>>>>>>>>>> tasks.
>>>>>>>>>>> Is there a data corruption issue?
>>>>>>>>>>>
>>>>>>>>>>> 124 = 0x7c which doesn't ring any bells for me on odd memory
>>>>>>>>>>> issues.
>>>>>>>>>>>
>>>>>>>>>>> What is the contents of "Configuration_RTEMS_API"?
>>>>>>>>>>>
>>>>>>>>>> Oh, I changed my configuration options a bit; they are:
>>>>>>>>>>   #define CONFIGURE_APPLICATION_NEEDS_CLOCK_DRIVER
>>>>>>>>>>   #define CONFIGURE_APPLICATION_DISABLE_FILESYSTEM
>>>>>>>>>>   #define CONFIGURE_DISABLE_NEWLIB_REENTRANCY
>>>>>>>>>>   #define CONFIGURE_TERMIOS_DISABLED
>>>>>>>>>>   #define CONFIGURE_LIBIO_MAXIMUM_FILE_DESCRIPTORS 0
>>>>>>>>>>   #define CONFIGURE_MINIMUM_TASK_STACK_SIZE 512
>>>>>>>>>>   #define CONFIGURE_MAXIMUM_PRIORITY 3
>>>>>>>>>>   #define CONFIGURE_DISABLE_CLASSIC_API_NOTEPADS
>>>>>>>>>>   #define CONFIGURE_IDLE_TASK_BODY Init
>>>>>>>>>>   #define CONFIGURE_IDLE_TASK_INITIALIZES_APPLICATION
>>>>>>>>>>   #define CONFIGURE_TASKS 4
>>>>>>>>>>
>>>>>>>>>>   #define CONFIGURE_MAXIMUM_TASKS             4
>>>>>>>>>>
>>>>>>>>>>   #define CONFIGURE_UNIFIED_WORK_AREAS
>>>>>>>>>>
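As an aside, a minimal sketch of how a CONFIGURE_* block like the one above is normally consumed (standard RTEMS confdefs usage, not the exact test file in question): the macros must be visible before <rtems/confdefs.h> is included, and CONFIGURE_INIT must be defined in exactly one source file so the configuration tables are instantiated there.

    /* application configuration, as listed above */
    #define CONFIGURE_APPLICATION_NEEDS_CLOCK_DRIVER
    #define CONFIGURE_APPLICATION_DISABLE_FILESYSTEM
    #define CONFIGURE_MAXIMUM_TASKS 4
    /* ... the remaining CONFIGURE_* options listed above ... */

    #define CONFIGURE_INIT
    #include <rtems/confdefs.h>
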
>>>>>>>>>> Also, it is the test from a lower ticker example.
>>>>>>>>>> Configuration_RTEMS_API with -O0 option:
>>>>>>>>>> {maximum_tasks = 5, maximum_timers = 0, maximum_semaphores = 7,
>>>>>>>>>> maximum_message_queues = 0, maximum_partitions = 0, maximum_regions = 0,
>>>>>>>>>> maximum_ports = 0, maximum_periods = 0,
>>>>>>>>>>   maximum_barriers = 0, number_of_initialization_tasks = 0,
>>>>>>>>>> User_initialization_tasks_table = 0x0}
>>>>>>>>>>
>>>>>>>>>> with -Os option:
>>>>>>>>>> {maximum_tasks = 124, maximum_timers = 0, maximum_semaphores = 7,
>>>>>>>>>> maximum_message_queues = 0, maximum_partitions = 0, maximum_regions = 0,
>>>>>>>>>> maximum_ports = 0, maximum_periods = 0,
>>>>>>>>>>   maximum_barriers = 0, number_of_initialization_tasks = 0,
>>>>>>>>>> User_initialization_tasks_table = 0x0}
>>>>>>>>>>
>>>>>>>>>
>>>>>>>>> Hmmm.. If you look at this structure in gdb without attaching to
>>>>>>>>> the target, what
>>>>>>>>> is maximum_tasks?
>>>>>>>>>
>>>>>>>>> --joel
>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>> It seems that other tasks are LIBBLOCK tasks.
>>>>>>>>>>>>
>>>>>>>>>>>> Also, this is my Configuration during run:
>>>>>>>>>>>> (gdb) p Configuration.stack_space_size
>>>>>>>>>>>> $1 = 2648
>>>>>>>>>>>> (gdb) p Configuration.work_space_size
>>>>>>>>>>>> $2 = 4216
>>>>>>>>>>>> (gdb) p Configuration.interrupt_stack_size
>>>>>>>>>>>> $3 = 512
>>>>>>>>>>>> (gdb) p Configuration.idle_task_stack_size
>>>>>>>>>>>> $4 = 512
>>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>> That looks reasonable. Add CONFIGURE_MAXIMUM_PRIORITY and set it
>>>>>>>>>>> to 4. That should
>>>>>>>>>>> reduce the workspace.
>>>>>>>>>>>
>>>>>>>>>>> Long term, we might want to consider lowering it permanently
>>>>>>>>>>> like one of the Coldfires
>>>>>>>>>>> had to. Or change the default scheduler to the Simple one to
>>>>>>>>>>> save memory.
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>> I haven't dealt with the Scheduler option yet.
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> --
>>>>>>>>>> Regards, Denis Obrezkov
>>>>>>>>>>
>>>>>>>>>
>>>>>>>>> maximum_tasks = 4
>>>>>>>> So, is it a linker file issue?
>>>>>>>>
>>>>>>>> This is it:
>>>>>>>> https://github.com/embeddedden/rtems-riscv/blob/hifive1/c/src/lib/libbsp/riscv32/hifive1/startup/linkcmds
>>>>>>>>
>>>>>>>> --
>>>>>>>> Regards, Denis Obrezkov
>>>>>>>>
>>>>>>>
>>>>>>> After download:
>>>>>> {maximum_tasks = 938162044, maximum_timers = 1270834941,
>>>>>> maximum_semaphores = 2534801264,
>>>>>> maximum_message_queues = 425684620, maximum_partitions = 1496738036,
>>>>>>   maximum_regions = 3085560870, maximum_ports =
>>>>>> 4269782132, maximum_periods = 2362012542,
>>>>>> maximum_barriers = 1138223297, number_of_initialization_tasks = 4224313421,
>>>>>>   User_initialization_tasks_table = 0x43bd1bd3}
>>>>>>
>>>>>> right after data copying:
>>>>>> {maximum_tasks = 124, maximum_timers = 0, maximum_semaphores = 1,
>>>>>> maximum_message_queues = 0, maximum_partitions = 0, maximum_regions = 0,
>>>>>> maximum_ports = 0, maximum_periods = 0,
>>>>>>   maximum_barriers = 0, number_of_initialization_tasks = 0,
>>>>>> User_initialization_tasks_table = 0x0}
>>>>>>
>>>>>> But I found the mistake - I made it in the .data initialization code
>>>>>> (https://github.com/embeddedden/rtems-riscv/blob/hifive1/c/src/lib/libbsp/riscv32/hifive1/start/start.S#L116
>>>>>> - the first byte handled by the copy loop was left uninitialized).
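
For reference, a hedged sketch in C of the kind of copy loop a BSP start file performs here. The symbol names are illustrative (the real ones come from the BSP's linkcmds); the point is that the copy must begin at the very first byte of the region.

    extern char _data_load_start[];  /* LMA: where .data is stored in flash */
    extern char _data_start[];       /* VMA: where .data must live in RAM   */
    extern char _data_end[];         /* VMA: end of .data in RAM            */

    static void copy_initialized_data( void )
    {
      const char *src = _data_load_start;
      char       *dst = _data_start;   /* starting past this point would leave
                                          the first bytes uninitialized      */

      while ( dst < _data_end ) {
        *dst++ = *src++;
      }
    }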
>>>>>>
>>>>>>
>>>>>> Awesome! Does that mean it is running?
>>>>>>
>>>>>>
>>>>>>
>>>>>> --
>>>>>> Regards, Denis Obrezkov
>>>>>>
>>>>>>
>>>>>> Yes, it is running now. Not far, but running.
>>>>> Now I am having an exception during atexit( Clock_exit )
>>>>>
>>>>>
>>>>> Does it get to bsp_cleanup and bsp_reset? Are you seeing the Terminate?
>>>>>
>>>>> I think those are the names. Basically some BSPs deliberately throw an
>>>>> exception as the way to end.
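
A rough sketch of that pattern, assuming the usual bsp_reset() hook from <bsp/bootcard.h>; the body below is purely illustrative and not the hifive1 implementation:

    #include <bsp/bootcard.h>

    void bsp_reset( void )
    {
      for ( ;; ) {
        /* spin here; some BSPs instead execute a breakpoint or illegal
           instruction on purpose so the run visibly ends in the debugger */
      }
    }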
>>>>>
>>>>>
>>>>>
>>>>>
>>>>>
>>>>> --
>>>>> Regards, Denis Obrezkov
>>>>>
>>>>>
>>>>> Unfortunately, I have an exception early on, during clock
>>>> driver initialization, around this line:
>>>> 0x204053a0      80      in ../../../../../gcc-7.1.0/newlib/libc/stdlib/__atexit.c
>>>>
>>>>
>>>> No obvious suggestions from me right now except to debug.
>>>>
>>>>
>>>>
>>>>
>>>>
>>>> --
>>>> Regards, Denis Obrezkov
>>>>
>>>>
>>>> Is it possible that newlib has a wrong linker file? That a variable is
>>> placed outside of the bounds defined by the RTEMS linkcmds?
>>>
>>>
>>> I don't think so. Look at the atexit source and see what it assumes is
>>> initialized.
>>>
>>>
>>>
>>>
>>> --
>>> Regards, Denis Obrezkov
>>>
>>>
>> I found out that the error occurs in gcc-7.1.0/newlib/libc/stdlib/__atexit.c:80:
>> p = _GLOBAL_ATEXIT;
>>
>> p is defined as:
>> register struct _atexit *p;
>>
>> on the one hand the value of p is:
>> (gdb) print p
>> $56 = <optimized out>
>>
>> on the other hand:
>> _GLOBAL_ATEXIT:
>>
>> #ifdef _REENT_GLOBAL_ATEXIT
>> extern struct _atexit *_global_atexit; /* points to head of LIFO stack */
>> # define _GLOBAL_ATEXIT _global_atexit
>> #else
>> # define _GLOBAL_ATEXIT (_GLOBAL_REENT->_atexit)
>> #endif
>>
>> and _REENT_GLOBAL_ATEXIT should be defined due to
>> (newlib/libc/include/sys):
>> #if defined(__rtems__)
>> #define __FILENAME_MAX__ 255
>> #define _READ_WRITE_RETURN_TYPE _ssize_t
>> #define __DYNAMIC_REENT__
>> #define _REENT_GLOBAL_ATEXIT
>> #endif
>>
>> but _global_atexit is located at random locations outside of my memory
>> regions.
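
To illustrate why a garbage value there matters (a hedged illustration, not the verbatim newlib source): with _REENT_GLOBAL_ATEXIT defined, line 80 boils down to loading the global head pointer, and the code that follows reads fields through it. _global_atexit normally lives in .bss and starts out NULL, so a non-NULL junk value pointing outside RAM will fault on the first dereference.

    struct _atexit;                          /* opaque for this sketch       */
    extern struct _atexit *_global_atexit;   /* should still be NULL at boot */

    static int global_atexit_looks_sane( void )
    {
      struct _atexit *p = _global_atexit;    /* what p = _GLOBAL_ATEXIT does */

      /* before the first atexit() call, the only valid value is NULL */
      return p == (struct _atexit *) 0;
    }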
>>
>>
>> Does it show up at a valid address when you look at the nm output?
>>
>> Does it behave differently when you drop optimization?
>>
>>
>>
>> --
>> Regards, Denis Obrezkov
>>
>>
> Oh, the pointer itself is located at a proper location, but the value it
> stores points outside of my memory regions.
> Anyway, it fails before it is able to utilize the wrong value.
> It behaves the same way without optimization.
>
> --
> Regards, Denis Obrezkov
>

I've also found that my bad address (caught in the mbadaddr register)
is 0x1a913cb0:
(gdb) x /3i 0x1a913cb0-4
   0x1a913cac:  unimp
   0x1a913cae:  unimp
   0x1a913cb0:  unimp


At the same time:
(gdb) p _global_atexit
$5 = (struct _atexit *) 0x1a913cac

It is located very close to the bad address.
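
For what it's worth, the two numbers differ by exactly one 32-bit word, possibly what a load through that junk pointer at a small member offset would produce; a trivial check of the arithmetic:

    #include <assert.h>

    int main( void )
    {
      unsigned long bad_addr    = 0x1a913cb0UL;  /* from mbadaddr             */
      unsigned long atexit_head = 0x1a913cacUL;  /* value of _global_atexit   */

      assert( bad_addr - atexit_head == 4UL );   /* one word past the pointer */
      return 0;
    }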
-- 
Regards, Denis Obrezkov