Optimization issue in RISC-V BSP
Joel Sherrill
joel at rtems.org
Fri Jul 28 19:36:35 UTC 2017
On Fri, Jul 28, 2017 at 10:31 AM, Denis Obrezkov <denisobrezkov at gmail.com>
wrote:
> 2017-07-28 17:05 GMT+02:00 Joel Sherrill <joel at rtems.org>:
>
>>
>>
>> On Fri, Jul 28, 2017 at 9:23 AM, Denis Obrezkov <denisobrezkov at gmail.com>
>> wrote:
>>
>>> 2017-07-28 15:19 GMT+02:00 Sebastian Huber <sebastian.huber at embedded-brains.de>:
>>>
>>>> On 28/07/17 15:15, Denis Obrezkov wrote:
>>>>
>>>>> 2017-07-28 14:56 GMT+02:00 Joel Sherrill <joel at rtems.org>:
>>>>>
>>>>> There is a debug option near the bottom of confdefs.h which you
>>>>> can enable to generate a data structure filled in with various
>>>>> values computed by confdefs. You can look at that in gdb without
>>>>> loading it on the target.
>>>>>
>>>>> It is probably worth it to look at that now and see what you spot.
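>>>>>
>>>>> Something like this in the test configuration (I am going from memory,
>>>>> so check the exact macro name near the bottom of confdefs.h):
>>>>>
>>>>>   #define CONFIGURE_CONFDEFS_DEBUG
>>>>>   /* ... the rest of the normal CONFIGURE_* settings ... */
>>>>>   #define CONFIGURE_INIT
>>>>>   #include <rtems/confdefs.h>
>>>>>
>>>>> Then "p Configuration_Debug" in gdb should dump the computed values.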
>>>>>
>>>>> Okay, I'll try.
>>>>>
>>>>>
>>>>> And a random thought... what's the interrupt stack size and how is
>>>>> it allocated? What are the related port-specific macros set to?
>>>>>
>>>>> I don't completely understand what the interrupt stack is, because
>>>>> when an interrupt occurs I save all registers and move the stack
>>>>> pointer, handle the interrupt,
>>>>>
>>>>
>>>> Now you handle the interrupt on the stack of the interrupted context
>>>> (usually a thread). So, you must take this overhead into account for every
>>>> thread. If you switch to an interrupt stack, then you only have to account
>>>> for one interrupt frame per thread. If you support nested interrupts, you
>>>> need even more space per thread without an interrupt stack.
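>>>>
>>>> A rough worked example (numbers are made up): with 10 threads, a
>>>> 64-byte interrupt frame, and a nesting depth of 2, handling interrupts
>>>> on the thread stacks costs every thread 2 * 64 = 128 extra bytes plus
>>>> whatever the handlers themselves use, so over 1280 bytes across the 10
>>>> threads. With a dedicated interrupt stack, each thread only needs the
>>>> one outermost frame (64 bytes), and the nested frames plus the
>>>> handlers' own usage are paid for once on the shared stack.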
>>>>
>>>
>>> Do I understand right that RTEMS has a wrapper for interrupt handlers
>>> that creates a dedicated 'interrupt handler' task?
>>> I think it does not, since we call the ISR directly from assembler code;
>>> should we therefore save registers on a dedicated interrupt stack
>>> ourselves in the ISR?
>>>
>>
>> There can be interrupt server threads, but that is a distinct topic.
>>
>> Interrupts have to execute on a stack. As Gedare said, you are currently
>> processing each ISR on the stack of the thread that was interrupted.
>> Most of the other ports have a dedicated interrupt stack and switch to it.
>>
>> There is a worst case stack usage for each thread and for the interrupts.
>> Without a dedicated interrupt stack, each task must account for its own
>> worst usage plus the worst case interrupt usage when allocating its stack.
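>>
>> The switch itself happens in the ISR entry code. A minimal sketch of the
>> idea in pseudo-C (real ports do this in assembly, and the names here
>> only mirror, not match, the actual port symbols):
>>
>>   extern uint32_t _ISR_Nest_level;
>>
>>   void isr_entry_sketch( void )
>>   {
>>     /* the interrupted context was already saved on the thread stack */
>>     if ( _ISR_Nest_level++ == 0 ) {
>>       /* outermost interrupt: point sp at the top of the
>>          dedicated interrupt stack */
>>     }
>>     /* dispatch the handler registered for this vector */
>>     if ( --_ISR_Nest_level == 0 ) {
>>       /* restore the saved thread sp before returning */
>>     }
>>   }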
>>
>>
>>>
>>> Also, I don't understand what you mean by: "you only have to account
>>> for one interrupt frame per thread".
>>> And what is an 'interrupt frame'? I found something in a relatively
>>> old guide:
>>> https://docs.rtems.org/releases/rtemsdocs-4.9.6/share/rtems/html/porting/porting00033.html
>>> but it doesn't make it clear.
>>>
>>
>> An interrupt frame is the set of data that must be saved for each
>> interrupt occurrence. The "only have to save one" is because you can
>> switch to a dedicated stack where interrupts are processed and possibly
>> nested, so the stack usage of the handlers and any nested interrupts is
>> accounted for once, on that one stack.
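>>
>> For a concrete picture, a hypothetical RV32 frame covering the
>> caller-saved registers plus the trap CSRs (layout is illustrative, not
>> the actual port definition):
>>
>>   typedef struct {
>>     uint32_t ra;                              /* return address */
>>     uint32_t t0, t1, t2, t3, t4, t5, t6;      /* temporaries */
>>     uint32_t a0, a1, a2, a3, a4, a5, a6, a7;  /* argument registers */
>>     uint32_t mepc;     /* PC of the interrupted instruction */
>>     uint32_t mstatus;  /* saved machine status */
>>   } CPU_Interrupt_frame;
>>
>> The callee-saved registers can stay where they are because the handler,
>> as a normal C function, preserves them.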
>>
>> But you need to figure out why the memory allocated changed.
>> Optimization level should be independent of that.
>>
>>
>>> --
>>> Regards, Denis Obrezkov
>>>
>>
>>
> I can see that during task initialization I have a call:
> _Thread_Initialize_information (information=information@entry=0x80000ad4
> <_RTEMS_tasks_Information>, the_api=the_api@entry=OBJECTS_CLASSIC_API,
> the_class=the_class@entry=1, maximum=124,
> is_string=is_string@entry=false, maximum_name_length=maximum_name_length@entry=4)
>
> And maximum is 124, but I have a configuration parameter:
> #define CONFIGURE_MAXIMUM_TASKS 4
>
I can't imagine any standard RTEMS test configuring that many tasks.
Is there a data corruption issue?
124 = 0x7c, which doesn't ring any bells for me on odd memory issues.
What are the contents of "Configuration_RTEMS_API"?
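For instance (assuming the usual confdefs-generated table and field names):

(gdb) p Configuration_RTEMS_API
(gdb) p Configuration_RTEMS_API.maximum_tasks

maximum_tasks there should reflect CONFIGURE_MAXIMUM_TASKS plus whatever
confdefs adds internally, so it is a good place to see where 124 comes from.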
>
> It seems that the other tasks are LIBBLOCK tasks.
>
> Also, this is my Configuration at run time:
> (gdb) p Configuration.stack_space_size
> $1 = 2648
> (gdb) p Configuration.work_space_size
> $2 = 4216
> (gdb) p Configuration.interrupt_stack_size
> $3 = 512
> (gdb) p Configuration.idle_task_stack_size
> $4 = 512
>
That looks reasonable. Add CONFIGURE_MAXIMUM_PRIORITY and set it low; 3 is
the smallest value confdefs.h accepts. That should reduce the workspace,
since the default priority scheduler allocates a ready queue per priority
level.
Long term, we might want to consider lowering it permanently like one of
the Coldfires had to. Or change the default scheduler to the Simple one to
save memory.
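As a sketch, either knob goes in the test configuration before confdefs.h
is included (values illustrative; they are alternatives, not a pair):

  /* alternative 1: shrink the priority scheduler's per-priority queues */
  #define CONFIGURE_MAXIMUM_PRIORITY 3

  /* alternative 2: the Simple scheduler keeps one ready chain instead */
  #define CONFIGURE_SCHEDULER_SIMPLE

  #define CONFIGURE_INIT
  #include <rtems/confdefs.h>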
--joel
>
> --
> Regards, Denis Obrezkov
>