[PATCH] CONFIGURE_MAXIMUM_THREAD_LOCAL_STORAGE_SIZE

Sebastian Huber sebastian.huber at embedded-brains.de
Sun Sep 13 08:31:12 UTC 2020


On 12/09/2020 01:31, Chris Johns wrote:

> On 12/9/20 12:10 am, Joel Sherrill wrote:
> [...]
>> Did we decide if there had to be an explicit configure
>> option to even use this API? Chris and I discussed this
>> as an idea to force a user to make a very conscious
>> decision to allow it.
> Sebastian did not like the idea and I accepted that, so not at the moment. The
> solution is better config management and we will monitor how this goes.
My point is that both an explicit API enable option and the now implemented 
maximum_thread_local_storage_size lead to run-time errors from the new 
directive. However, the maximum_thread_local_storage_size ensures that 
you don't end up with an unexpected task storage configuration, for 
example a thread stack size that is too small, which could be difficult 
to debug.
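
As a rough sketch of how this fits together (the 32-byte TLS budget, the 
storage layout and the names below are made-up example values, not 
recommendations), an application with statically allocated task storage 
could configure it like this:

#include <rtems.h>

/* Sketch only: the 32-byte TLS budget is a placeholder; the real value
   has to match the linked executable, e.g. as reported by rtems-exeinfo. */
#define MAX_TLS_SIZE RTEMS_ALIGN_UP( 32, RTEMS_TASK_STORAGE_ALIGNMENT )

/* Application configuration (defined before #include <rtems/confdefs.h>) */
#define CONFIGURE_MAXIMUM_THREAD_LOCAL_STORAGE_SIZE MAX_TLS_SIZE

/* Storage for one task: minimum stack plus the TLS budget */
#define TASK_STORAGE_SIZE \
  RTEMS_TASK_STORAGE_SIZE( \
    MAX_TLS_SIZE + RTEMS_MINIMUM_STACK_SIZE, \
    RTEMS_DEFAULT_ATTRIBUTES \
  )

static RTEMS_ALIGNED( RTEMS_TASK_STORAGE_ALIGNMENT )
  char task_storage[ TASK_STORAGE_SIZE ];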
>
>> There does need to be documentation on the allocation
>> strategies, when to use, limitations, and particularly the
>> use of rtems-exeinfo to get the TLS size. This is one case
>> for sure where an RTEMS Tool needs explicit mention in
>> both the API and configure option sections.
> I agree. This one needs a little more than the others.

I updated the ticket description:

https://devel.rtems.org/ticket/3959

>
>> I have had flashbacks to how often we used to get questions
>> about why adding a sleep(1) in the middle of hello world locked
>> up. Then we added an option to say "does not need clock
>> driver" and the user questions stopped.  I would rather be a
>> bit aggressive on setup and avoid this for tasks with statically
>> allocated resources.
> I am concerned we will have really difficult to debug issues that appear to be
> bugs but are weaknesses in the configuration.
I think it is now quite safe. If something is configured incorrectly, 
then you get an error status (RTEMS_INVALID_SIZE). You don't get 
problems like a thread stack that is suddenly too small, with a 
potential overflow.
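
Continuing the sketch above (the task name and priority are arbitrary), 
the construction then fails cleanly instead of silently shrinking the 
thread stack:

static void construct_worker( void )
{
  rtems_task_config config = {
    .name = rtems_build_name( 'W', 'O', 'R', 'K' ),
    .initial_priority = 10,
    .storage_area = task_storage,
    .storage_size = sizeof( task_storage ),
    .maximum_thread_local_storage_size = MAX_TLS_SIZE,
    .initial_modes = RTEMS_DEFAULT_MODES,
    .attributes = RTEMS_DEFAULT_ATTRIBUTES
  };
  rtems_id task_id;
  rtems_status_code sc;

  sc = rtems_task_construct( &config, &task_id );

  if ( sc != RTEMS_SUCCESSFUL ) {
    /* RTEMS_INVALID_SIZE here means the TLS size of the executable (or
       the storage area) exceeds what was configured; the mismatch is
       reported instead of ending up with a truncated stack. */
    rtems_fatal( RTEMS_FATAL_SOURCE_APPLICATION, sc );
  }
}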

