[PATCH v2 1/3] score: Enforce stack and TLS alignment

Sebastian Huber sebastian.huber at embedded-brains.de
Tue Mar 2 21:08:48 UTC 2021


On 02/03/2021 20:47, Kinsey Moore wrote:
> Enforce alignment of the stack begin and end by allocating enough stack
> area that the desired size can be aligned to CPU_STACK_ALIGNMENT. This
> also ensures that the space reserved for TLS data falls on stack
> alignment boundaries which satisfies TLS alignment requirements.
> ---
>   cpukit/rtems/src/taskcreate.c       |  5 ++++-
>   cpukit/score/src/threadinitialize.c |  8 +++++---
>   cpukit/score/src/tlsallocsize.c     | 22 ++++++++++++++--------
>   3 files changed, 23 insertions(+), 12 deletions(-)
>
> diff --git a/cpukit/rtems/src/taskcreate.c b/cpukit/rtems/src/taskcreate.c
> index c065a159c6..bad13cb532 100644
> --- a/cpukit/rtems/src/taskcreate.c
> +++ b/cpukit/rtems/src/taskcreate.c
> @@ -53,8 +53,11 @@ static rtems_status_code _RTEMS_tasks_Allocate_and_prepare_stack(
>     thread_config->stack_free = _Stack_Free;
>     size = _Stack_Ensure_minimum( config->storage_size );
>     size = _Stack_Extend_size( size, thread_config->is_fp );
> +  size = RTEMS_ALIGN_UP( size, CPU_STACK_ALIGNMENT );

This can overflow. The alignment needs to move into _Stack_Extend_size(), 
which already performs overflow handling. I think we need the following 
invariants, which should be checked by static assertions (if not already 
done):

* CPU_STACK_ALIGNMENT is a power of two

* CPU_HEAP_ALIGNMENT is a power of two

* CPU_STACK_ALIGNMENT >= CPU_HEAP_ALIGNMENT

_Stack_Extend_size() should then add CPU_STACK_ALIGNMENT - 1 to the size.
  

>     thread_config->stack_size = size;
> -  thread_config->stack_area = _Stack_Allocate( size );
> +
> +  /* Allocate enough stack space to enforce alignment on use */
> +  thread_config->stack_area = _Stack_Allocate( size + CPU_STACK_ALIGNMENT - 1 );
We don't need this change with the modified _Stack_Extend_size().
>   
>     if ( thread_config->stack_area == NULL ) {
>       return RTEMS_UNSATISFIED;
> diff --git a/cpukit/score/src/threadinitialize.c b/cpukit/score/src/threadinitialize.c
> index 05c30c3d43..04f62df4de 100644
> --- a/cpukit/score/src/threadinitialize.c
> +++ b/cpukit/score/src/threadinitialize.c
> @@ -84,7 +84,9 @@ bool _Thread_Initialize(
>     }
>   #endif
>   
> -  stack_begin = config->stack_area;
> +  /* There is guaranteed to be enough space allocated to enforce alignment */
> +  stack_begin = (void * const) RTEMS_ALIGN_UP( (uintptr_t) config->stack_area,

Why const? stack_begin is a char pointer, so the cast should be to 
(char *), not (void * const).

> +    CPU_STACK_ALIGNMENT );
The placement of the parentheses and arguments does not follow the coding 
style; keep the macro arguments together.
>     stack_end = stack_begin + config->stack_size;
>   
>     /* Allocate floating-point context in stack area */
> @@ -104,8 +106,8 @@ bool _Thread_Initialize(
>   
>       stack_end -= tls_size;
>       tls_align = (uintptr_t) _TLS_Alignment;
> -    the_thread->Start.tls_area = (void *)
> -      ( ( (uintptr_t) stack_end + tls_align - 1 ) & ~( tls_align - 1 ) );
> +    the_thread->Start.tls_area =
> +      (void *)RTEMS_ALIGN_UP( (uintptr_t)stack_end, tls_align );
Please add a ' ' between the casts and the operands. This change is an 
unrelated cleanup and should be in a separate patch.
>     }
>   
>     _Stack_Initialize(
> diff --git a/cpukit/score/src/tlsallocsize.c b/cpukit/score/src/tlsallocsize.c
> index e7854c677a..0a7ff7d27a 100644
> --- a/cpukit/score/src/tlsallocsize.c
> +++ b/cpukit/score/src/tlsallocsize.c
> @@ -62,23 +62,29 @@ uintptr_t _TLS_Get_allocation_size( void )
>       allocation_size = _TLS_Heap_align_up( size );
>       alignment = _TLS_Heap_align_up( (uintptr_t) _TLS_Alignment );
>   
> -    /*
> -     * The stack allocator does not support aligned allocations.  Allocate
> -     * enough to do the alignment manually.
> -     */
> -    if ( alignment > CPU_HEAP_ALIGNMENT ) {
> -      allocation_size += alignment;
> -    }
> -
>       allocation_size += _TLS_Get_thread_control_block_area_size( alignment );
>   
>   #ifndef __i386__
>       allocation_size += sizeof(TLS_Dynamic_thread_vector);
>   #endif
>   
> +    /*
> +     * Increase the TLS allocation size to the stack alignment requirements
> +     * since it is pulled from the top of the stack allocation. This preserves
> +     * the alignment of the top of the stack.
> +     */
> +    allocation_size = RTEMS_ALIGN_UP( allocation_size, CPU_STACK_ALIGNMENT );
> +
>       if ( _Thread_Maximum_TLS_size != 0 ) {
>         if ( allocation_size <= _Thread_Maximum_TLS_size ) {
>           allocation_size = _Thread_Maximum_TLS_size;
> +
> +        /*
> +         * Decrease the TLS allocation size to the stack alignment requirements
> +         * since the TLS area has reached maximum size.
> +         */
> +        allocation_size =
> +          RTEMS_ALIGN_DOWN( allocation_size, CPU_STACK_ALIGNMENT );
I think we should enforce that _Thread_Maximum_TLS_size is an integral 
multiple of RTEMS_TASK_STORAGE_ALIGNMENT (which needs to be fixed as well).
>         } else {
>           _Internal_error( INTERNAL_ERROR_TOO_LARGE_TLS_SIZE );
>         }

-- 
embedded brains GmbH
Herr Sebastian HUBER
Dornierstr. 4
82178 Puchheim
Germany
email: sebastian.huber at embedded-brains.de
phone: +49-89-18 94 741 - 16
fax:   +49-89-18 94 741 - 08

Registergericht: Amtsgericht München
Registernummer: HRB 157899
Vertretungsberechtigte Geschäftsführer: Peter Rasmussen, Thomas Dörfler
Unsere Datenschutzerklärung finden Sie hier:
https://embedded-brains.de/datenschutzerklaerung/


