Report on failing tests with thread stack protection and their resolution.
Joel Sherrill
joel at rtems.org
Sun Jan 24 16:10:44 UTC 2021
On Sun, Jan 24, 2021, 8:59 AM Utkarsh Rai <utkarsh.rai60 at gmail.com> wrote:
>
>
> On Sat, Jan 23, 2021 at 9:29 PM Joel Sherrill <joel at rtems.org> wrote:
>
>>
>>
>> On Sat, Jan 23, 2021, 12:28 AM Utkarsh Rai <utkarsh.rai60 at gmail.com>
>> wrote:
>>
>>>
>>>
>>> On Mon, Dec 7, 2020 at 8:00 AM Utkarsh Rai <utkarsh.rai60 at gmail.com>
>>> wrote:
>>>
>>>>
>>>>
>>>> On Thu, Dec 3, 2020 at 10:22 PM Gedare Bloom <gedare at rtems.org> wrote:
>>>>
>>>>>
>>>>>
>>>>> On Wed, Dec 2, 2020 at 5:53 PM Utkarsh Rai <utkarsh.rai60 at gmail.com>
>>>>> wrote:
>>>>>
>>>>>> Hello,
>>>>>> As discussed in this
>>>>>> <https://lists.rtems.org/pipermail/devel/2020-November/063341.html> thread,
>>>>>> I have compiled a list of the tests that deal with inter-stack
>>>>>> communication and fail with the thread stack protection option. Most of
>>>>>> these tests pass when, as Sebastian suggested (and demonstrated with a
>>>>>> helpful example), I disable memory protection at the places where the
>>>>>> current thread accesses the contents of other threads' stacks. A few
>>>>>> tests still fail because the inter-stack access happens in the
>>>>>> application code itself.
>>>>>>
>>>>>> The changes I have made are -
>>>>>>
>>>>>> diff --git a/bsps/arm/realview-pbx-a9/mmu/bsp-set-mmu-attr.c
>>>>>> b/bsps/arm/realview-pbx-a9/mmu/bsp-set-mmu-attr.c
>>>>>> index c176d4b8c5..a45b175395 100644
>>>>>> --- a/bsps/arm/realview-pbx-a9/mmu/bsp-set-mmu-attr.c
>>>>>> +++ b/bsps/arm/realview-pbx-a9/mmu/bsp-set-mmu-attr.c
>>>>>> @@ -1,15 +1,49 @@
>>>>>> #include <bsp/arm-cp15-start.h>
>>>>>> #include <rtems/score/memoryprotection.h>
>>>>>> +#include <rtems/score/threadimpl.h>
>>>>>> #include <libcpu/arm-cp15.h>
>>>>>>
>>>>>> +bool set_memory_flags(Thread_Control* thread, void* arg)
>>>>>> +{
>>>>>> + uintptr_t begin;
>>>>>> + uintptr_t end;
>>>>>> + uint32_t flags;
>>>>>> + rtems_interrupt_level irq_level;
>>>>>> + Thread_Control *executing;
>>>>>> +
>>>>>> + executing = _Thread_Executing;
>>>>>> +
>>>>>> + if(thread != executing) {
>>>>>>
>>>>> This is not concurrency-safe. By the time the check happens, or the
>>>>> following code runs, the thread could become the executing thread.
>>>>>
>>>>>
>>>>>> +
>>>>>> + flags = *( uint32_t *)( arg );
>>>>>> + begin = thread->Start.Initial_stack.area;
>>>>>> + end = begin + thread->Start.Initial_stack.size;
>>>>>> +
>>>>>> + rtems_interrupt_disable(irq_level);
>>>>>> + arm_cp15_set_translation_table_entries(begin, end, flags);
>>>>>> + rtems_interrupt_enable(irq_level);
>>>>>> + }
>>>>>> +
>>>>>> + return false;
>>>>>>
>>>>> why -- what does the return value mean?
>>>>>
>>>>
>>>> While iterating over threads, if the visitor returns true the iteration
>>>> stops.
>>>>
>>>>
>>>>>
>>>>>
>>>>>> +}
>>>>>> +
>>>>>> +rtems_status_code _Memory_protection_Enable( void )
>>>>>> +{
>>>>>> + uint32_t access_flags;
>>>>>> +
>>>>>> + access_flags = translate_flags( RTEMS_NO_ACCESS );
>>>>>> +
>>>>>> + _Thread_Iterate( set_memory_flags, &access_flags );
>>>>>> +
>>>>>> + return RTEMS_SUCCESSFUL; // check the return values for iterating function and current method
>>>>>> +}
>>>>>> +
>>>>>> +rtems_status_code _Memory_protection_Disable( void )
>>>>>> +{
>>>>>> + uint32_t access_flags;
>>>>>> +
>>>>>> + access_flags = translate_flags( RTEMS_READ_WRITE );
>>>>>> +
>>>>>> + _Thread_Iterate( set_memory_flags, &access_flags );
>>>>>> +
>>>>>> + return RTEMS_SUCCESSFUL;
>>>>>> }
>>>>>> \ No newline at end of file
>>>>>> diff --git a/cpukit/include/rtems/score/coremsgimpl.h
>>>>>> b/cpukit/include/rtems/score/coremsgimpl.h
>>>>>> index e598dce96a..3719a3d3c8 100644
>>>>>> --- a/cpukit/include/rtems/score/coremsgimpl.h
>>>>>> +++ b/cpukit/include/rtems/score/coremsgimpl.h
>>>>>> @@ -27,6 +27,10 @@
>>>>>> #include <rtems/score/threaddispatch.h>
>>>>>> #include <rtems/score/threadqimpl.h>
>>>>>>
>>>>>> +#if defined RTEMS_THREAD_STACK_PROTECTION
>>>>>> + #include <rtems/score/memoryprotection.h>
>>>>>> +#endif
>>>>>> +
>>>>>> #include <limits.h>
>>>>>> #include <string.h>
>>>>>>
>>>>>> @@ -586,7 +590,9 @@ RTEMS_INLINE_ROUTINE Thread_Control
>>>>>> *_CORE_message_queue_Dequeue_receiver(
>>>>>> if ( the_thread == NULL ) {
>>>>>> return NULL;
>>>>>> }
>>>>>> -
>>>>>> +#if defined RTEMS_THREAD_STACK_PROTECTION
>>>>>> + _Memory_protection_Disable();
>>>>>>
>>>>> I wonder if it is necessary to disable all protection, or can you just
>>>>> disable the protection applied to 'the_thread' (or maybe to 'executing')?
>>>>>
>>>>
>>>> No, it is not necessary. I will make the changes.
>>>>
>>>>
>>>>>
>>>>>
>>>>>> +#endif
>>>>>> *(size_t *) the_thread->Wait.return_argument = size;
>>>>>> the_thread->Wait.count = (uint32_t) submit_type;
>>>>>>
>>>>>> @@ -595,6 +601,9 @@ RTEMS_INLINE_ROUTINE Thread_Control
>>>>>> *_CORE_message_queue_Dequeue_receiver(
>>>>>> the_thread->Wait.return_argument_second.mutable_object,
>>>>>> size
>>>>>> );
>>>>>> +#if defined RTEMS_THREAD_STACK_PROTECTION
>>>>>> + _Memory_protection_Enable();
>>>>>> +#endif
>>>>>>
>>>>>> _Thread_queue_Extract_critical(
>>>>>> &the_message_queue->Wait_queue.Queue,
>>>>>>
>>>>>> diff --git a/cpukit/posix/src/psignalunblockthread.c
>>>>>> b/cpukit/posix/src/psignalunblockthread.c
>>>>>> index 80a0f33a09..e0f8468de6 100644
>>>>>> --- a/cpukit/posix/src/psignalunblockthread.c
>>>>>> +++ b/cpukit/posix/src/psignalunblockthread.c
>>>>>> @@ -24,6 +24,9 @@
>>>>>> #include <signal.h>
>>>>>>
>>>>>> #include <rtems/score/isr.h>
>>>>>> +#if defined RTEMS_THREAD_STACK_PROTECTION
>>>>>> +#include <rtems/score/memoryprotection.h>
>>>>>> +#endif
>>>>>> #include <rtems/score/threadimpl.h>
>>>>>> #include <rtems/score/threadqimpl.h>
>>>>>> #include <rtems/score/watchdogimpl.h>
>>>>>> @@ -205,6 +208,10 @@ bool _POSIX_signals_Unblock_thread(
>>>>>>
>>>>>> the_info = (siginfo_t *) the_thread->Wait.return_argument;
>>>>>>
>>>>>> +#if defined RTEMS_THREAD_STACK_PROTECTION
>>>>>> +_Memory_protection_Disable();
>>>>>> +#endif
>>>>>> +
>>>>>> if ( !info ) {
>>>>>> the_info->si_signo = signo;
>>>>>> the_info->si_code = SI_USER;
>>>>>> @@ -212,6 +219,9 @@ bool _POSIX_signals_Unblock_thread(
>>>>>> } else {
>>>>>> *the_info = *info;
>>>>>> }
>>>>>> +#if defined RTEMS_THREAD_STACK_PROTECTION
>>>>>> +_Memory_protection_Enable();
>>>>>> +#endif
>>>>>>
>>>>>> _Thread_queue_Extract_with_proxy( the_thread );
>>>>>> return _POSIX_signals_Unblock_thread_done( the_thread, api,
>>>>>> true );
>>>>>> diff --git a/cpukit/rtems/src/eventsurrender.c
>>>>>> b/cpukit/rtems/src/eventsurrender.c
>>>>>> index 49f77d2663..5de62ec292 100644
>>>>>> --- a/cpukit/rtems/src/eventsurrender.c
>>>>>> +++ b/cpukit/rtems/src/eventsurrender.c
>>>>>> @@ -23,6 +23,10 @@
>>>>>> #include <rtems/score/threadimpl.h>
>>>>>> #include <rtems/score/watchdogimpl.h>
>>>>>>
>>>>>> +#if defined RTEMS_THREAD_STACK_PROTECTION
>>>>>> + #include <rtems/score/memoryprotection.h>
>>>>>> +#endif
>>>>>> +
>>>>>> static void _Event_Satisfy(
>>>>>> Thread_Control *the_thread,
>>>>>> Event_Control *event,
>>>>>> @@ -31,7 +35,13 @@ static void _Event_Satisfy(
>>>>>> )
>>>>>> {
>>>>>> event->pending_events = _Event_sets_Clear( pending_events,
>>>>>> seized_events );
>>>>>> +#if defined RTEMS_THREAD_STACK_PROTECTION
>>>>>> + _Memory_protection_Disable();
>>>>>> +#endif
>>>>>> *(rtems_event_set *) the_thread->Wait.return_argument =
>>>>>> seized_events;
>>>>>> +#if defined RTEMS_THREAD_STACK_PROTECTION
>>>>>> + _Memory_protection_Enable();
>>>>>> +#endif
>>>>>> }
>>>>>>
>>>>>> static bool _Event_Is_blocking_on_event(
>>>>>> diff --git a/cpukit/rtems/src/regionprocessqueue.c
>>>>>> b/cpukit/rtems/src/regionprocessqueue.c
>>>>>> index 4adaf66674..29b078a38c 100644
>>>>>> --- a/cpukit/rtems/src/regionprocessqueue.c
>>>>>> +++ b/cpukit/rtems/src/regionprocessqueue.c
>>>>>> @@ -22,6 +22,10 @@
>>>>>> #include <rtems/score/status.h>
>>>>>> #include <rtems/score/threadqimpl.h>
>>>>>>
>>>>>> +#if defined RTEMS_THREAD_STACK_PROTECTION
>>>>>> + #include <rtems/score/memoryprotection.h>
>>>>>> +#endif
>>>>>> +
>>>>>> void _Region_Process_queue(
>>>>>> Region_Control *the_region
>>>>>> )
>>>>>> @@ -63,8 +67,13 @@ void _Region_Process_queue(
>>>>>>
>>>>>> if ( the_segment == NULL )
>>>>>> break;
>>>>>> -
>>>>>> +#if defined RTEMS_THREAD_STACK_PROTECTION
>>>>>> + _Memory_protection_Disable();
>>>>>> +#endif
>>>>>> *(void **)the_thread->Wait.return_argument = the_segment;
>>>>>> +#if defined RTEMS_THREAD_STACK_PROTECTION
>>>>>> + _Memory_protection_Enable();
>>>>>> +#endif
>>>>>> _Thread_queue_Extract( the_thread );
>>>>>> the_thread->Wait.return_code = STATUS_SUCCESSFUL;
>>>>>> }
>>>>>>
>>>>>> _______________________________________________
>>>>>> devel mailing list
>>>>>> devel at rtems.org
>>>>>> http://lists.rtems.org/mailman/listinfo/devel
>>>>>
>>>>>
>>>
>>> Hello,
>>> Sorry for the long break, but my university examinations had put me off
>>> track. A quick review of the work done so far:
>>> The tests where inter-stack access takes place were identified (the test
>>> report is attached in this thread) and the mechanisms concerned were made
>>> compatible with thread stack protection. The current method of disabling
>>> memory protection for these regions uses _Thread_Iterate(), which is bad
>>> for performance, so I have modified it to disable memory protection
>>> solely for 'the_thread'. The issue that remains is user extension
>>> iterators, in particular nested iterators. My idea is to disable memory
>>> protection during iteration at the places where inter-stack access takes
>>> place. The problem is determining the region for which memory protection
>>> needs to be disabled. One way around this would be to disable memory
>>> protection for all the stacks in a blanket manner. The other possibility
>>> is to access the last blocked thread and disable memory protection just
>>> for that thread's stack (since inter-stack access to the previously
>>> blocked thread takes place during iteration). How can I get access to the
>>> queue of blocked threads? And is this method feasible?
>>>
>>
>> Is it possible to identify the reasons that threads fail this check? I
>> suspect there are a few places where disabling checks entirely for the
>> scope of an RTEMS method is needed. Running the stack usage report would
>> seem to be a case for that.
>>
>
> Ok, this is an area that I have not looked into.
>
>
>>
>> But other cases, such as writing a method result like a message buffer
>> onto a stack, might need access to a specific task's stack just long
>> enough to write the result.
>>
>>
> I was able to identify these regions; Sebastian had provided me with a
> nice patch
> <https://lists.rtems.org/pipermail/devel/2020-November/063354.html> that
> fixed cases like these.
>
Ok. There is a lot to remember. :)
>
>> Random cases in applications can't be generally solved, but RTEMS itself
>> is a finite set of code. Identifying the reasons RTEMS needs access and
>> handling those might eliminate most of these. Fix one case and multiple
>> tests may pass.
>>
>> Also important to know when these cases occur in RTEMS.
>>
>> Just a thought.
>>
>
> I believe the test report documents most of the important cases where
> inter-stack access takes place 'inside' the RTEMS kernel code. The case
> that remains to be addressed is that of user extension iterators.
>
Thinking out loud here. Either this can be dealt with by disabling
protection for the threads passed into the iterator for the entire loop, or
it can be disabled around each call, or it can be left to the
implementation of the function invoked.
It is not guaranteed that an extension needs stack access, but many
actually do. I'm thinking of the thread stack usage extensions, trace
extensions, and capture engine extensions. I think only one of those needs
access to a thread stack.
If a review of all the thread extensions shows that only a minority need
protection disabled, then I would lean toward it being the responsibility
of the function invoked to do it. That certainly limits the window of
disabled protection to the most precise scope.
At some point it becomes the responsibility of the application to know it
is sharing data like this, and user extensions are by definition user code,
even if provided by RTEMS.
>
>
>>
>> --joel
>>
>>