Project for GSoC 2020

Sebastian Huber sebastian.huber at embedded-brains.de
Mon Mar 2 17:25:44 UTC 2020


----- On Mar 2, 2020, at 17:44, Gedare Bloom gedare at rtems.org wrote:

> On Mon, Mar 2, 2020 at 9:37 AM Joel Sherrill <joel at rtems.org> wrote:
>>
>>
>>
>> On Mon, Mar 2, 2020 at 10:12 AM Gedare Bloom <gedare at rtems.org> wrote:
>>>
>>> On Mon, Mar 2, 2020 at 9:05 AM Joel Sherrill <joel at rtems.org> wrote:
>>> >
>>> >
>>> >
>>> > On Mon, Mar 2, 2020 at 9:33 AM Gedare Bloom <gedare at rtems.org> wrote:
>>> >>
>>> >> On Sat, Feb 29, 2020 at 2:58 PM Utkarsh Rai <utkarsh.rai60 at gmail.com> wrote:
>>> >> >
>>> >> > I have gone through the details of the project adding memory protection using
>>> >> > MMU and have a few questions and observations regarding the same-
>>> >> >
>>> >> > 1. Is this a project for which someone from the community would be willing to
>>> >> > mentor for GSoC?
>>> >>
>>> >> Yes, Peter Dufault has expressed interest. There are also several
>>> >> generally interested parties that may like to stay in the loop.
>>> >>
>>> >> > 2. As far as I could understand by looking into the RTEMS tree, ARM uses static
>>> >> > initialization: it first invalidates the cache and TLB entries and then
>>> >> > performs initialization by setting up a 1-to-1 mapping of parts of the address
>>> >> > space. In my view, static initialization of the MMU is a generic enough
>>> >> > method that it can be used on most architectures.
>>> >>
>>> >> Yes, it should be. That is how most architectures will do it, if they
>>> >> need to. Some might disable the MMU/MPU and not bother, then there is
>>> >> some work to do to figure out how to enable the static/init-time
>>> >> memory map.
>>> >>
>>> >> > 3. For thread stack protection, I believe two approaches are worth
>>> >> > considering: stack guard protection, or verification of stack canaries,
>>> >> > where on each context switch the OS checks whether the canary values are
>>> >> > still intact. I still only have a
>>> >> > high-level idea of both of these approaches and will be looking into their
>>> >> > implementation details, so I would appreciate your feedback.
>>> >> >
>>> >>
>>> >> Thread stack protection is different from stack overflow protection.
>>> >> We do have some stack overflow checking that can be enabled. The
>>> >> thread stack protection means you would have separate MMU/MPU region
>>> >> for each thread's stack, so on a context switch you would enable the
>>> >> heir thread's stack region and disable the executing thread's region.
>>> >> This way, different threads can't access each other's stacks directly.
>>> >
>>> >
>>> > FWIW the thread stack allocator/deallocator plugin support was originally
>>> > added for a user who allocated fixed size stacks to all threads and used
>>> > the MMU to invalidate the addresses immediately before and after each
>>> > stack area.
>>> >
>>> > Another thing to consider is the semantics of protecting a thread. Since
>>> > RTEMS is POSIX single process, multi-threaded, there is an assumption
>>> > that all data, bss, heap, and stack memory is accessible by all threads.
>>> > This means you can protect for out of area writes/reads but can't generally
>>> > protect from one thread accessing another thread's memory. This may
>>>
>>> Right: thread stack protection must be application-configurable.
>>>
>>> > sound like it doesn't happen often but anytime data is returned from a
>>> > blocking call, the contents are copied from one thread's address space
>>> > to another. Message queue receives are one example. I suspect all
>>> > blocking reads do this (sockets, files, etc.).
>>> >
>>> It should not happen inside rtems unless the user sends thread-stack
>>> pointers via API calls, so the documentation must make it clear: user
>>> beware!
>>
>>
>> Yep.  Pushing this to the limit could break code in weird, hard
>> to understand ways. The fault handler may have to give a hint. :)
>>
>>>
>>> It may eventually be worth considering (performance-degrading)
>>> extensions to the API that copy buffers between thread contexts.
>>
>>
>> This might be challenging to support everywhere. Say you have mmu_memcpy()
>> or whatever. It has to be used inside code RTEMS owns as well as code that
>> RTEMS is leveraging from third party sources. Hopefully you would at least
>> know the source thread (running) and destination thread easily.
>>
> 
> This brings up the interesting design point, which may have been
> mentioned elsewhere, that is to group/pool threads together in the
> same protection domain. This way the application designer can control
> the sharing between threads that should share, while restricting
> others.

Some years ago I added private stacks on Nios II. The patch was relatively small. I think the event and message queue send paths needed some adjustments. Some higher-level code was also broken, e.g. code that creates structures on the stack for request/response handling (bdbuf). Doing this with a user extension was not possible: at some point you have to run without a stack to do the switching.

On ARMv7-M it would be easy to add a thread stack protection zone after the thread stack via the memory protection unit (MPU).

On a recent project I added MMU protection to the heap (ARM 4KiB pages). With this you quickly catch use-after-free and out-of-bounds errors. The problem is that each memory allocation consumes at least 4KiB of RAM.

I am not sure if this area is a good fit for GSoC. You need a lot of hardware-specific knowledge, and generalizations across architectures are difficult.
