Project for GSoC 2020
Utkarsh Rai
utkarsh.rai60 at gmail.com
Sun Mar 8 02:09:24 UTC 2020
I suspect that the same set of APIs won't work across all architectures;
they would need to be implemented individually for each architecture, and a
separate set of documentation would have to be provided for every
architecture the solution covers.
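To make this concrete, here is a minimal sketch of the split I have in mind:
one documented architecture-neutral header, with a backend per CPU port.
Every name below is hypothetical, not an existing RTEMS interface.

    /* stackprotection.h -- hypothetical architecture-neutral interface */
    #include <stddef.h>
    #include <stdbool.h>

    typedef struct {
      void   *begin;     /* start of the thread stack to protect */
      size_t  size;      /* size of the stack in bytes */
      bool    writable;  /* access policy for the region */
    } stack_protect_region;

    /*
     * Each CPU port would supply its own implementation: page tables on
     * ARMv7-A, MPU regions on ARMv7-M, and a stub that reports an error
     * on targets without any MMU/MPU support.
     */
    int _CPU_Stack_protect( const stack_protect_region *region );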
If this is not implementable within the length of GSoC, may I instead take up
this ticket <https://devel.rtems.org/ticket/3222>? I believe I have a fair
idea of how to implement it, and it seems doable within the length of GSoC.
On Sun, Mar 8, 2020 at 10:06 AM Gedare Bloom <gedare at rtems.org> wrote:
> On Sat, Mar 7, 2020 at 12:00 AM Utkarsh Rai <utkarsh.rai60 at gmail.com>
> wrote:
> >
> > Sorry for the late reply; I was busy with my college tests.
> >
> > I looked into the MMU implementations of a few other RTOSes, and Zephyr
> > has an interesting memory domain implementation for thread stack
> > protection. Threads are grouped into memory domains ('struct
> > k_mem_domain'), and each domain holds a set of 'struct k_mem_partition'
> > entries that record the starting address, size, and access policy of
> > each partition; the domain API also provides calls to add and remove
> > partitions. Could this be implemented in RTEMS? I am not sure about the
> > POSIX compliance of this approach.
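> > To illustrate, this is roughly how Zephyr's API is used (quoting from
> > memory, so macro names and signature details may differ between Zephyr
> > versions):
> >
> >     #include <kernel.h>
> >
> >     /* Placeholder buffer; real partitions have architecture-specific
> >        alignment and size requirements. */
> >     static char __aligned(1024) stack_area[1024];
> >
> >     K_MEM_PARTITION_DEFINE(stack_part, stack_area, sizeof(stack_area),
> >                            K_MEM_PARTITION_P_RW_U_RW);
> >
> >     static struct k_mem_domain dom;
> >     static struct k_mem_partition *parts[] = { &stack_part };
> >
> >     void protect_thread(k_tid_t tid)
> >     {
> >         /* Collect the partitions into a domain and attach the thread;
> >            tid's view of memory is then limited accordingly. */
> >         k_mem_domain_init(&dom, ARRAY_SIZE(parts), parts);
> >         k_mem_domain_add_thread(&dom, tid);
> >     }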
> >
> > As was pointed out, a lot of the implementation in this project will be
> > architecture-specific. Would it be feasible to take up the implementation
> > for a particular architecture (specifically ARM or x86, as I am familiar
> > with both) as a GSoC project?
> >
> Probably. But any design has to accommodate the many different kinds
> of MMU/MPU support that are available, and should also work when there
> is no support (maybe with a warning or an error if an application
> tries to configure thread stack protection for an unsupported
> architecture/BSP).
>
> >
> > On Mon, Mar 2, 2020 at 10:55 PM Sebastian Huber <
> sebastian.huber at embedded-brains.de> wrote:
> >>
> >> ----- On Mar 2, 2020, at 17:44, Gedare Bloom gedare at rtems.org wrote:
> >>
> >> > On Mon, Mar 2, 2020 at 9:37 AM Joel Sherrill <joel at rtems.org> wrote:
> >> >>
> >> >>
> >> >>
> >> >> On Mon, Mar 2, 2020 at 10:12 AM Gedare Bloom <gedare at rtems.org>
> wrote:
> >> >>>
> >> >>> On Mon, Mar 2, 2020 at 9:05 AM Joel Sherrill <joel at rtems.org>
> wrote:
> >> >>> >
> >> >>> >
> >> >>> >
> >> >>> > On Mon, Mar 2, 2020 at 9:33 AM Gedare Bloom <gedare at rtems.org>
> wrote:
> >> >>> >>
> >> >>> >> On Sat, Feb 29, 2020 at 2:58 PM Utkarsh Rai <
> utkarsh.rai60 at gmail.com> wrote:
> >> >>> >> >
> >> >>> >> > I have gone through the details of the project on adding memory
> >> >>> >> > protection using the MMU, and I have a few questions and
> >> >>> >> > observations:
> >> >>> >> >
> >> >>> >> > 1. Is this a project for which someone from the community would
> >> >>> >> > be willing to mentor for GSoC?
> >> >>> >>
> >> >>> >> Yes, Peter Dufault has expressed interest. There are also several
> >> >>> >> generally interested parties that may like to stay in the loop.
> >> >>> >>
> >> >>> >> > 2. As far as I could understand from the RTEMS tree, ARM uses
> >> >>> >> > static initialization: it first invalidates the cache and TLB
> >> >>> >> > blocks and then sets up a one-to-one mapping of parts of the
> >> >>> >> > address space. In my view, static initialization of the MMU is
> >> >>> >> > a generic enough method that it can be used on most
> >> >>> >> > architectures.
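> >> >>> >> > For instance, if I read the BSP code correctly, the ARM BSPs
> >> >>> >> > describe this one-to-one mapping in a static table along these
> >> >>> >> > lines (the addresses here are made up; header and flag names
> >> >>> >> > are from memory):
> >> >>> >> >
> >> >>> >> >     #include <bsp/arm-cp15-start.h>
> >> >>> >> >
> >> >>> >> >     const arm_cp15_start_section_config
> >> >>> >> >     arm_cp15_start_mmu_config_table[] = {
> >> >>> >> >       { /* normal RAM, mapped 1:1, cached */
> >> >>> >> >         .begin = 0x00000000U,
> >> >>> >> >         .end   = 0x10000000U,
> >> >>> >> >         .flags = ARMV7_MMU_READ_WRITE_CACHED
> >> >>> >> >       },
> >> >>> >> >       { /* device registers, mapped 1:1, uncached */
> >> >>> >> >         .begin = 0x48000000U,
> >> >>> >> >         .end   = 0x48100000U,
> >> >>> >> >         .flags = ARMV7_MMU_DEVICE
> >> >>> >> >       }
> >> >>> >> >     };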
> >> >>> >>
> >> >>> >> Yes, it should be. That is how most architectures will do it, if
> >> >>> >> they need to. Some might disable the MMU/MPU and not bother; then
> >> >>> >> there is some work to do to figure out how to enable the
> >> >>> >> static/init-time memory map.
> >> >>> >>
> >> >>> >> > 3. For thread stack protection, I believe two approaches are
> >> >>> >> > worth considering: the stack guard protection approach, or
> >> >>> >> > verification of stack canaries, whereby on each context switch
> >> >>> >> > the OS checks whether the values associated with the canaries
> >> >>> >> > are still intact. I still have only a high-level idea of both
> >> >>> >> > approaches and will be looking into their implementation
> >> >>> >> > details; I would welcome your feedback on them.
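> >> >>> >> > The canary variant would be cheap to prototype; a sketch with
> >> >>> >> > entirely hypothetical names:
> >> >>> >> >
> >> >>> >> >     #include <stdbool.h>
> >> >>> >> >     #include <stdint.h>
> >> >>> >> >
> >> >>> >> >     /* Hypothetical value planted at the low end of every
> >> >>> >> >        thread stack when the thread is started. */
> >> >>> >> >     #define STACK_CANARY 0xDEADBEEFUL
> >> >>> >> >
> >> >>> >> >     /* Called on each context switch for the outgoing thread;
> >> >>> >> >        a clobbered canary means the stack overflowed. */
> >> >>> >> >     static bool canary_intact(const uint32_t *stack_low_end)
> >> >>> >> >     {
> >> >>> >> >         return *stack_low_end == STACK_CANARY;
> >> >>> >> >     }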
> >> >>> >> >
> >> >>> >>
> >> >>> >> Thread stack protection is different from stack overflow
> >> >>> >> protection. We do have some stack overflow checking that can be
> >> >>> >> enabled. Thread stack protection means you would have a separate
> >> >>> >> MMU/MPU region for each thread's stack, so on a context switch
> >> >>> >> you would enable the heir thread's stack region and disable the
> >> >>> >> executing thread's region. This way, different threads can't
> >> >>> >> access each other's stacks directly.
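> >> >>> >> Conceptually, the switch path would gain something like the
> >> >>> >> following (a sketch only; both helper names are hypothetical):
> >> >>> >>
> >> >>> >>     #include <rtems/score/thread.h>
> >> >>> >>
> >> >>> >>     void _Stack_Protection_Switch(
> >> >>> >>       Thread_Control *executing,
> >> >>> >>       Thread_Control *heir
> >> >>> >>     )
> >> >>> >>     {
> >> >>> >>       /* Revoke the outgoing thread's stack mapping and grant
> >> >>> >>          the incoming one's; hypothetical helpers that would
> >> >>> >>          also flush any stale TLB entries. */
> >> >>> >>       _Stack_Mapping_Disable( executing );
> >> >>> >>       _Stack_Mapping_Enable( heir );
> >> >>> >>     }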
> >> >>> >
> >> >>> >
> >> >>> > FWIW the thread stack allocator/deallocator plugin support was
> >> >>> > originally added for a user who allocated fixed-size stacks to all
> >> >>> > threads and used the MMU to invalidate the addresses immediately
> >> >>> > before and after each stack area.
> >> >>> >
> >> >>> > Another thing to consider is the semantics of protecting a
> >> >>> > thread. Since RTEMS is POSIX single process, multi-threaded, there
> >> >>> > is an assumption that all data, bss, heap, and stack memory is
> >> >>> > accessible by all threads. This means you can protect against
> >> >>> > out-of-area writes/reads but can't generally protect one thread's
> >> >>> > memory from access by another thread. This may
> >> >>>
> >> >>> Right: thread stack protection must be application-configurable.
> >> >>>
> >> >>> > sound like it doesn't happen often, but anytime data is returned
> >> >>> > from a blocking call, the contents are copied from one thread's
> >> >>> > address space to another. Message queue receives are one example.
> >> >>> > I suspect all blocking reads do this (sockets, files, etc.).
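> >> >>> > To make the hazard concrete (regular Classic API; only the
> >> >>> > identifiers are made up):
> >> >>> >
> >> >>> >     #include <rtems.h>
> >> >>> >
> >> >>> >     rtems_task receiver_task(rtems_task_argument arg)
> >> >>> >     {
> >> >>> >       rtems_id qid = (rtems_id) arg;
> >> >>> >       char     buf[64];   /* lives on this thread's stack */
> >> >>> >       size_t   len;
> >> >>> >
> >> >>> >       /* While this thread blocks here, the *sending* thread's
> >> >>> >          rtems_message_queue_send() copies the message straight
> >> >>> >          into buf, i.e. it writes into this thread's stack. With
> >> >>> >          strict per-thread stack protection, that copy faults. */
> >> >>> >       (void) rtems_message_queue_receive(
> >> >>> >         qid, buf, &len, RTEMS_WAIT, RTEMS_NO_TIMEOUT );
> >> >>> >     }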
> >> >>> >
> >> >>> It should not happen inside RTEMS unless the user sends thread-stack
> >> >>> pointers via API calls, so the documentation must make it clear: user
> >> >>> beware!
> >> >>
> >> >>
> >> >> Yep. Pushing this to the limit could break code in weird, hard
> >> >> to understand ways. The fault handler may have to give a hint. :)
> >> >>
> >> >>>
> >> >>> It may eventually be worth considering (performance-degrading)
> >> >>> extensions to the API that copy buffers between thread contexts.
> >> >>
> >> >>
> >> >> This might be challenging to support everywhere. Say you have
> >> >> mmu_memcpy() or whatever. It has to be used inside code RTEMS owns as
> >> >> well as code that RTEMS is leveraging from third-party sources.
> >> >> Hopefully you would at least know the source thread (running) and
> >> >> destination thread easily.
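> >> >> Such a helper might look roughly like this; purely a sketch, none of
> >> >> these calls exist today:
> >> >>
> >> >>     #include <string.h>
> >> >>     #include <rtems/score/thread.h>
> >> >>
> >> >>     /* Hypothetical: temporarily open a window into the destination
> >> >>        thread's stack region, copy, then close it again. */
> >> >>     void *mmu_memcpy(
> >> >>       Thread_Control *dest_thread,
> >> >>       void           *dest,
> >> >>       const void     *src,
> >> >>       size_t          n
> >> >>     )
> >> >>     {
> >> >>       _Stack_Protection_Open( dest_thread );   /* hypothetical */
> >> >>       memcpy( dest, src, n );
> >> >>       _Stack_Protection_Close( dest_thread );  /* hypothetical */
> >> >>       return dest;
> >> >>     }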
> >> >>
> >> >
> >> > This brings up an interesting design point, which may have been
> >> > mentioned elsewhere: group/pool threads together in the same
> >> > protection domain. This way the application designer can control
> >> > the sharing between threads that should share, while restricting
> >> > others.
> >>
> >> Some years ago I added private stacks on Nios II. The patch was
> >> relatively small. I think the event and message queue send needed some
> >> adjustments. Also some higher-level code was broken, e.g. structures
> >> created on the stack with some sort of request/response handling
> >> (bdbuf). Doing this with a user extension was not possible. At some
> >> point in time you have to run without a stack to do the switching.
> >>
> >> On ARMv7-M it would be easy to add a thread stack protection zone after
> >> the thread stack via the memory protection unit.
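> >> For example, with the CMSIS register names (a sketch; the region
> >> number and size are arbitrary here):
> >>
> >>     /* Assumes a CMSIS device header (e.g. core_cm4.h) is in scope.
> >>        guard_addr is the 32-byte-aligned low end of the thread stack;
> >>        an overflow into the guard region then faults immediately. */
> >>     static void install_stack_guard(uint32_t guard_addr)
> >>     {
> >>       MPU->RNR  = 7U;                        /* pick a free region    */
> >>       MPU->RBAR = guard_addr;                /* region base address   */
> >>       MPU->RASR = (4U << MPU_RASR_SIZE_Pos)  /* 2^(4+1) = 32 bytes    */
> >>                 | MPU_RASR_ENABLE_Msk;       /* AP == 0: no access    */
> >>       __DSB();                               /* ensure settings apply */
> >>       __ISB();
> >>     }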
> >>
> >> On a recent project I added MMU protection to the heap (ARM 4KiB
> >> pages). You quickly catch use-after-free and out-of-bounds errors with
> >> this. The problem is that each memory allocation consumes at least 4KiB
> >> of RAM.
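> >> To put a number on it: a 16-byte allocation still occupies a full 4KiB
> >> page, an overhead of more than 250x, so allocation-heavy code quickly
> >> exhausts RAM.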
> >>
> >> I am not sure if this area is good for GSoC. You need a lot of
> >> hardware-specific knowledge, and generalizations across architectures
> >> are difficult.
>