Memory Protection project interface details (GSoC 2020)

Gedare Bloom gedare at rtems.org
Tue May 12 20:51:39 UTC 2020


On Tue, May 12, 2020 at 10:58 AM Utkarsh Rai <utkarsh.rai60 at gmail.com> wrote:
>
>
>
> On Tue, May 12, 2020 at 9:27 AM Gedare Bloom <gedare at rtems.org> wrote:
>>
>> On Thu, May 7, 2020 at 9:59 PM Hesham Almatary
>> <hesham.almatary at cl.cam.ac.uk> wrote:
>> >
>> > Hello Utkarsh,
>> >
>> > I'd suggest you don't spend too much effort on setting up BBB
>> > hardware if you haven't already. Debugging on QEMU with GDB is way
>> > easier, and you can consider either qemu-xilinx-zynq-a9 or rpi2 BSPs.
>> > Later, you can move your code to BBB if you want, since both are based
>> > on ARMv7.
>> +1
>>
>> Past work has also used psim successfully, I thought? Or am I mistaken there.
>>
>> >
>> > On Thu, 7 May 2020 at 18:26, Utkarsh Rai <utkarsh.rai60 at gmail.com> wrote:
>> > >
>> > > Hello,
>> > > This is to ensure that all the interested parties are on the same page before I start my project and can give their invaluable feedback.
>> Excellent, thank you for taking the initiative.
>>
>> I'll be taking on the primary mentorship for your project, with
>> support from the co-mentors (Peter, Hesham, Sebastian). For now, I
>> prefer that you continue to maintain your presence on the mailing
>> list. We will establish other forms of communication as needed and
>> will move to IRC meetings once coding begins in earnest.
>>
>> > > My GSoC project, providing user-configurable thread stack protection, requires adding architecture-specific low-level support as well as high-level API support. I will be starting my project with the ARMv7-A MMU (on BBB) since RTEMS already has quite mature support for it. As already mentioned in my proposal, I will focus more on the high-level interface and let it drive whatever further low-level support is needed.
>> > > Once the application uses the MMU for thread stack address generation, each thread will be automatically protected, as the page tables other than that of the executing thread would be made dormant. When users have to share thread stacks, they will have to obtain the stack attributes of the threads to be shared with pthread_attr_getstack(), then get a file descriptor for the memory to be mapped by a call to shm_open(), and finally map this to the stack of the other thread through
>> > > mmap(); this is the most POSIX-compliant way I could think of. At the low level, this means mapping the page table of the thread to be shared into the address space of the executing thread, which is an area where low-level support has to be provided. At the high level, this means providing support for the mmap and shared-memory interfaces, as mmap supports a file by simply
>> > > copying the memory from the file to the destination. For shared-memory objects it can
>> > > provide read/write access but cannot restrict write/read access. One of the areas I have to look into in more detail is the thread context switch, as after every context switch the TLBs need to be flushed and reinitialized lest we get an invalid address for the executing thread. Since the context switch is low-level and architecture-specific, this also has to be provided with more support.
>>
>> This is really dense text. Try to break apart your writing a little
>> bit to help clarify your thoughts.  You should also translate some of
>> your proposal into a wiki page if you haven't started that yet, and a
>> blog post. Both of those will help to focus your thoughts into words.
>>
>> "mapping the page table" is not meaningful to me. I think you mean
>> something like "mapping a page from the page table"?
>>
>> Will the design
>> support sharing task stacks using MPUs with 4 regions? 8?  (It seems
>> challenging to me, but might be possible in some limited
>> configurations. Having support for those kinds of targets might still
>> be useful, with the caveat that sharing stacks is not possible.)
>
>
> I will have to look into this in a bit more detail before I can give you a comprehensive answer.
>
>>
>> The first step is to get a BSP running that has sufficient
>> capabilities for you to test out memory protection with. Do a little
>> bit of digging, but definitely simulation is the way to go.
>
>
> As suggested by Hesham, I have been able to run the qemu-xilinx-zynq-a9 BSP on QEMU, and I have learned how to debug it through GDB.
> The BSP supports memory protection and, as pointed out, once I get it working there I can move my code to other ARMv7 BSPs (RPi, BBB).
>
>>
>> The second step from my perspective is to determine how to introduce
>> strict isolation between task stacks. Don't worry about sharing at
>> this stage, but rather can you completely isolate tasks? Then you can
>> start to poke holes in the isolation.
>
>
>  My understanding is that, to completely isolate the tasks, the tasks and their page tables will be placed in two separate regions (user and system mode respectively). The page tables will be accessed only by the kernel code and not by the application. This will prevent dormant tasks from interfering with the current task through the application code.
>

We don't have privilege separation in RTEMS, and we don't want to
add it in this project. When an application calls into RTEMS there
should be no need for a page table switch. The only time the
protection hardware should be manipulated is at a context switch, and
that should be configurable.
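To make that concrete, here is a minimal sketch of where such an update could live, using the existing thread-switch user extension. The stack_protect_switch() routine is purely hypothetical; the real work would sit in the CPU port, and this is an illustration rather than a proposed implementation.

#include <rtems.h>

/* Hypothetical low-level routine, to be provided by the CPU port/BSP:
 * re-programs the MMU/MPU entry covering the heir's task stack. */
extern void stack_protect_switch( rtems_tcb *executing, rtems_tcb *heir );

/* Runs on every context switch; nothing else touches the protection hw. */
static void stack_protect_switch_extension(
  rtems_tcb *executing,
  rtems_tcb *heir
)
{
  stack_protect_switch( executing, heir );
}

/* Wired in through confdefs.h in the usual way. */
#define CONFIGURE_INITIAL_EXTENSIONS \
  { .thread_switch = stack_protect_switch_extension }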

The way I see it, a page/region should be allocated for each task
stack, and that should be the only entry that changes during a context
switch. The goal is to prevent accidental (non-malicious) task stack
interference. The shm/mmap interface will be used to allow sharing
that entry with other tasks.
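From the application's point of view, the sharing path might look roughly like the sketch below. It only illustrates the POSIX calls involved; the semantics of backing a task stack with a shared-memory object are exactly what the project still has to define. "attr" is assumed to be the pthread_attr_t the creator kept for the target task, and "/taskA_stack" is a made-up name.

#include <pthread.h>
#include <sys/mman.h>
#include <fcntl.h>
#include <unistd.h>
#include <stddef.h>

static void *map_other_task_stack( const pthread_attr_t *attr )
{
  void  *stack_addr;
  size_t stack_size;

  /* 1. Recover the target task's stack address and size from the
   *    attributes its creator kept around. */
  if ( pthread_attr_getstack( attr, &stack_addr, &stack_size ) != 0 )
    return NULL;

  /* 2. Give the stack memory a name as a shared-memory object. */
  int fd = shm_open( "/taskA_stack", O_RDWR | O_CREAT, 0600 );
  if ( fd < 0 || ftruncate( fd, stack_size ) != 0 )
    return NULL;

  /* 3. Map it into the calling task's protected view at the original
   *    stack address. Error handling is trimmed for brevity. */
  return mmap( stack_addr, stack_size, PROT_READ | PROT_WRITE,
               MAP_SHARED | MAP_FIXED, fd, 0 );
}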

In the future we may be able to move toward more comprehensive memory
protection, but I suspect if we try to get it all in one go the scope
will be way too big for you to attempt over the summer.

>> As you say, you'll also need to start to understand the context switch
>> code. Start looking into it to determine where you might think to
>> implement changing the address space of the executing thread. Another
>> challenge is that RTEMS can dispatch to a new task from the interrupt
>> handler, which may cause some problems for you as well to handle.
>
>
>   The above way of separating page tables and tasks should handle the interrupt case, as most interrupts are executed in system mode.
>
No. There is a subtle problem when you dispatch to a new task while
handling an interrupt for a prior task. You need to be prepared to
return to a new address space context. Dispatching from an ISR is
different from context switching, and I think there will be some
problems to deal with there. Further, it is a common design pattern to
share memory between interrupt handlers and application tasks (e.g.,
an event queue) so we need to be careful what we assume is/isn't
available in the interrupt context.

>> Have you figured out where in the code thread stacks are allocated?
>> How do you plan to make the thread stacks known to other threads?
>
>
> If I understand your question correctly, this would be useful during stack sharing. Stack sharing, as I described earlier, would be done through explicit calls to mmap() and shm_open(). To obtain the thread stack size and address, the user will first have to call pthread_attr_getstack*(). This can return either the attributes the user explicitly set during pthread_create() or, as the RTEMS docs describe, the memory allocated from the RTEMS workspace area. I will have to look into this in more detail, as it is BSP-specific and any stack configuration will have to be done after the initialization of the MPU.
>
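As a side note on the allocation question: confdefs.h already lets an application substitute its own task stack allocator, which may be a natural place to hand out page-aligned stacks that the MMU can protect individually. A minimal sketch, assuming posix_memalign() and a 4 KiB page size purely for illustration:

#include <stdlib.h>
#include <stddef.h>

#define STACK_PAGE_SIZE ( (size_t) 4096 )  /* illustrative small-page size */

static void *protected_stack_allocate( size_t size )
{
  void *stack = NULL;

  /* Round up to whole pages so every stack owns its pages outright. */
  size = ( size + STACK_PAGE_SIZE - 1 ) & ~( STACK_PAGE_SIZE - 1 );

  if ( posix_memalign( &stack, STACK_PAGE_SIZE, size ) != 0 )
    return NULL;

  return stack;
}

static void protected_stack_free( void *addr )
{
  free( addr );
}

/* These two confdefs.h hooks exist today. */
#define CONFIGURE_TASK_STACK_ALLOCATOR    protected_stack_allocate
#define CONFIGURE_TASK_STACK_DEALLOCATOR  protected_stack_free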
>>
>> TLB shootdown can be extremely expensive. Try to find ways to optimize
>> that cost earlier rather than later. (One of those cases where
>> premature optimization will be acceptable.) Tagged TLB architectures
>> or those with "superpages" may incur less overhead if you can
>> selectively shoot down the entry (or entries) used for task stacks.
>
>
> Added to my TO-DO list.
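For the ARMv7-A case, selective shoot-down might look something like the sketch below: only the pages of the outgoing task's stack are invalidated (TLBIMVA) instead of flushing the whole TLB. ASID handling is omitted and the routine would really belong in the CPU port, so treat it as an illustration only.

#include <stdint.h>
#include <stddef.h>

#define STACK_PAGE_SIZE 4096u

/* Invalidate a single unified-TLB entry by its virtual address
 * (ARMv7-A TLBIMVA; ASID handling omitted for brevity). */
static inline void tlb_invalidate_mva( uint32_t mva )
{
  __asm__ volatile ( "mcr p15, 0, %0, c8, c7, 1" : : "r" ( mva ) : "memory" );
}

static void tlb_invalidate_task_stack( uintptr_t stack_base, size_t stack_size )
{
  for ( uintptr_t a = stack_base; a < stack_base + stack_size;
        a += STACK_PAGE_SIZE ) {
    tlb_invalidate_mva( (uint32_t) a );
  }

  /* Ensure the TLB maintenance completes before the heir runs. */
  __asm__ volatile ( "dsb\n\tisb" : : : "memory" );
}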
>>
>>
>> A final thought: a method to configure this support is necessary.
>> Configuration is undergoing some heavy changes lately, and
>> application-level configuration is going to be completely different in
>> rtems6. You may want to consider raising a new thread with CC to
>> Sebastian to get his input on the best way to configure something
>> like this, now and in the future. I would have leaned toward a
>> high-level configure switch (--enable-task-protection) in the past,
>> but now I don't know. In any case, this capability should be
>> disabled by default due to the extra overhead.
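Purely as a strawman for that discussion, an application-level opt-in could eventually look like the following; neither the macro nor the configure switch exists today.

/* Hypothetical opt-in; not an existing RTEMS configuration option. */
#define CONFIGURE_THREAD_STACK_PROTECTION

#define CONFIGURE_INIT
#include <rtems/confdefs.h>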
>>
>>
>> Gedare
>>
>> > > Kindly provide your feedback if I have missed something or I have a wrong idea about it.
>> > >
>> > > Regards,
>> > > Utkarsh Rai.
>> > >
>> > > _______________________________________________
>> > > devel mailing list
>> > > devel at rtems.org
>> > > http://lists.rtems.org/mailman/listinfo/devel
>> > _______________________________________________
>> > devel mailing list
>> > devel at rtems.org
>> > http://lists.rtems.org/mailman/listinfo/devel

