Compiling POSIX based (hypercall + Rump Kernel) to run above RTEMS/POSIX issue
Joel Sherrill
joel.sherrill at oarcorp.com
Mon Mar 23 16:41:41 UTC 2015
On March 23, 2015 10:31:10 AM CDT, Antti Kantee <pooka at iki.fi> wrote:
>On 23/03/15 14:30, Hesham ALMatary wrote:
>> Hi all,
>>
>> Thanks Antti for your reply. I'm reviving this thread as Dr. Joel
>> suggested.
>>
>> On Thu, Feb 26, 2015 at 1:11 AM, Antti Kantee <pooka at iki.fi> wrote:
>>> Hesham,
>>>
>>> [sorry for the slight delay, there were some mailing list snafus
>>> on this end]
>
>[aaaand due to those problems, the rump kernel mailing list migrated
>meanwhile. I'm sending my reply to the new list at
>rumpkernel-users at freelists.org. Everyone replying please use the
>new list.]
>
>>> There are essentially 3 choices:
>>>
>>> 1) teach buildrump.sh to run ./configure like the RTEMS
>>> cross-compiler expects it to be run (assuming it's somehow possible
>>> to run ./configure scripts against the RTEMS xcompiler; I'm not
>>> familiar with RTEMS so I can't comment)
>>> 2) hardcode the values for RTEMS. I don't really prefer this
>>> option, since it adds to maintenance overhead and potential
>>> breakage; the reason I originally added the autoconf goop was to
>>> get rid of the long-time hardcoded values causing the maintenance
>>> load.
>>> 3) skip the POSIX hypervisor entirely and go directly for
>>> implementing the hypercalls on the low level
>>>
>> I agree with this choice; however, Dr. Joel sees integrating Rump
>> Kernels above RTEMS/POSIX as the more stable route. Antti may have
>> some words here.
>
>There are actually two points you need to integrate to, since the
>stack will look something like this:
>
>RTEMS application
>========1========
>rump kernel
>========2========
>RTEMS core
>
>By "application" I mean anything that would want to use drivers
>provided by a rump kernel. It can be a regular POSIX application, in
>which case "1" would be system calls, or for example the networking
>stack wanting to use some NIC device driver. What "1" looks like will
>vary hugely based on that. For syscalls it's nice and stable, but for
>any integration point inside the kernel, not so much. For example,
>when using rump kernels to run file system drivers as
>microkernel-style servers on NetBSD, "1" is the VFS/vnode interface.
>In that particular scenario, the VFS/vnode interface changing doesn't
>cause trouble because both the code using it and the code providing
>it are hosted in the NetBSD tree. The situation would be different if
>you want to interface from RTEMS at that level.
I think that's the primary use case for RTEMS users: accessing filesystems and other services from otherwise "normal" RTEMS applications. So figuring out how to do that is a goal.
>By "core" I mean anything that provides the necessary support for a
>rump kernel to run. It is absolutely true that using POSIX as the
>interface to the core is interface-wise more stable than hooking
>directly into the RTEMS entrails -- I assume RTEMS entrails can
>change. However, the interfaces required for implementing "2" (a.k.a.
>the rump kernel hypercall interface) are so simple that I wouldn't
>consider that stability the decisive metric.
The way RTEMS is structured, the APIs for concurrency and synchronization are thin wrappers (direct calls) over core services. Since there is no separation of user and kernel space, there isn't the overhead you might find in a traditional POSIX system.
Also, you have to provide a wrapper above our core because the core is really just a collection of classes. The APIs combine and tailor those, as well as aggregating them with an object instance so they have IDs and optional names.
There may be value in bypassing this or providing a fourth API set for RTEMS as a peer, but that would still just be a façade over core services. And it would end up being special code, and more of it than a POSIX wrapper.
Personally, I always viewed this as a way to get another network stack, filesystems, etc., and make them part of regular RTEMS capabilities. How it is implemented under the hood is secondary to what the user gains that we don't have otherwise. So seamless integration into our system call layer is valuable.
>Anyway, if we are talking in the context of a summer of code project,
>I don't think testing both approaches for "2" is an unspeakable
>amount of work, especially given that, barring the code compilation
>issues, running on top of POSIX should already just work. The more
>open-ended question is what to do with "1".
>
> - antti
--joel