Compiling POSIX based (hypercall + Rump Kernel) to run above RTEMS/POSIX issue

Gedare Bloom gedare at rtems.org
Mon Mar 23 16:59:53 UTC 2015


On Mon, Mar 23, 2015 at 12:41 PM, Joel Sherrill
<joel.sherrill at oarcorp.com> wrote:
>
>
> On March 23, 2015 10:31:10 AM CDT, Antti Kantee <pooka at iki.fi> wrote:
>>On 23/03/15 14:30, Hesham ALMatary wrote:
>>> Hi all,
>>>
>>> Thanks Antti for your reply. I'm reviving this thread as Dr. Joel
>>> suggested.
>>>
>>> On Thu, Feb 26, 2015 at 1:11 AM, Antti Kantee <pooka at iki.fi> wrote:
>>>> Hesham,
>>>>
>>>> [sorry for the slight delay, there were some mailing list snafus on
>>>> this end]
>>
>>[aaaand due to those problems, the rump kernel mailing list migrated
>>meanwhile.  I'm sending my reply to the new list at
>>rumpkernel-users at freelists.org.  Everyone replying please use the new
>>list.]
>>
>>>> There are essentially 3 choices:
>>>>
>>>> 1) teach buildrump.sh to run ./configure like the RTEMS cross-compiler
>>>> expects it to be run (assuming it's somehow possible to run ./configure
>>>> scripts against the RTEMS xcompiler; I'm not familiar with RTEMS so I
>>>> can't comment)
>>>> 2) hardcode the values for RTEMS.  I don't really prefer this option,
>>>> since it adds to maintenance overhead and potential breakage; the reason
>>>> I originally added the autoconf goop was to get rid of the long-time
>>>> hardcoded values causing the maintenance load.
>>>> 3) skip the POSIX hypervisor entirely and go directly for implementing
>>>> the hypercalls on the low level
>>>>
>>> I agree with this choice; however, Dr. Joel sees that integrating Rump
>>> Kernels above RTEMS/POSIX would be more stable. Antti may have some
>>> words here.
>>
>>There are actually two points you need to integrate to, since the stack
>>will look something like this:
>>
>>RTEMS application
>>========1========
>>rump kernel
>>========2========
>>RTEMS core
>>
Great examples. If the rump kernel should be accessible from within the
RTEMS core, we need to provide a non-POSIX implementation, because the
RTEMS software architecture requires avoiding POSIX calls within the
core. (Although such calls are technically possible since RTEMS is a
SASOS, we don't allow them.)
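
To make "2" concrete, here is a very rough sketch of what one hypercall
could look like if mapped directly onto the RTEMS Classic API instead of
pthreads. The rumpuser prototypes and the rumpuser_mtx layout below are
only my approximation of the hypercall interface, not the definitions
from rumpuser.h:

/* Sketch only: a rump kernel mutex hypercall backed by an RTEMS Classic
 * API semaphore rather than a pthread mutex.  Prototypes and the struct
 * layout are assumptions for illustration; error handling is omitted. */
#include <rtems.h>
#include <stdlib.h>

struct rumpuser_mtx {                   /* hypervisor-side object */
        rtems_id sem;
};

void
rumpuser_mutex_init(struct rumpuser_mtx **mtxp, int flags)
{
        struct rumpuser_mtx *mtx = malloc(sizeof(*mtx));

        (void)flags;                    /* locking flags ignored in this sketch */
        rtems_semaphore_create(rtems_build_name('R', 'U', 'M', 'P'), 1,
            RTEMS_BINARY_SEMAPHORE | RTEMS_PRIORITY | RTEMS_INHERIT_PRIORITY,
            0, &mtx->sem);
        *mtxp = mtx;
}

void
rumpuser_mutex_enter(struct rumpuser_mtx *mtx)
{
        rtems_semaphore_obtain(mtx->sem, RTEMS_WAIT, RTEMS_NO_TIMEOUT);
}

void
rumpuser_mutex_exit(struct rumpuser_mtx *mtx)
{
        rtems_semaphore_release(mtx->sem);
}

Each hypercall ends up as a thin mapping onto primitives the core already
has, which is why the interface churn Antti mentions shouldn't be a big
maintenance burden on our side.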

>>By "application" I mean anything that would want to use drivers
>>provided
>>by a rump kernel.  It can be a regular POSIX application, in which case
>>
>>"1" would be system calls, or for example the networking stack wanting
>>to use some NIC device driver.  What "1" looks like will vary hugely
>>based on that.  For syscalls it's nice and stable, but for any
>>integration point inside the kernel, not so much.  For example, when
>>using rump kernels to run file system drivers as microkernel style
>>servers on NetBSD, "1" is the VFS/vnode interface.  In that particular
>>scenario, the VFS/vnode interface changing doesn't cause trouble
>>because
>>both the code using it and providing it is hosted in the NetBSD tree.
>>The situation would be different if you want to interface from RTEMS on
>>
>>that level.
>
> I think that's the primary use case for RTEMS users: accessing filesystems and other services from otherwise "normal" RTEMS applications. So figuring out how to do that is a goal.
>
In this case, we don't really want to go back through the POSIX layer,
in case the user accesses RTEMS services in a different way, or in case
we want to access the rump kernel from within the RTEMS "core" or
architecture-specific pieces.
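
For the application-facing point ("1"), the most direct route that avoids
the POSIX layer entirely would be the rump system call interface. A
minimal sketch of an RTEMS task reading a file served by a rump kernel
file system might look roughly like this (the headers are from the rump
kernel client interface as I understand it; the path is made up):

/* Rough illustration: use a rump kernel file system driver through the
 * rump syscall interface, bypassing any POSIX emulation layer. */
#include <rump/rump.h>
#include <rump/rump_syscalls.h>

void
read_from_rump_fs(void)
{
        char buf[64];
        int fd;

        rump_init();                    /* bootstrap the rump kernel */
        fd = rump_sys_open("/data/config.txt", 0 /* O_RDONLY */);
        if (fd >= 0) {
                rump_sys_read(fd, buf, sizeof(buf));
                rump_sys_close(fd);
        }
}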

>
>>By "core" I mean anything that provides the necessary support for a
>>rump
>>kernel to run.  It is absolutely true that using POSIX as the interface
>>
>>to the core is interface-wise more stable than hooking directly into
>>the
>>RTEMS entrails -- I assume RTEMS entrails can change.  However, the
>>interfaces required for implementing "2" (a.k.a. the rump kernel
>>hypercall interface) are so simple that I wouldn't consider that
>>stability the decisive metric.
>
> The way RTEMS is structured, the APIs for concurrency and synchronization are thin wrappers (direct calls) over core services. Since there is no separation of user and kernel space, there isn't the overhead you might find in a traditional POSIX system.
>
> Also, you have to provide a wrapper above our core because the core is really just a collection of classes. The APIs combine and tailor those classes, and aggregate them into object instances so they have IDs and optional names.
>
> There may be value in bypassing this or providing a fourth API set for RTEMS as a peer, but that would still just be a façade over core services. And it would end up being special code, and more of it than a POSIX wrapper.
>
> Personally, I always viewed this as a way to get another network stack, filesystems, etc., and make them part of regular RTEMS capabilities. How it is implemented under the hood is secondary to what the user gains that we don't have otherwise. So seamless integration into our system call layer is valuable.
>
To get the seamless integration, we'll need to provide something closer
to the "fourth API set", but not expose it as an API. The hypercall
interface should be accessible within the RTEMS core, which means it
can't just be something that gets layered above POSIX. Users who want
to use a rump kernel directly can go through the POSIX interface, or we
can give them access to the internal RTEMS-based hypercall
implementation if we want.
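
As a sketch of the "seamless" direction, the end state could be an RTEMS
filesystem handler that simply forwards into the rump kernel, so
applications keep using a normal open()/read(). The handler name and
signature below are hypothetical; the real version would have to plug
into the RTEMS filesystem handler tables:

/* Hypothetical glue called by the RTEMS file system layer when an
 * application open()s a path under a rump-backed mount point.  The name
 * and signature are illustrative, not existing RTEMS interfaces. */
#include <sys/types.h>
#include <rump/rump_syscalls.h>

int
rtems_rump_open_handler(const char *path, int oflag, mode_t mode)
{
        /* The application sees an ordinary open(); the real work is done
         * by the NetBSD file system code inside the rump kernel. */
        return rump_sys_open(path, oflag, mode);
}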

>>Anyway, if we are talking in the context of a summer of code project, I
>>don't think testing both approaches for "2" is an unspeakable amount of
>>work, especially given that, barring the code compilation issues,
>>running on top of POSIX should already just work.  The more open-ended
>>question is what to do with "1".
>>
>>   - antti
>
> --joel
> _______________________________________________
> devel mailing list
> devel at rtems.org
> http://lists.rtems.org/mailman/listinfo/devel


