[GSoC] Paravirtualization Layer - test on L4Re
philipp.eppelt at mailbox.tu-dresden.de
Mon Sep 23 13:22:28 UTC 2013
Yes, it looks like it. But I think for each architecture we can share
most parts of the BSP and separate out the hypervisor specifics.
I don't know much about virtualization on sparc/ppc/arm, so I can't say
anything about these.
On 09/23/2013 03:16 PM, Gedare Bloom wrote:
> Sounds good. Would it be a BSP for each hypervisor for each target CPU
> type the hypervisor runs on?
> On Mon, Sep 23, 2013 at 9:10 AM, Philipp Eppelt
> <philipp.eppelt at mailbox.tu-dresden.de> wrote:
>> in the last days I reused my work on L4RTEMS to do a quick and dirty
>> test of the new virtualization layer.
>> The implementation, which isn't working yet, showed that the
>> i386/virtualpok BSP is a very good starting point, but the vCPU
>> interface of L4Re brings its own dependencies, which must be added to
>> include/ and to Makefile.am.
>> I also had to extend the virtualizationlayerbsp.h file with these
>> includes and a structure shared between L4Re and RTEMS. This struct
>> accommodates a vCPU capability, a console capability, and a pointer to
>> the vCPU state. They are filled in at start-up by L4Re and can then be
>> used by RTEMS.
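>> To make this concrete, here is a minimal sketch of what such a shared
>> start-up struct could look like (the struct and field names are made up
>> for illustration; l4_cap_idx_t and l4_vcpu_state_t are the usual L4Re
>> capability and vCPU state types):
>>
>>   /* Hypothetical sketch, not the actual code: filled in by the L4Re
>>    * side at start-up and read by RTEMS afterwards. */
>>   #include <l4/sys/types.h>  /* l4_cap_idx_t */
>>   #include <l4/sys/vcpu.h>   /* l4_vcpu_state_t */
>>
>>   struct shared_startup_info {
>>     l4_cap_idx_t     vcpu_cap;     /* capability of the vCPU */
>>     l4_cap_idx_t     console_cap;  /* capability for console output */
>>     l4_vcpu_state_t *vcpu_state;   /* pointer to the vCPU state */
>>   };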
>> There are two takeaways:
>> First, we might end up with a separate BSP for each hypervisor.
>> Second, as far as I can see now, they only differ in aspects of the
>> layer, not in the drivers using the layer.
>> The code isn't on GitHub yet, as I am short on time and have to sort
>> things out first. The obstacle at the moment is to create a library in
>> L4Re that includes all L4Re dependencies and has only a few undefined
>> references, which can then be resolved by RTEMS.
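>> To illustrate what "only a few undefined references" means (the symbol
>> name below is invented, not the real one): the library's L4Re-side code
>> only declares the RTEMS entry points it needs, so they stay undefined
>> when the library is built and are resolved at the final link with RTEMS:
>>
>>   /* Illustrative only; the name is made up.  The library ships with
>>    * this reference undefined, RTEMS provides the definition later. */
>>   extern void rtems_virt_interrupt_entry(unsigned vector);
>>
>>   static void forward_irq_to_guest(unsigned vector)
>>   {
>>     rtems_virt_interrupt_entry(vector); /* resolved at the final link */
>>   }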
>> On 09/20/2013 09:22 AM, Philipp Eppelt wrote:
>>> What did I do in my project?
>>> I designed and implemented a virtualization layer, which should ease the
>>> virtualization of RTEMS across different hypervisors.
>>> To test the layer, and because of its ARINC 653 compliance, POK was
>>> chosen as the proof-of-concept host OS.
>>> The project was a partial success. The layer is designed and
>>> implemented, a BSP is using it, and it is at least partially working.
>>> I didn't succeed in changing POK so that it forwards interrupts to
>>> partitions reliably. But this is a POK-related issue, which I think
>>> won't be an issue on a host OS providing a vCPU abstraction. Also,
>>> implementing this for other architectures might be easier than for x86.
>>> The console prints "Hello World", and under some circumstances the
>>> base_sp sample printed output, too, but the latter is not reliable.
>>> I have documented my efforts, including implementation issues, GDB
>>> traps, and where I left off, on the wiki page.
>>> Explanations on how to port the i386/virtualpok BSP to other
>>> hypervisors and how to port this approach to other architectures can
>>> also be found there. The latter is pretty abstract, as I don't know much
>>> about the other architectures (ARM, PPC, SPARC).
>>> I provide two patches:
>>> * Split of the i386 CPU between score/cpu and libcpu. The interrupt
>>> handling was moved to libcpu and two new CPU variants were introduced
>>> there: native and virtual. The native one works like before, but the
>>> virtual one calls the virtualization layer instead of executing cli,
>>> sti, or hlt (a rough sketch of this delegation follows after the list).
>>> The list of affected functions is documented in the wiki.
>>> BUT: This patch won't be merged, as includes in cpukit from libcpu
>>> aren't allowed (even though it works). However, until the discussion
>>> about a new configuration option is finished and the option is
>>> implemented, there is no other way to achieve this.
>>> * A new i386 BSP is introduced: virtualpok. It is the corresponding BSP
>>> to the virtual i386 CPU model and brings along the virtualization layer
>>> as two header files in its include/ directory. A console driver, clock
>>> driver, and IRQ management are implemented and, as far as possible,
>>> tested on POK.
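>>> To illustrate the idea behind the virtual CPU variant mentioned above
>>> (all names here are illustrative, not the ones used in the patches):
>>> where the native variant executes the instruction directly, the virtual
>>> variant calls a hook of the virtualization layer, which the BSP then
>>> maps onto whatever the hypervisor provides:
>>>
>>>   /* Sketch only, with invented names.  RTEMS_CPU_VIRTUAL stands for
>>>    * whatever mechanism finally selects the virtual CPU variant. */
>>>   #ifdef RTEMS_CPU_VIRTUAL
>>>     /* hooks provided by the virtualization layer headers in the BSP */
>>>     extern void virt_irq_disable(void);
>>>     extern void virt_irq_enable(void);
>>>     extern void virt_idle(void);
>>>     #define cpu_irq_disable()  virt_irq_disable()
>>>     #define cpu_irq_enable()   virt_irq_enable()
>>>     #define cpu_idle()         virt_idle()
>>>   #else
>>>     #define cpu_irq_disable()  __asm__ volatile ("cli")
>>>     #define cpu_irq_enable()   __asm__ volatile ("sti")
>>>     #define cpu_idle()         __asm__ volatile ("hlt")
>>>   #endif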
>>> If you have questions on the work, I'd be happy to answer them.