The Hypervisor for RTEMS

张文杰 157724595 at
Sun Aug 14 15:55:11 UTC 2011

Dear all:
I will now report the progress of the GSoC project Hypervisor for RTEMS. Over the past weeks there was a small problem with scheduling inside the partition OS, and it has now been solved. The issue was caused by mismatched load and store operations on the variable dispatch_necessary. When a new partition OS is switched in to run, the pal_rtems_isr_dispatcher function stores true into dispatch_necessary, and the clock ISR dispatch function loads its value to decide whether a thread should be scheduled to run. dispatch_necessary has type bool and occupies one byte, but by mistake I used the "st" instruction (a 32-bit word store) to store it while using "ldub" (a byte load) to load it, and the SPARC architecture is big-endian. The byte load therefore reads the most significant byte of the stored word, which is always zero, and that is why the newly running partition OS's threads were not scheduled after the first partition OS scheduling cycle.
After I solved this problem the sample test runs successfully. The partition OS and the hypervisor are both based on the latest RTEMS version. The partition OS is an independent BSP for the SPARC architecture; you just configure this BSP and run make. The hypervisor is based on the LEON3 BSP with some modifications, and it also includes some modifications to the RTEMS score source code guarded by #ifdef. The build environment is independent of the RTEMS source tree: it uses links to the hypervisor RTEMS and partition RTEMS source paths to build the final executable file.
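Assuming the partition BSP is configured like any other out-of-tree RTEMS BSP build, the steps might look roughly like this (the BSP name "pos", the target triple, and the paths are illustrative placeholders, not the actual names from my tree):

```shell
# Hedged sketch of a standard out-of-tree RTEMS BSP build.
# "pos" and the paths below are placeholders.
mkdir b-partition && cd b-partition
../rtems/configure --target=sparc-rtems \
                   --enable-rtemsbsp=pos \
                   --prefix=/opt/rtems
make
make install
```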
The partition OS patch for RTEMS is just a BSP, so it should be clean to merge into RTEMS. The hypervisor patch, however, modifies score code, so merging it into RTEMS may be a problem. What is your idea about this, Joel? And Tobias, about your questions — which code will be submitted, whether there are any license limits on the source code, and which parts are limited — what is your opinion?

Best Regards