MSc (by research) involving RTEMS | University of York

Gedare Bloom gedare at rtems.org
Tue Oct 28 05:04:43 UTC 2014


On Mon, Oct 27, 2014 at 4:43 PM, Hesham Moustafa
<heshamelmatary at gmail.com> wrote:
>
>
> On Mon, Oct 27, 2014 at 2:30 PM, Joel Sherrill <joel.sherrill at oarcorp.com>
> wrote:
>>
>>
>>
>> On October 27, 2014 3:04:26 AM PDT, Hesham Moustafa
>> <heshamelmatary at gmail.com> wrote:
>> >Hi all,
>> >
>> >
>> >This year, I am studying MSc (by research) degree at the University of
>> >York. My thesis proposal title is "REAL-TIME OPERATING SYSTEMS FOR
>> >LARGE SCALE MANY-CORE NETWORK-ON-CHIP ARCHITECTURES." Part of this
>> >research will include some work with RTEMS.
>> >
>>
>> Awesome!
>>
>> There is an asymmetric MP patch filed with a PR in bugzilla. I don't
>> remember the architecture but it was a proprietary CPU with virtually
>> nothing useful publicly available. The company was Kalray and here is a link
>> to one of their patches.
>>
>> http://lists.rtems.org/pipermail/bugs/2011-October/003376.html
>>
>> We merged some small stuff from them but they did not work in an open
>> collaborative way with the community. They have a commercial product with a
>> mesh of multi core instances. I suspect the key folks have published papers.
>> Assuming that is vaguely related.
>>
>> How many is many? 16, 64,  etc.. Can you point to some example
>> architectures?
>>
> Given that I first met my supervisor today, I am currently at the stage of
> collecting information, reading literature, and getting a sense of research
> work that involves RTOSes, so that we can choose which RTOS fits which
> platform best, and of course that platform would have many cores. I asked
> my supervisor this exact question, and he gave me two initial options (that
> may change after reading): 1) Parallella board variants [1] that can
> contain from 16 up to 64 cores (simple RISC processors) and 2) the Bluestar
> project that a University of York research group is currently working on; it
> has 64 cores, and I may have the possibility to work on some NoC/SoC HW
> design as part of my research.

I have a 16-core Parallella board. I would be willing to help you in
porting, at least in terms of guiding if not coding, although at this
point, you're probably getting good at porting RTEMS! I believe the
"Epiphany" core used by Parallella is an open core, also.

That said, if there is enough progress on Bluestar that it won't cause
you to have to wait to try using it, then you might consider leaning
on the local experts there. The advantage of Parallella is that it is
already shipping boards commercially, so the tool/community support is
likely to be much better.

>>
>>
>> >That said, I'd appreciate any materials (papers, publications,
>> >references, tutorials, etc) that might be of help regarding that topic
>> >and may or may not relate to RTEMS. I think Sebastian has contributed a
>> >lot to this area recently.
>>
>> Gedare and Sebastian are more in tune with the paper side of things.
>>
>> My suggestion is to divide it into areas. Off the top of my head, there are
>> issues with algorithm scaling as the number of cores increases, locking
>> issues, more need for finer-grained locks and lockless data structures, and,
>> likely on the application side, means to debug and formally make statements
>> about WCET, end-to-end scheduling correctness in a way that is known to be
>> analyzable for schedulability and correctness, and cache effects.
>>
> Great, I'd like to get an idea of what challenges current RTOSes like
> RTEMS face regarding many-core systems, and I think you are the best
> people to tell me about that.

As far as RTEMS goes, your best bet is to read
http://www.rtems.org/wiki/index.php/SMP to get a sense of the state of
things. SMP is not yet working with fully functional real-time
support, but there aren't any open-source hard real-time RTOSes that I
know of that do support SMP currently. At least, not to a satisfactory
degree. (I admit, I haven't looked hard.) Sebastian may be able to
give a better sense of the state of things, as he has been much closer
to it for longer. I don't know what the commercial RTOSes support.

Some of the biggest challenges are these:
1) No one has really given a good solution for migratory scheduling.
Global scheduling fares poorly. Most solutions explore some kind of
partitioned/clustered scheme to reduce overhead, which trades off
optimality for efficiency. In this area, you should read up on Bjorn
Brandenburg's work.
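To make the partitioned idea concrete, here is a minimal C sketch (not
RTEMS code; all names are invented) of utilization-based worst-fit
assignment of tasks to cores. A real partitioned scheduler would pair
this kind of bin-packing with a proper per-core schedulability test:

```c
#define NUM_CORES 4

/* Hypothetical sketch: worst-fit partitioning of tasks onto cores by
 * CPU utilization. Each task goes to the currently least-loaded core;
 * the run fails if any core would exceed full utilization. This is
 * where partitioning trades optimality for efficiency: a task set
 * that a global scheduler could run may fail to pack. */
static int worst_fit(const double util[], int num_tasks, int assignment[])
{
    double load[NUM_CORES] = { 0.0 };

    for (int t = 0; t < num_tasks; ++t) {
        /* Pick the core with the lowest current load. */
        int best = 0;
        for (int c = 1; c < NUM_CORES; ++c)
            if (load[c] < load[best])
                best = c;
        if (load[best] + util[t] > 1.0)
            return -1;  /* partitioning failed: task does not fit */
        assignment[t] = best;
        load[best] += util[t];
    }
    return 0;
}
```

For example, five tasks of utilization 0.9 each cannot be packed onto
four cores and the sketch returns -1, even though the work could in
principle be split if tasks were allowed to migrate.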

2) Interrupt handling is a killer in SMP systems. More interrupts
happen now, to deal with cross-core communication, and if there is no
interrupt affinity then the hardware will route each interrupt
wherever it pleases, which is really bad for a real-time system.
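For comparison, on Linux an interrupt can be pinned to specific cores
by writing a hex CPU mask to /proc/irq/<n>/smp_affinity; an RTOS would
expose an equivalent knob at the BSP or API level. A sketch (the
helper name is made up):

```c
#include <stdio.h>

/* Illustrative only, not an RTEMS API: pin IRQ `irq` to the cores in
 * `cpu_mask` by writing a hex mask to /proc/irq/<irq>/smp_affinity.
 * Requires root; returns 0 on success, -1 on error. */
static int set_irq_affinity(int irq, unsigned int cpu_mask)
{
    char path[64];
    snprintf(path, sizeof(path), "/proc/irq/%d/smp_affinity", irq);

    FILE *f = fopen(path, "w");
    if (f == NULL)
        return -1;
    int rc = (fprintf(f, "%x\n", cpu_mask) > 0) ? 0 : -1;
    fclose(f);
    return rc;
}
```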

A lot of the rest is just the challenge of retrofitting a legacy
code base with the necessary synchronization and parallelism-aware
primitives to allow SMP to work, and work well. The "Giant" lock that
RTEMS uses is not conducive to real-time analysis, and until it goes
away, the system is in no way that matters "SMP hard real-time".
However, that is the direction development is going.
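To illustrate the analysis problem (a generic pthreads sketch, not
RTEMS internals; names are invented): under a single Giant lock, an
operation's worst-case blocking includes every critical section in the
system, whereas per-object locks bound blocking to contention on that
one object:

```c
#include <pthread.h>

/* Hypothetical illustration, not RTEMS code. */
struct obj {
    pthread_mutex_t lock;   /* fine-grained: protects this object only */
    long value;
};

static pthread_mutex_t giant = PTHREAD_MUTEX_INITIALIZER;

/* Giant-lock style: every caller, on every object, serializes here,
 * so worst-case blocking grows with the whole system. */
static void obj_update_giant(struct obj *o, long delta)
{
    pthread_mutex_lock(&giant);
    o->value += delta;
    pthread_mutex_unlock(&giant);
}

/* Fine-grained style: callers block only on users of the same object,
 * which keeps worst-case blocking bounded and analyzable. */
static void obj_update_fine(struct obj *o, long delta)
{
    pthread_mutex_lock(&o->lock);
    o->value += delta;
    pthread_mutex_unlock(&o->lock);
}
```

Both variants compute the same result; the difference is purely in who
can block whom, which is what a schedulability analysis has to bound.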

>>
>>
>> RTEMS will face the same challenges Solaris, Linux etc faced as the number
>> of cores grew beyond four. So there may be useful experience papers from
>> those.
>>
>> >You may also want to suggest building some simple multi-processor
>> >and/or many-core systems that RTEMS currently supports, and how to
>> >simulate them.
>>
>> Tile may be interesting but you need to look. There should be a 12-16 core
>> QorIQ PPC by now, and QEMU might be up to that.
>> Maestro (I think) was a MIPS-based SoC out of AFRL which might be
>> interesting.
>>
>> But how many is "many" and me being at my desk will help. I just core
>> dumped in an airport at 5am.
>>
>> What do you want the code to do?
>>
> Initially, we want the code to fit many-core platforms (16-64 cores). So,
> I am trying to see how far research goes in this area, and the status
> of the SMP libraries/APIs that current RTOSes support, and start from
> there.
>
> [1] http://www.parallella.org/board/
>>
>> >
>> >Thanks,
>> >
>> >Hesham
>>
>



More information about the devel mailing list