RTEMS in AMP mode

Sebastian Huber sebastian.huber at embedded-brains.de
Fri Oct 12 10:05:16 UTC 2018


On 12/10/2018 11:57, Thawra Kadeed wrote:
> On 2018-10-12 07:49, Sebastian Huber wrote:
>> On 11/10/2018 17:32, Thawra Kadeed wrote:
>>> Thanks a lot Sebastian for your answer.
>>>
>>> Actually, we are planning to run a multi-core system using a 
>>> network-on-chip and network interfaces as the physical interconnect 
>>> between cores instead of a bus interconnect. I understood that the 
>>> MPCI layer provided by the OS may stay the same even if we change 
>>> the RTEMS version. Is that right?
>>
>> Probably yes, but I guess the MPCI support is not in widespread use
>> currently. It will definitely be easier if you use the same RTEMS
>> version on all nodes. One option would be to use RTEMS SMP instead.
>>
>>>
>>> On the other hand, I was looking for a very old version of RTEMS and 
>>> I found on this site "https://git.rtems.org/rtems/tag/?h=3.5.1" a 
>>> version from 1996 which is really lightweight. However, I have not 
>>> seen any support for the ARM Cortex-A architecture.
>>
>> The ARM Cortex-A didn't exist in 1996.
>>
>>>
>>> The issue is actually that the current version of RTEMS is hard to 
>>> predict and analyze because it supports many features to provide 
>>> high performance, like POSIX and others. In the real-time part, we 
>>> need a very lightweight RTEMS, excluding all other features, where 
>>> the RTEMS kernel does not exceed 1 MB.
>>>
>>> So what do you recommend to us? Could we use the current version, 
>>> excluding all the other features of the RTEMS kernel, so that we 
>>> minimize the kernel as much as possible?
>>
>> RTEMS is a library and it was designed so that only parts needed by
>> the application end up in the executable. This improved over time, so
>> RTEMS 5 is probably the best RTEMS in terms of modularity. RTEMS 5
>> supports transitive priority inheritance. This makes it a bit more
>> complex compared to earlier RTEMS versions. It would be possible to
>> bring back the simplified priority inheritance support if there is a
>> real need for this.
>>
>> If your limit is 1MiB, this all doesn't matter. This is more than
>> enough for the operating system core. If you need the new network
>> stack (libbsd), then you may need more (about 2MiB).
>
> So the current release I have seen on the website is RTEMS 4.11.3 (so 
> you mean RTEMS 5 will come later), and the RTEMS core (the library) is 
> in the following path:
> "rtems-4.11.3/c/src/librtems++"
>
> and it is very lightweight. So we can now just compile the current 
> release excluding other features like POSIX and rtemsbsp, and then we 
> will get a compiled kernel which includes only the basic stuff and 
> keeps the RTEMS core below 1 MB. Is that right?
>
> If so, we do not really need different versions. We have two parts in 
> our platform: best effort (BE) and safety critical (SC). In the BE 
> part, we can use the compiled 4.11.3 with all features. In the SC 
> part, we can use the compiled 4.11.3 excluding all additional 
> features. That way we use the same basic kernel for both parts, and I 
> think we will not see any integration difficulties.
>
> The idea of using AMP is to give the BE part the opportunity to use 
> the high-performance features of RTEMS, and the SC part a very simple 
> RTEMS that is easy to analyze.
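
Regarding the MPCI question quoted above, here is a rough, untested
sketch of how the classic multiprocessing (AMP) support is configured
in the application. The MPCI table "my_noc_mpci" is only a placeholder
for a driver you would have to write for your network-on-chip
interconnect, and the node number is just an example value which
differs per node:

#include <rtems.h>

/*
 * Placeholder: MPCI table provided by a custom driver for the
 * network-on-chip interconnect.  The driver has to implement the
 * initialization, get/return packet and send/receive packet entries.
 */
extern rtems_mpci_table my_noc_mpci;

rtems_task Init(rtems_task_argument arg)
{
  (void) arg;
  /* application code for this node */
  rtems_task_delete(RTEMS_SELF);
}

#define CONFIGURE_MP_APPLICATION
#define CONFIGURE_MP_NODE_NUMBER            1  /* e.g. 2 on the other node */
#define CONFIGURE_MP_MAXIMUM_NODES          2
#define CONFIGURE_MP_MAXIMUM_GLOBAL_OBJECTS 32
#define CONFIGURE_MP_MAXIMUM_PROXIES        32
#define CONFIGURE_MP_MPCI_TABLE_POINTER     &my_noc_mpci

#define CONFIGURE_APPLICATION_NEEDS_CLOCK_DRIVER
#define CONFIGURE_MAXIMUM_TASKS             4
#define CONFIGURE_RTEMS_INIT_TASKS_TABLE

#define CONFIGURE_INIT
#include <rtems/confdefs.h>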

The RTEMS 4.11 release is nearly two years old, and in terms of modularity 
and size there are some improvements in the latest development version, 
which will eventually become the RTEMS 5 release. So, I would use the 
RTEMS master branch:

git://git.rtems.org/rtems.git

https://docs.rtems.org/branches/master/user/index.html

In RTEMS there is no user/kernel space separation. The application and 
the operating system are linked into a single executable.
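
As a small illustration, a complete RTEMS application can look like the
untested sketch below. Everything, including the operating system, is
linked into the one resulting ELF image, and only the managers and
drivers selected by the configuration options end up in it:

#include <rtems.h>
#include <rtems/bspIo.h>

rtems_task Init(rtems_task_argument arg)
{
  (void) arg;
  /* the application runs in the same address space as the kernel */
  printk("hello from the single executable\n");
  rtems_task_delete(RTEMS_SELF);
}

#define CONFIGURE_APPLICATION_NEEDS_CLOCK_DRIVER
#define CONFIGURE_APPLICATION_NEEDS_CONSOLE_DRIVER
#define CONFIGURE_MAXIMUM_TASKS 1
#define CONFIGURE_RTEMS_INIT_TASKS_TABLE

#define CONFIGURE_INIT
#include <rtems/confdefs.h>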

-- 
Sebastian Huber, embedded brains GmbH

Address : Dornierstr. 4, D-82178 Puchheim, Germany
Phone   : +49 89 189 47 41-16
Fax     : +49 89 189 47 41-09
E-Mail  : sebastian.huber at embedded-brains.de
PGP     : Public key available on request.

This message is not a business communication within the meaning of the EHUG.



