Re: Running two RTEMS instances on two RISC-V harts

Schweikhardt, Jens (TSPCE6-TL5) Jens.Schweikhardt at tesat.de
Mon Oct 24 12:48:39 UTC 2022


Hi Jan,

well, in the beginning there was a system with just one telescope. Now a second shall be added.
Easy as 1-2-3: duplicate the software and run it on two cores. No SW changes needed.
Developers like me are expensive :-) That’s how corporate thinks.

Driving a telescope involves not just a single driver, it’s a symphony of 10 tasks
juggling tele-commands and telemetry data. Turning a singleton application into a
multi-telescope app would probably require extensive changes. There is a lot of data
that needs to be part of that context struct you suggest. Running two “unmodified”
RTEMS instances would make that easy: the context structs are the data segments.
(Which is why running two RTEMS instances from the /same/ address does NOT work:
they overwrite each other’s data segment objects and trash each other’s stacks, etc. Can
this be avoided with linker script magic?).

About the disclaimers: I think I can get rid of the first one; the second is auto-appended
by the mailhost because corporate lawyers fear getting their pants sued off. GRRR.

Thanks for taking the time to help me find a way to proceed.

Jens


From: Jan.Sommer at dlr.de <Jan.Sommer at dlr.de>
Sent: Monday, 24 October 2022 14:13
To: Schweikhardt, Jens (TSPCE6-TL5) <Jens.Schweikhardt at tesat.de>; users at rtems.org
Subject: RE: Running two RTEMS instances on two RISC-V harts


External e-mail: Please check the sender and content of the mail before opening attachments or links!
Hi Jens,

Is there a real need to have the telescope driver twice in memory?
Would it be enough to implement the driver as “object oriented” in C?
Then you would just need to create a context struct during initialization based on the hartid with the correct interrupt assignment and base address of the memory mapped registers.

At least that is a common pattern in the driver implementations of RTEMS.

Best regards,

    Jan

PS: It is probably a good idea to remove the disclaimer in your email footer before posting to a public mailinglist.

From: users <users-bounces at rtems.org> On Behalf Of Schweikhardt, Jens (TSPCE6-TL5)
Sent: Monday, 24 October 2022 13:53
To: 'users at rtems.org' <users at rtems.org>
Subject: Running two RTEMS instances on two RISC-V harts

hello, world\n

we’re currently in the design phase for a rocketchip RISC-V project with two harts.
Think a common HW platform where each hart drives a separate telescope unit.
The C code for each telescope is basically identical with the exception of memory mapped registers and interrupts.
Our idea is to compile the C code twice and link with different linkerscripts, separating the images in RAM
via different MEMORY { RAM: ORIGIN = 0x######## } declarations. A bootloader loads the images
to different addresses and branches depending on mhartid, starting two separate RTEMS applications, each
of which runs 10 tasks. There’s no communication between the RTEMS instances.
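[Editor’s note: the two-image layout described above could be sketched with two linker-script variants such as the following. The addresses and lengths are invented placeholders, not the project’s real memory map.]

```
/* linkcmds-hart0: image for hart 0 */
MEMORY {
  RAM : ORIGIN = 0x80000000, LENGTH = 16M
}

/* linkcmds-hart1: image for hart 1 */
MEMORY {
  RAM : ORIGIN = 0x81000000, LENGTH = 16M
}
```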

Are we attempting something crazy (too complex) and we should instead look into <your suggestion here>?

If not, is there a simple way to compile an app twice for different RAM: ORIGIN values?
I tried passing my own linker script via -Wl,-T,Linkerscript with a different MEMORY block, but this draws a warning such as
warning: redeclaration of memory region `RAM'
and the value from the BSP's linkcmds file is used. It looks like the last declaration wins, and linkcmds is
always read after any linker scripts specified by my -Wl command line option.
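[Editor’s note: one possible workaround, offered as an assumption based on the way the RTEMS gcc specs locate linkcmds through the startfile search path and not verified against this BSP: copy the BSP’s linkcmds into a per-image directory, edit ORIGIN in each copy, and point gcc at that directory with -B so the modified copy is found first. Paths, BSP name, and tool prefix below are invented.]

```
# Hedged sketch; $BSP_LIB stands for the directory holding the BSP's linkcmds.
mkdir -p ld-hart0 ld-hart1
cp "$BSP_LIB/linkcmds" ld-hart0/linkcmds   # then edit ORIGIN for hart 0's RAM
cp "$BSP_LIB/linkcmds" ld-hart1/linkcmds   # then edit ORIGIN for hart 1's RAM
riscv-rtems6-gcc -qrtems -B./ld-hart0 app.c -o app-hart0.exe
riscv-rtems6-gcc -qrtems -B./ld-hart1 app.c -o app-hart1.exe
```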

________________________________

Tesat-Spacecom GmbH & Co. KG
Sitz: Backnang; Registergericht: Amtsgericht Stuttgart HRA 270977
Persoenlich haftender Gesellschafter: Tesat-Spacecom Geschaeftsfuehrungs GmbH;
Sitz: Backnang; Registergericht: Amtsgericht Stuttgart HRB 271658;
Geschaeftsfuehrung: Thomas Reinartz, Kerstin Basche, Ralph Schmid
