RTEMS in a Sparc LEON3 multiprocessor system.

Gustav Kynnefjäll gustav.kynnefjall at gmail.com
Thu May 7 08:13:04 UTC 2009


Hello.
I'm doing a thesis on LEON3 as a multiprocessor and would really like to
use RTEMS.
However, RTEMS for LEON3 has not yet been modified for multiprocessor use.
________________________________________________________
/rtems-4.10/c/src/lib/libbsp/sparc/leon3/readme
"This BSP supports *single* LEON3-processor system"
________________________________________________________
Is this a correct interpretation of the readme file?

If so, how much work is it to change RTEMS so it will function with LEON3 in
a multiprocessor system?
Has anyone done this before, and if so, how was it done?
I would be really grateful for a tip on where to read more.

Best regards
Gustav


2009/5/6 <rtems-users-request at rtems.org>

> Send rtems-users mailing list submissions to
>        rtems-users at rtems.org
>
> To subscribe or unsubscribe via the World Wide Web, visit
>        http://www.rtems.org/mailman/listinfo/rtems-users
> or, via email, send a message with subject or body 'help' to
>        rtems-users-request at rtems.org
>
> You can reach the person managing the list at
>        rtems-users-owner at rtems.org
>
> When replying, please edit your Subject line so it is more specific
> than "Re: Contents of rtems-users digest..."
>
>
> Today's Topics:
>
>   1. Re: rename issue (Chris Johns)
>   2. Re: rename issue (Ralf Corsepius)
>   3. Re: NFS against UDP (Till Straumann)
>   4. Re: NFS against UDP (Leon Pollak)
>   5. Code sharing in shared memory systems? (Axel von Engel)
>   6. Re: Code sharing in shared memory systems? (Joel Sherrill)
>   7. warning question (Joel Sherrill)
>
>
> ----------------------------------------------------------------------
>
> Message: 1
> Date: Wed, 06 May 2009 10:10:57 +1000
> From: Chris Johns <chrisj at rtems.org>
> Subject: Re: rename issue
> To: Joel Sherrill <joel.sherrill at oarcorp.com>
> Cc: RTEMS Users <rtems-users at rtems.org>,        Ralf Corsepius
>        <ralf.corsepius at rtems.org>
> Message-ID: <4A00D591.8020708 at rtems.org>
> Content-Type: text/plain; charset=ISO-8859-1; format=flowed
>
> Joel Sherrill wrote:
> > Ralf Corsepius wrote:
> >> Joel Sherrill wrote:
> >>
> >>> Looks like we are heading for a new spin of the 4.10 tools
> >>> soon.
> >>>
> >>>
> >> Yep, .... I am going to address these issues sequentially.
> >>
> >>>   + drop - DMISSING_SYSCALL_NAMES from configure.host
> >>>
> >>>
> >> Having cross-checked your proposal, I leaned to agree with your proposal
> >> and am about to launch a toolchain spin.
> >>
> >> Please test this toolchain! Though this patch is a one-liner, this step
> >> is quite intrusive, and is not unlikely to have (so far) unconsidered
> >> side-effects.
> >>
> >>
> > Yeah!  This one worried me.  It could easily turn up a LOT of
> > stuff.
>
> What should we be looking for ?
>
> >>>   + inttypes.h warning
> >>>
> >>>
> >> WIP on my part, but no patch available so far.
> >>
> > OK.  Chris and I were on a warning hunt and these showed up
> > in cpukit.  They will disappear again when you get the patch.
> >
>
> Which warnings will this fix ?
>
> Regards
> Chris
>
>
> ------------------------------
>
> Message: 2
> Date: Wed, 06 May 2009 02:36:18 +0200
> From: Ralf Corsepius <ralf.corsepius at rtems.org>
> Subject: Re: rename issue
> To: Chris Johns <chrisj at rtems.org>
> Cc: RTEMS Users <rtems-users at rtems.org>
> Message-ID: <4A00DB82.6010301 at rtems.org>
> Content-Type: text/plain; charset=UTF-8; format=flowed
>
> Chris Johns wrote:
> > Joel Sherrill wrote:
> >
> >> Ralf Corsepius wrote:
> >>
> >>> Joel Sherrill wrote:
> >>>
> >>>
> >>>> Looks like we are heading for a new spin of the 4.10 tools
> >>>> soon.
> >>>>
> >>>>
> >>>>
> >>> Yep, .... I am going to address these issues sequentially.
> >>>
> >>>
> >>>>   + drop - DMISSING_SYSCALL_NAMES from configure.host
> >>>>
> >>>>
> >>>>
> >>> Having cross-checked your proposal, I leaned to agree with your
> proposal
> >>> and am about to launch a toolchain spin.
> >>>
> >>> Please test this toolchain! Though this patch is a one-liner, this step
> >>> is quite intrusive, and is not unlikely to have (so far) unconsidered
> >>> side-effects.
> >>>
> >>>
> >>>
> >> Yeah!  This one worried me.  It could easily turn up a LOT of
> >> stuff.
> >>
> >
> > What should we be looking for ?
> >
> Symbol clashes/conflicts related to "_"-prefixed function symbols and
> bogus/redundant <function>_r vs. <function> calls.
>
> >
> >>>>   + inttypes.h warning
> >>>>
> >>>>
> >>>>
> >>> WIP on my part, but no patch available so far.
> >>>
> >>>
> >> OK.  Chris and I were on a warning hunt and these showed up
> >> in cpukit.  They will disappear again when you get the patch.
> >>
> >>
> >
> > Which warnings will this fix ?
> >
>
> Joel's test case had been this:
>
> >> ../../../../../../rtems/c/src/../../cpukit/libmisc/shell/main_mwdump.c:65:
> >> warning: format '%08llX' expects type 'long long unsigned int', but
> >> argument 2 has type 'uintptr_t'
>
>
> It shows with newlib-1.17.0 based toolchains (rtems-4.10) and can be
> reproduced with this test case:
>
>
> #include <inttypes.h>
> #include <stdio.h>
>
> extern uintptr_t x;
>
> void doit( void ) {
>  printf("0x%08" PRIXPTR " ", x);
> }
>
> The origin of this warning is PRIXPTR being defined to "llX" on some
> targets, while uintptr_t is a (32bit) "long".
>
> The warning was introduced by an upstream newlib patch which was
> supposed to provide better C99/POSIX compliance. However, with this
> patch applied, gcc now complains about the format string.
>
> So the question, to which I don't know the answer, is: whose fault is
> this warning? Is GCC not sufficiently C99/POSIX compliant, or is newlib
> buggy?
>
> newlib-1.16.0 (rtems-4.9 toolchains) don't have the patch in question
> applied, and are not subject to this issue.
>
> Ralf
>
>
> ------------------------------
>
> Message: 3
> Date: Wed, 06 May 2009 02:30:27 -0700
> From: Till Straumann <strauman at slac.stanford.edu>
> Subject: Re: NFS against UDP
> To: Leon Pollak <leonp at plris.com>
> Cc: rtems-users at rtems.org
> Message-ID: <4A0158B3.9090103 at slac.stanford.edu>
> Content-Type: text/plain; charset=ISO-8859-1; format=flowed
>
> Leon Pollak wrote:
> > Hello, all.
> >
> >
> > I need a help from the list members, gurus in networking, as I am too
> > weak in this.
> >
> >
> > I have today an application under RTEMS speaking with another
> > application via UDP. Most of the time data exchange via our simple UDP
> > protocol is transferring big amounts of data, the rest are some commands.
> > The amounts of data are so big (for the system) that most of the time
> > CPU is processing the BSD stack operations.
> >
> >
> > The customer now insists on introducing the NFS. I object, explaining
> > my objection mostly by significant increasing of required CPU power
> > and traffic to perform the NFS protocol.
> >
> >
> > The question is: am I right? Does anyone have similar experience? How
> > can I estimate the load growth, even VERY roughly?
> A few hints (NFS read operation only; write is very similar)
>
> - rtems NFS implementation is fully synchronous (no read-ahead,
>  i.e., request block X, wait until block X received). Probably your
>  raw UDP communication is similar but if it does implement
>  read-ahead / caching then expect significant slowdown when
>  using NFS.
>
> - For every block read, NFS requires XDR-decoding a protocol
>  header. This overhead is relatively small, especially if a large
>  block size is used (you want 8k / max. allowed by UDP). A large
>  block size also speeds up synchronous operation.
>
> - The payload data is copied verbatim (w/o any byte-swapping)
>  to the user's buffer. If the 'xdr_mbuf' stream is used then
>  the cost is similar to using normal UDP (copying from mbufs
>  into user buffer) -- otherwise data are copied twice (from
>  mbufs to NFS memory buffer, then again from there to user
>  buffer).
>  The behavior is compile-time configurable (nfsclient/src/rpcio.c)
>  -- by default a second copy operation is avoided.
>
> -> If you use a large block size, read relatively large files
>    (>> NFS protocol header) and your current implementation
>    does not implement read-ahead/caching then I'd be surprised
>    if NFS is a lot slower. Nevertheless, you want to run a few
>    tests...
>
> HTH
> -- Till
> >
> >
> > Many thanks for any comment/help.
> >
> >
> > --
> > Leon
> >
> >
> > ------------------------------------------------------------------------
> >
> > _______________________________________________
> > rtems-users mailing list
> > rtems-users at rtems.org
> > http://www.rtems.org/mailman/listinfo/rtems-users
> >
>
>
>
> ------------------------------
>
> Message: 4
> Date: Wed, 6 May 2009 14:46:21 +0300
> From: Leon Pollak <leonp at plris.com>
> Subject: Re: NFS against UDP
> To: "Undisclosed.Recipients": ;
> Cc: rtems-users at rtems.org
> Message-ID: <200905061446.22078.leonp at plris.com>
> Content-Type: text/plain; charset="iso-8859-1"
>
> On Wednesday May 6 2009, Till Straumann wrote:
> > A few hints (NFS read operation only; write is very similar)
> >
> > - rtems NFS implementation is fully synchronous (no read-ahead,
> >   i.e., request block X, wait until block X received). Probably your
> >   raw UDP communication is similar but if it does implement
> >   read-ahead / caching then expect significant slowdown when
> >   using NFS.
> Thanks for the tip, I shall take this into account.
> But most of the time we do writing, not reading...
>
>
> > - For every block read, NFS requires XDR-decoding a protocol
> >   header. This overhead is relatively small, especially if a large
> >   block size is used (you want 8k / max. allowed by UDP). A large
> >   block size also speeds up synchronous operation.
> Yes, but as I understood from recent e-mails on the list, jumbo
> packets are not supported "out-of-the-box" today. Or am I wrong?
>
>
> > - The payload data is copied verbatim (w/o any byte-swapping
> >   to the user's buffer). If the 'xdr_mbuf' stream is used then
> >   the cost is similar to using normal UDP (copying from mbufs
> >   into user buffer) -- otherwise data are copied twice (from
> >   mbufs to NFS memory buffer, then again from there to user
> >   buffer.
> >   The behavior is compile-time configurable (nfsclient/src/rpcio.c)
> >   -- by default a second copy operation is avoided.
> Great. Where can I read about all this?
>
>
> > -> If you use a large block size, read relatively large files
> >     (>> NFS protocol header) and your current implementation
> >     does not implement read-ahead/caching then I'd be surprised
> >     if NFS is a lot slower. Nevertheless, you want to run a few
> >     tests...
> Can you say something about writes? I suppose it is much the same, yes?
>
> The last question - I need to be the NFS server (not client). Can you
> recommend something for this?
>
> Till, thank you very much for your time.
> --
> Leon
>
>
> ------------------------------
>
> Message: 5
> Date: Wed, 6 May 2009 15:11:34 +0200
> From: Axel von Engel <a.vonengel at gmail.com>
> Subject: Code sharing in shared memory systems?
> To: rtems-users at rtems.org
> Message-ID:
>        <44e01eee0905060611t7d9a9110n6238ef12f9619776 at mail.gmail.com>
> Content-Type: text/plain; charset=ISO-8859-1
>
> Hello everyone!
>
> I am developing a simulated multiprocessor system using MIPS cores,
> starting from a single processor system with RTEMS set up. Since the
> goal "simply" is to have a multiprocessor system running RTEMS with
> multiprocessing support (i.e. almost no design constraints), I
> currently look for options to achieve this.
>
> Shared memory seems to be the simplest way to communicate. In my
> current, very basic system the address space is shared, too. This
> leads to my question: Is it possible to run multiple RTEMS instances
> with the code being shared?
>
> My first impression is that it would be possible if I could find means
> to use different configuration tables depending on the current node
> ID. Any ideas on how this could be done? What else has to be
> separated?
>
> Regards
> Axel von Engel
>
>
> ------------------------------
>
> Message: 6
> Date: Wed, 6 May 2009 08:50:21 -0500
> From: Joel Sherrill <joel.sherrill at OARcorp.com>
> Subject: Re: Code sharing in shared memory systems?
> To: Axel von Engel <a.vonengel at gmail.com>
> Cc: "rtems-users at rtems.org" <rtems-users at rtems.org>
> Message-ID: <4A01959D.70606 at oarcorp.com>
> Content-Type: text/plain; charset="ISO-8859-1"; format=flowed
>
> Axel von Engel wrote:
> > Hello everyone!
> >
> > I am developing a simulated multiprocessor system using MIPS cores,
> > starting from a single processor system with RTEMS set up. Since the
> > goal "simply" is to have a multiprocessor system running RTEMS with
> > multiprocessing support (i.e. almost no design constraints), I
> > currently look for options to achieve this.
> >
> > Shared memory seems to be the simplest way to communicate. In my
> > current, very basic system the address space is shared, too. This
> > leads to my question: Is it possible to run multiple RTEMS instances
> > with the code being shared?
> >
> > My first impression is that it would be possible if I could find means
> > to use different configuration tables depending on the current node
> > ID. Any ideas on how this could be done? What else has to be
> > separated?
> >
> >
> The current mptests build the same source twice but they
> only differ in the node number.  That is used to determine
> which node does what.  That helps with the "same code image"
> issue but doesn't solve the problem of ensuring that
> each node has different addresses for .data, .bss, C program
> heap and workspace.
>
> For running them on psim, each CPU is an independent instance
> of a PowerPC with its own memory space.  A block of unix shared
> memory is mapped into each CPU's address space for shared
> communication.
>
> There are 4-core LEON3 systems, and if I remember right they use a
> slightly different setup: link each node's code/data to a different
> area of memory and reserve part of the memory for communication
> between nodes.
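[Editor's note: a build-configuration sketch of the layout Joel describes. The addresses, file names, and memory split below are illustrative assumptions, not taken from the LEON3 BSP; only `-Wl,-Ttext` is a real linker option.]

```shell
# Hypothetical example: link each node's image at its own base address
# so .text/.data/.bss do not overlap between nodes.
sparc-rtems-gcc init_node1.c -o node1.exe -Wl,-Ttext,0x40000000
sparc-rtems-gcc init_node2.c -o node2.exe -Wl,-Ttext,0x40200000
# A further region, e.g. starting at 0x40400000, would be left out of
# both images and used as the shared-memory communication area.
```

The same source can be built twice this way, with only the node number (and link address) differing per image.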
> > Regards
> > Axel von Engel
> >
>
>
> --
> Joel Sherrill, Ph.D.             Director of Research & Development
> joel.sherrill at OARcorp.com        On-Line Applications Research
> Ask me about RTEMS: a free RTOS  Huntsville AL 35805
>   Support Available             (256) 722-9985
>
>
>
>
> ------------------------------
>
> Message: 7
> Date: Wed, 6 May 2009 10:49:28 -0500
> From: Joel Sherrill <joel.sherrill at OARcorp.com>
> Subject: warning question
> To: "rtems-users at rtems.org" <rtems-users at rtems.org>
> Message-ID: <4A01B188.2070807 at oarcorp.com>
> Content-Type: text/plain; charset="ISO-8859-1"; format=flowed
>
> Hi,
>
> I am looking at warnings and wondered if someone
> could figure this one out.
>
> c/src/libchip/network/i82586.c:1722: warning: suggest parentheses around
> operand of '!' or change '|' to '||' or '!' to '~'
>
> The code is clearly questionable IMO. ;)
>
>  *IE_CMD_CFG_PROMISC(buf)   = !!promiscuous | manchester << 2;
>
> Hopefully someone can help out.
>
> --
> Joel Sherrill, Ph.D.             Director of Research & Development
> joel.sherrill at OARcorp.com        On-Line Applications Research
> Ask me about RTEMS: a free RTOS  Huntsville AL 35805
>   Support Available             (256) 722-9985
>
>
>
>
> ------------------------------
>
>
>
> End of rtems-users Digest, Vol 32, Issue 6
> ******************************************
>