[Bug 1729] SMP Step #2
bugzilla-daemon at rtems.org
bugzilla-daemon at rtems.org
Wed Feb 2 20:19:34 UTC 2011
https://www.rtems.org/bugzilla/show_bug.cgi?id=1729
--- Comment #17 from Gedare <giddyup44 at yahoo.com> 2011-02-02 14:19:33 CST ---
(In reply to comment #16)
> (In reply to comment #15)
> > Some more detailed comments.
> >
> > * c/src/lib/libbsp/shared/smp/smp_stub.c
> > +#include <rtems/bspsmp.h>
> > +#include <stdlib.h>
> > +
> > +#include <rtems/bspsmp.h>
> > double include.
>
> Jennifer will fix.
>
> > * Why move sparc/cpu_irq.S into libbsp/sparc/shared?
>
> Because reading the cpu id is not defined as part of v7/v8 but by the leon3
> implementation.
>
> > * cpukit/score/cpu/sparc/cpu_asm.S: _CPU_Context_switch_to_first_task_smp
> >
> > The comment states it may not be safe to flush windows, but proceeds to issue a
> > window save. Is this considered safe, or should the window save be done after
> > setting up the WIM and before branching to done_flushing?
> >
Please comment on this.
> > * cpukit/score/include/rtems/bspsmp.h
> >
> > why are some functions prefaced with rtems_bsp_smp_xx, others with bsp_smp_xxx
> >
> > typos: will reschedule on this this CPU -- "this" repeated twice. Happens in
> > multiple lines in bspsmp.h
>
> Jennifer will fix.
>
> > void bsp_smp_broadcast_message(): should this function issue IPIs to EVERY cpu?
> > including the originating cpu?
>
> not including the originating (as you noticed) because you already
> know what happened. Jennifer add this to the Doxygen comments.
>
> > What is the difference between bsp_smp_broadcast_message() and
> > bsp_smp_interrupt_cpu()?
>
> I think you missed that there are 4 methods.
>
> + bsp_smp_send_message
> + bsp_smp_broadcast_message
> + bsp_smp_broadcast_interrupt
> + bsp_smp_interrupt_cpu
>
> You can send a message to one or all other CPUs. And interrupt
> one or all other CPUs.
>
> Jennifer.. the "XXX_message" ones are rtems_smp and are part of the Score SMP
> handler. So probably really should be _SMP_Send_message and
> _SMP_Broadcast_message. This means more content for
> score/include/rtems/score/smp.h
>
> The "XXX_interrupt" should be bsp_XXX to indic
>
> > Is it usual to have such an interface defined in the score that relies so
> > heavily on BSP functions?
>
> It is not particularly common but there are other cases.
>
> The precedences I can think of are the nanoseconds since last tick extension
> and MPCI BSP support. The clock tick extension was an indirect call but
> Sebastian made it a weak symbol. I want to get it to being a known name which
> the BSP has to provide. The MPCI stuff could eventually be a known name also.
>
> Known names which the BSP has to provide as easier to explain and document.
> This is why the BSP framework went this way.
>
> > Can we extend the existing uni-processor interfaces
> > to support the notion of cpu id? Are these functions implemented per-BSP, or
> > generically at the SCORE level? If generically, they should probably belong to
> > an SMP handler?
>
> This is how we are testing it now BUT as mentioned earlier, it adds overhead
> both in execution time and space. When the code is completely in, we can
> discuss it again but I think the SMP overhead will always be too much to want
> to treat a small single CPU system as a single core SMP system.
>
> There does need to be a Score SMP handler for other things though.
>
> > * cpukit/score/include/rtems/score/percpu.h
> >
> > + /** This element is used to lock this structure */
> > + SMP_lock_control Lock;
> > Why capital L in Lock? This is inconsistent with structure field names.
>
> Usually caps in structures are reserved for substructures. So this
> should be lower.
>
> > Do we know that uint32_t is the best choice of data size for IPI payloads
> > (state, message) and lock_control? Or should these be BSP-dependent types?
>
> No but we don't know it isn't. :)
>
> This could be overridden on a per target basis but not a per BSP basis if the
> need arises.
>
> > +/**
> > + * @brief Initialize SMP Handler
> > + *
> > + * This method initialize the SMP Handler.
> > + */
> > +void _SMP_Handler_initialize(void);
> > +
> >
> > Are there multiple handlers defined in this header file? Should there be a
> > separate SMP handler that instantiates all of these SMP-related files?
> >
> > It seems a lot of the #ifdef's can be eliminated by defining the uni-processor
> > RTEMS as an SMP system with 1 CPU having id=0. Then the same arrays can be used
> > and identical initializing and updating code. This simplify a lot of other
> > files, such as confdefs.h, threadcreateidle.c, and probably others.
>
> Yes but see above for overhead in time and space.
>
> Worth evaluating when this all works and we have hard numbers on time
> and space. Committing to that now is too early.
>
OK, good point.
> > * cpukit/score/include/rtems/score/smplock.h
> >
> > Should it be SMP_Lock_control, or SMP_lock_Control, or SMP_lock_control? And
> > previously mentioned the volatile keyword.
>
>
> SMP_lock_Control
>
> Class is SMP_lock. Element is Control.
>
> > * cpukit/score/src/smp.c
> > + /*
> > + * This is definitely a hack until we have SMP scheduling. Since there
> > + * is only one executing and heir right now, we have to fake this out.
> > + */
> > Can you identify what will go here when there is a proper scheduler?
>
> Not near as much as is hacked in now. LOL
>
> Disable context switching
> _CPU_Context_switch_to_first_task_smp( &heir->Registers );
>
>
> > + /*
> > + * HACK: Should not have to enable interrupts in real system here.
> > + * It should happen as part of switching to the first task.
> > + */
> > Will this be moved somewhere else then?
>
> Switching to the first task on each of the cores will enable interrupts.
> So we are just hacking now to prove we can get the cores to do something.
>
> > rtems_smp_process_interrupt():
> > + while(1)
> > + ;
> > + /* does not continue past here */
> > Is there a better way to do this, like a BSP-defined "halt"? This will make
> > sense for example on platforms with the ability to shutdown cores for reducing
> > power.
>
> Yes. That is certainly an option. Perhaps a bsp_shutdown_core() or something
> similar.
>
> > + printk( "switch needed\n" );
> > Debug statement?
>
> Yes. pretty much everything you are going through is a rough approximation of
> the initialization but since we don't have an smp scheduler, we don't really
> have threads allocated as heir to each of the cores.
>
> > + if ( cpu == dest_cpu )
> > + continue;
> > I guess this answers the question about broadcasting to the source. This should
> > be clearly documented.
>
> Agreed.
>
> > + bsp_smp_broadcast_interrupt();
> > Might be better to notify while sending the message? Acquiring locks can be
> > quite expensive.
>
> But we have to send the message to all CPUs before we tell them it is
> there.
>
> Are you suggesting we ignore a broadcast interrupt capability
> and send an individual interrupt while sending each message?
>
No, I just wasn't clear on why the message and interrupt were sent separately.
The ability to broadcast the interrupt but individually send the messages does
make sense.
> > * cpukit/score/src/smplock.c
> > This implements a spin lock. There are many types of locks (e.g. R/W locks,
> > RCU, and so on). The naming should be more precise and forward compatible in
> > case additional locks are implemented.
>
> SMP Spin Lock? Or do you have something in mind?
>
If SMP_lock is the class, then I could see something like SMP_lock_Spinlock_Xxx
(e.g. SMP_lock_Spinlock_Obtain), or SMP_lock_Spinlock_xxx
(SMP_lock_Spinlock_obtain). Maybe it doesn't matter at this point, but I think
clarity is better here. It will make code easier to read in case more locks
than spinlock are introduced.
> > * cpukit/score/src/thread.c:
> > -/*PAGE
> > - *
> > ...
> > -#if defined(RTEMS_MULTIPROCESSING)
> > - _Thread_MP_Handler_initialization( maximum_proxies );
> > -#endif
> > + #if defined(RTEMS_MULTIPROCESSING)
> > + _Thread_MP_Handler_initialization( maximum_proxies );
> > + #endif
> > These syntactical fixes are distracting. Just a nit for the future.
>
> Jennifer .. I can comment some of these separately. I thought most of them
> were out.
--
Configure bugmail: https://www.rtems.org/bugzilla/userprefs.cgi?tab=email
------- You are receiving this mail because: -------
You are watching all bug changes.
More information about the bugs
mailing list