<p dir="ltr">
On Jul 4, 2014 10:09 AM, Daniel Cederman <cederman@gaisler.com> wrote:<br>
><br>
> > This limits the API to the default cpu_set_t. Other routines like<br>
> > pthread_setaffinity_np() don't have this limitation.<br>
><br>
> I looked at pthread_setaffinity_np() and got a bit confused. I see that <br>
> it takes both a pointer to a cpu_set_t and the size of the cpu set. It <br>
> forwards it to _Scheduler_Set_affinity which requires <br>
> __RTEMS_HAVE_SYS_CPUSET_H__ to be defined. If this is defined then <br>
> _CPU_set_Handler_initialization checks that the number of cpus is less <br>
> than CPU_SETSIZE. In newlib this is defined as 32 and is used to size <br>
> the cpu_set_t struct. So why is the size needed for <br>
> pthread_setaffinity_np() if the number of cpus cannot exceed a hardcoded <br>
> constant? I recall that there was a discussion on the list about cpu <br>
> sets, but I can't find anything by searching.</p>
<p dir="ltr">This keeps the API compatible with Linux and *BSD even though we don't support more than 32 processors yet. So we are planning ahead.</p>
<p dir="ltr">> Daniel Cederman<br>
> Software Engineer<br>
> Aeroflex Gaisler AB<br>
> Aeroflex Microelectronic Solutions – HiRel<br>
> Kungsgatan 12<br>
> SE-411 19 Gothenburg, Sweden<br>
> Phone: +46 31 7758665<br>
> cederman@gaisler.com<br>
> www.Aeroflex.com/Gaisler<br>
><br>
> On 2014-07-04 08:38, Sebastian Huber wrote:<br>
> > On 2014-07-03 11:37, Daniel Cederman wrote:<br>
> >> Adds functions that allow the user to specify which cores should<br>
> >> perform the cache operation. SMP messages are sent to all the specified<br>
> >> cores and the caller waits until all cores have acknowledged that they<br>
> >> have flushed their cache. Implementation is shown using both function<br>
> >> pointers and function ids together with a switch statement. One needs<br>
> >> to be dropped before committing. Any preference?<br>
> >> ---<br>
> >> c/src/lib/libcpu/shared/src/cache_manager.c | 166<br>
> >> +++++++++++++++++++++++++++<br>
> >> cpukit/rtems/include/rtems/rtems/cache.h | 84 ++++++++++++++<br>
> >> cpukit/score/include/rtems/score/smpimpl.h | 13 +++<br>
> >> 3 files changed, 263 insertions(+)<br>
> >><br>
> >> diff --git a/c/src/lib/libcpu/shared/src/cache_manager.c<br>
> >> b/c/src/lib/libcpu/shared/src/cache_manager.c<br>
> >> index 420a013..91e9f72 100644<br>
> >> --- a/c/src/lib/libcpu/shared/src/cache_manager.c<br>
> >> +++ b/c/src/lib/libcpu/shared/src/cache_manager.c<br>
> >> @@ -37,6 +37,172 @@<br>
> >><br>
> >> #include <rtems.h><br>
> >> #include "cache_.h"<br>
> >> +#include <rtems/score/smplock.h><br>
> >> +#include <rtems/score/smpimpl.h><br>
> >> +<br>
> >> +#if defined( RTEMS_SMP )<br>
> >> +<br>
> >> +typedef void (*cache_manager_func_t)(const void * d_addr, size_t<br>
> >> n_bytes);<br>
> ><br>
> > The *_t namespace is reserved by POSIX. The names don't follow the<br>
> > score naming conventions.<br>
> ><br>
> > http://www.rtems.org/wiki/index.php/Naming_rules<br>
> ><br>
> >> +<br>
> >> +typedef enum {<br>
> >> + FLUSH_MULTIPLE_DATA_LINES,<br>
> >> + INVALIDATE_MULTIPLE_DATA_LINES,<br>
> >> + INVALIDATE_MULTIPLE_INSTRUCTION_LINES,<br>
> >> + FLUSH_ENTIRE_DATA,<br>
> >> + INVALIDATE_ENTIRE_INSTRUCTION,<br>
> >> + INVALIDATE_ENTIRE_DATA<br>
> >> +} cache_manager_func_id_t;<br>
> >> +<br>
> >> +typedef struct {<br>
> >> + SMP_lock_Control lock;<br>
> >> + cache_manager_func_t func;<br>
> >> + cache_manager_func_id_t func_id;<br>
> >> + Atomic_Uint count;<br>
> >> + const void *addr;<br>
> >> + size_t size;<br>
> >> +} Cache_Manager_SMP;<br>
> >> +<br>
> >> +static Cache_Manager_SMP _CM_SMP = {<br>
> >> + .lock = SMP_LOCK_INITIALIZER("CacheMgr"),<br>
> >> + .count = CPU_ATOMIC_INITIALIZER_UINT(0)<br>
> >> +};<br>
> >> +<br>
> >> +void<br>
> >> +_SMP_Cache_manager_ipi_handler(void)<br>
> ><br>
> > I would rather name this _SMP_Cache_manager_message_handler().<br>
> ><br>
> >> +{<br>
> >> +#ifdef USE_FUNCPTR<br>
> >> + _CM_SMP.func( _CM_SMP.addr, _CM_SMP.size );<br>
> >> +#else<br>
> >> + switch( _CM_SMP.func_id ) {<br>
> >> + case FLUSH_MULTIPLE_DATA_LINES:<br>
> >> + rtems_cache_flush_multiple_data_lines(<br>
> >> + _CM_SMP.addr, _CM_SMP.size );<br>
> >> + break;<br>
> >> + case INVALIDATE_MULTIPLE_DATA_LINES:<br>
> >> + rtems_cache_invalidate_multiple_data_lines(<br>
> >> + _CM_SMP.addr, _CM_SMP.size );<br>
> >> + break;<br>
> >> + case INVALIDATE_MULTIPLE_INSTRUCTION_LINES:<br>
> >> + rtems_cache_invalidate_multiple_instruction_lines(<br>
> >> + _CM_SMP.addr, _CM_SMP.size );<br>
> >> + break;<br>
> >> + case FLUSH_ENTIRE_DATA:<br>
> >> + rtems_cache_flush_entire_data();<br>
> >> + break;<br>
> >> + case INVALIDATE_ENTIRE_INSTRUCTION:<br>
> >> + rtems_cache_invalidate_entire_instruction();<br>
> >> + break;<br>
> >> + case INVALIDATE_ENTIRE_DATA:<br>
> >> + rtems_cache_invalidate_entire_data();<br>
> >> + break;<br>
> >> + default:<br>
> >> + _Assert( 0 );<br>
> >> + break;<br>
> >> + }<br>
> >> +#endif<br>
> >> +<br>
> >> + _Atomic_Fetch_sub_uint( &_CM_SMP.count, 1, ATOMIC_ORDER_RELEASE );<br>
> >> +}<br>
> >> +<br>
> >> +static void<br>
> >> +_cache_manager_send_ipi_msg( const cpu_set_t *set,<br>
> >> cache_manager_func_t func,<br>
> >> + cache_manager_func_id_t func_id, const void * addr, size_t size )<br>
> >> +{<br>
> >> + uint32_t i;<br>
> >> + uint32_t set_size = 0;<br>
> >> + SMP_lock_Context lock_context;<br>
> >> +<br>
> >> + _Assert( _System_state_Is_up( _System_state_Get() ) );<br>
> >> +<br>
> >> + for( i=0; i < _SMP_Get_processor_count(); ++i ) {<br>
> >> + set_size += CPU_ISSET(i, set);<br>
> >> + }<br>
> >> +<br>
> >> + _SMP_lock_Acquire( &_CM_SMP.lock, &lock_context );<br>
> ><br>
> > With this implementation cache routines must not be called from<br>
> > interrupt context. This should be mentioned in the documentation.<br>
> ><br>
> > It is extremely difficult to implement it in a way so that it can be<br>
> > used from interrupt context.<br>
> ><br>
> >> +<br>
> >> + _CM_SMP.func = func;<br>
> >> + _CM_SMP.func_id = func_id;<br>
> >> + _CM_SMP.addr = addr;<br>
> >> + _CM_SMP.size = size;<br>
> >> + _Atomic_Store_uint( &_CM_SMP.count, set_size, ATOMIC_ORDER_RELEASE );<br>
> >> + _Atomic_Fence( ATOMIC_ORDER_RELEASE );<br>
> >> +<br>
> >> + _SMP_Send_message_cpu_set( set, SMP_MESSAGE_CACHE_MANAGER );<br>
> >> +<br>
> >> + while(_Atomic_Load_uint( &_CM_SMP.count, ATOMIC_ORDER_ACQUIRE ) !=<br>
> >> 0 );<br>
> >> +<br>
> >> + _SMP_lock_Release( &_CM_SMP.lock, &lock_context );<br>
> >> +}<br>
> >> +<br>
> >> +void<br>
> >> +rtems_cache_invalidate_entire_instruction_cpu_set ( const cpu_set_t<br>
> >> *set )<br>
> ><br>
> > This limits the API to the default cpu_set_t. Other routines like<br>
> > pthread_setaffinity_np() don't have this limitation.<br>
> ><br>
> > On other places we don't use "cpu" instead we use "processor" it should<br>
> > be consistent in the high level RTEMS APIs.<br>
> ><br>
> > [...]<br>
> ><br>
> _______________________________________________<br>
> devel mailing list<br>
> devel@rtems.org<br>
> http://lists.rtems.org/mailman/listinfo/devel<br>
</p>