[PATCH] score: Optimize scheduler priority updates
Joel Sherrill
joel at rtems.org
Fri Nov 17 14:48:26 UTC 2017
What architecture were those sizes on? Do the size changes hold on
other architectures?
Good spot if it is consistent.
--joel
On Thu, Nov 16, 2017 at 11:57 PM, Sebastian Huber <
sebastian.huber at embedded-brains.de> wrote:
> Thread priority changes may append or prepend the thread to its priority
> group on the scheduler ready queue. Previously, a separate priority
> value and a prepend-it flag in the scheduler node were used to propagate
> a priority change to the scheduler.
>
> Now, use an append-it bit in the priority control and reduce the plain
> priority value to 63 bits.
>
> This change leads to a significant code size reduction (about 25%) of
> the SMP schedulers. The negligible increase of the standard priority
> scheduler is due to some additional shift operations
> (SCHEDULER_PRIORITY_MAP() and SCHEDULER_PRIORITY_UNMAP()).
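For illustration only (not part of the patch): a minimal standalone sketch of
the encoding described above, using the macro names this patch introduces in
schedulerimpl.h and schedulernodeimpl.h:

  #include <assert.h>
  #include <stdint.h>

  typedef uint64_t Priority_Control;

  /* The plain priority occupies the upper 63 bits; the freed
   * least-significant bit is the append/prepend indicator. */
  #define SCHEDULER_PRIORITY_MAP( priority )   ( ( priority ) << 1 )
  #define SCHEDULER_PRIORITY_UNMAP( priority ) ( ( priority ) >> 1 )
  #define SCHEDULER_PRIORITY_APPEND_FLAG 1
  #define SCHEDULER_PRIORITY_APPEND( priority ) \
    ( ( priority ) | SCHEDULER_PRIORITY_APPEND_FLAG )
  #define SCHEDULER_PRIORITY_PURIFY( priority ) \
    ( ( priority ) & ~( (Priority_Control) SCHEDULER_PRIORITY_APPEND_FLAG ) )

  int main( void )
  {
    Priority_Control plain = 42;
    Priority_Control internal = SCHEDULER_PRIORITY_MAP( plain );
    Priority_Control insert = SCHEDULER_PRIORITY_APPEND( internal );

    /* Setting or clearing the append bit never disturbs the plain value. */
    assert( SCHEDULER_PRIORITY_UNMAP( insert ) == plain );
    assert( SCHEDULER_PRIORITY_PURIFY( insert ) == internal );
    return 0;
  }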
>
> Before:
>
> text filename
> 136 sparc-rtems5/c/erc32/cpukit/score/src/libscore_a-schedulersimpleblock.o
> 464 sparc-rtems5/c/erc32/cpukit/score/src/libscore_a-schedulersimplechangepriority.o
> 24 sparc-rtems5/c/erc32/cpukit/score/src/libscore_a-schedulersimple.o
> 108 sparc-rtems5/c/erc32/cpukit/score/src/libscore_a-schedulersimpleschedule.o
> 292 sparc-rtems5/c/erc32/cpukit/score/src/libscore_a-schedulersimpleunblock.o
> 264 sparc-rtems5/c/erc32/cpukit/score/src/libscore_a-schedulersimpleyield.o
>
> text filename
> 280 sparc-rtems5/c/erc32/cpukit/score/src/libscore_a-schedulerpriorityblock.o
> 488 sparc-rtems5/c/erc32/cpukit/score/src/libscore_a-schedulerprioritychangepriority.o
> 200 sparc-rtems5/c/erc32/cpukit/score/src/libscore_a-schedulerpriority.o
> 164 sparc-rtems5/c/erc32/cpukit/score/src/libscore_a-schedulerpriorityschedule.o
> 328 sparc-rtems5/c/erc32/cpukit/score/src/libscore_a-schedulerpriorityunblock.o
> 200 sparc-rtems5/c/erc32/cpukit/score/src/libscore_a-schedulerpriorityyield.o
>
> text filename
> 24112 arm-rtems5/c/imx7/cpukit/score/src/libscore_a-scheduleredfsmp.o
>
> text filename
> 37204 sparc-rtems5/c/gr740/cpukit/score/src/libscore_a-scheduleredfsmp.o
>
> text filename
> 42236 powerpc-rtems5/c/qoriq_e6500_32/cpukit/score/src/libscore_a-scheduleredfsmp.o
>
> After:
>
> text filename
> 136 sparc-rtems5/c/erc32/cpukit/score/src/libscore_a-schedulersimpleblock.o
> 272 sparc-rtems5/c/erc32/cpukit/score/src/libscore_a-schedulersimplechangepriority.o
> 24 sparc-rtems5/c/erc32/cpukit/score/src/libscore_a-schedulersimple.o
> 108 sparc-rtems5/c/erc32/cpukit/score/src/libscore_a-schedulersimpleschedule.o
> 292 sparc-rtems5/c/erc32/cpukit/score/src/libscore_a-schedulersimpleunblock.o
> 264 sparc-rtems5/c/erc32/cpukit/score/src/libscore_a-schedulersimpleyield.o
>
> text filename
> 280 sparc-rtems5/c/erc32/cpukit/score/src/libscore_a-schedulerpriorityblock.o
> 488 sparc-rtems5/c/erc32/cpukit/score/src/libscore_a-schedulerprioritychangepriority.o
> 208 sparc-rtems5/c/erc32/cpukit/score/src/libscore_a-schedulerpriority.o
> 164 sparc-rtems5/c/erc32/cpukit/score/src/libscore_a-schedulerpriorityschedule.o
> 332 sparc-rtems5/c/erc32/cpukit/score/src/libscore_a-schedulerpriorityunblock.o
> 200 sparc-rtems5/c/erc32/cpukit/score/src/libscore_a-schedulerpriorityyield.o
>
> text filename
> 18860 arm-rtems5/c/imx7/cpukit/score/src/libscore_a-scheduleredfsmp.o
>
> text filename
> 28520 sparc-rtems5/c/gr740/cpukit/score/src/libscore_a-scheduleredfsmp.o
>
> text filename
> 32664 powerpc-rtems5/c/qoriq_e6500_32/cpukit/score/src/libscore_a-scheduleredfsmp.o
> ---
> cpukit/score/include/rtems/score/priority.h | 16 +-
> cpukit/score/include/rtems/score/scheduler.h | 21 ++-
> .../score/include/rtems/score/scheduleredfimpl.h | 22 +--
> cpukit/score/include/rtems/score/schedulerimpl.h | 30 ++++
> cpukit/score/include/rtems/score/schedulernode.h | 13 +-
> .../score/include/rtems/score/schedulernodeimpl.h | 18 +-
> .../include/rtems/score/schedulerpriorityimpl.h | 6 +-
> .../include/rtems/score/schedulerprioritysmpimpl.h | 75 ++++----
> .../include/rtems/score/schedulersimpleimpl.h | 52 +-----
> .../score/include/rtems/score/schedulersmpimpl.h | 191
> +++++++++------------
> cpukit/score/src/schedulercbsunblock.c | 5 +-
> cpukit/score/src/schedulerdefaultmappriority.c | 14 +-
> cpukit/score/src/scheduleredfchangepriority.c | 13 +-
> cpukit/score/src/scheduleredfreleasejob.c | 4 +-
> cpukit/score/src/scheduleredfsmp.c | 184
> +++++---------------
> cpukit/score/src/scheduleredfunblock.c | 9 +-
> cpukit/score/src/schedulerpriority.c | 3 +-
> cpukit/score/src/schedulerpriorityaffinitysmp.c | 150 +++++-----------
> cpukit/score/src/schedulerprioritychangepriority.c | 19 +-
> cpukit/score/src/schedulerprioritysmp.c | 116 +++----------
> cpukit/score/src/schedulerpriorityunblock.c | 11 +-
> cpukit/score/src/schedulersimplechangepriority.c | 12 +-
> cpukit/score/src/schedulersimplesmp.c | 161
> +++++------------
> cpukit/score/src/schedulersimpleunblock.c | 6 +-
> cpukit/score/src/schedulersimpleyield.c | 10 +-
> cpukit/score/src/schedulerstrongapa.c | 176
> ++++++-------------
> .../smptests/smpscheduler01/smpscheduler01.doc | 2 +-
> testsuites/sptests/spintrcritical23/init.c | 15 +-
> 28 files changed, 485 insertions(+), 869 deletions(-)
>
> diff --git a/cpukit/score/include/rtems/score/priority.h
> b/cpukit/score/include/rtems/score/priority.h
> index 9cc6338288..7a8ddba763 100644
> --- a/cpukit/score/include/rtems/score/priority.h
> +++ b/cpukit/score/include/rtems/score/priority.h
> @@ -8,7 +8,7 @@
> * COPYRIGHT (c) 1989-2011.
> * On-Line Applications Research Corporation (OAR).
> *
> - * Copyright (c) 2016 embedded brains GmbH.
> + * Copyright (c) 2016, 2017 embedded brains GmbH.
> *
> * The license and distribution terms for this file may be
> * found in the file LICENSE in this distribution or at
> @@ -45,11 +45,23 @@ extern "C" {
> */
>
> /**
> - * @brief A plain thread priority value.
> + * @brief The thread priority control.
> *
> * Lower values represent higher priorities. So, a priority value of zero
> * represents the highest priority thread. This value is reserved for internal
> * threads and the priority ceiling protocol.
> + *
> + * The format of the thread priority control depends on the context. A thread
> + * priority control may contain a user visible priority for API import/export.
> + * It may also contain a scheduler internal priority value. Values are
> + * translated via the scheduler map/unmap priority operations. The format of
> + * scheduler internal values depends on the particular scheduler implementation.
> + * It may for example encode a deadline in case of the EDF scheduler.
> + *
> + * The thread priority control value contained in the scheduler node
> + * (Scheduler_Node::Priority::value) uses the least-significant bit to indicate
> + * if the thread should be appended or prepended to its priority group, see
> + * SCHEDULER_PRIORITY_APPEND().
> */
> typedef uint64_t Priority_Control;
>
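As a concrete example of the "scheduler internal" format mentioned in the
comment above: the EDF mapping later in this patch (scheduleredfreleasejob.c)
keeps its existing most-significant marker bit and applies the new shift
underneath it. Roughly, building on the sketch near the top of this mail and
assuming SCHEDULER_EDF_PRIO_MSB is the existing bit-63 marker constant:

  #define SCHEDULER_EDF_PRIO_MSB ( UINT64_C( 1 ) << 63 ) /* assumed value */

  Priority_Control edf_map( Priority_Control priority )
  {
    /* bit 63: EDF marker, bits 62..1: priority or deadline, bit 0: append */
    return SCHEDULER_EDF_PRIO_MSB | SCHEDULER_PRIORITY_MAP( priority );
  }

  Priority_Control edf_unmap( Priority_Control priority )
  {
    return SCHEDULER_PRIORITY_UNMAP( priority & ~SCHEDULER_EDF_PRIO_MSB );
  }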
> diff --git a/cpukit/score/include/rtems/score/scheduler.h
> b/cpukit/score/include/rtems/score/scheduler.h
> index 669f82c48c..a6066c8e4a 100644
> --- a/cpukit/score/include/rtems/score/scheduler.h
> +++ b/cpukit/score/include/rtems/score/scheduler.h
> @@ -334,19 +334,32 @@ extern const Scheduler_Control _Scheduler_Table[];
> #endif
>
> /**
> - * @brief Returns the thread priority.
> + * @brief Returns the scheduler internal thread priority mapped by
> + * SCHEDULER_PRIORITY_MAP().
> *
> * @param[in] scheduler Unused.
> - * @param[in] priority The thread priority.
> + * @param[in] priority The user visible thread priority.
> *
> - * @return priority The thread priority.
> + * @return priority The scheduler internal thread priority.
> */
> Priority_Control _Scheduler_default_Map_priority(
> const Scheduler_Control *scheduler,
> Priority_Control priority
> );
>
> -#define _Scheduler_default_Unmap_priority _Scheduler_default_Map_priority
> +/**
> + * @brief Returns the user visible thread priority unmapped by
> + * SCHEDULER_PRIORITY_UNMAP().
> + *
> + * @param[in] scheduler Unused.
> + * @param[in] priority The scheduler internal thread priority.
> + *
> + * @return priority The user visible thread priority.
> + */
> +Priority_Control _Scheduler_default_Unmap_priority(
> + const Scheduler_Control *scheduler,
> + Priority_Control priority
> +);
>
> #if defined(RTEMS_SMP)
> /**
> diff --git a/cpukit/score/include/rtems/score/scheduleredfimpl.h
> b/cpukit/score/include/rtems/score/scheduleredfimpl.h
> index 94a78fcff5..f6bd7d8384 100644
> --- a/cpukit/score/include/rtems/score/scheduleredfimpl.h
> +++ b/cpukit/score/include/rtems/score/scheduleredfimpl.h
> @@ -79,7 +79,7 @@ RTEMS_INLINE_ROUTINE bool _Scheduler_EDF_Less(
> return prio_left < prio_right;
> }
>
> -RTEMS_INLINE_ROUTINE bool _Scheduler_EDF_Less_or_equal(
> +RTEMS_INLINE_ROUTINE bool _Scheduler_EDF_Priority_less_equal(
> const void *left,
> const RBTree_Node *right
> )
> @@ -101,28 +101,14 @@ RTEMS_INLINE_ROUTINE bool
> _Scheduler_EDF_Less_or_equal(
> RTEMS_INLINE_ROUTINE void _Scheduler_EDF_Enqueue(
> Scheduler_EDF_Context *context,
> Scheduler_EDF_Node *node,
> - Priority_Control priority
> + Priority_Control insert_priority
> )
> {
> _RBTree_Insert_inline(
> &context->Ready,
> &node->Node,
> - &priority,
> - _Scheduler_EDF_Less
> - );
> -}
> -
> -RTEMS_INLINE_ROUTINE void _Scheduler_EDF_Enqueue_first(
> - Scheduler_EDF_Context *context,
> - Scheduler_EDF_Node *node,
> - Priority_Control priority
> -)
> -{
> - _RBTree_Insert_inline(
> - &context->Ready,
> - &node->Node,
> - &priority,
> - _Scheduler_EDF_Less_or_equal
> + &insert_priority,
> + _Scheduler_EDF_Priority_less_equal
> );
> }
>
> diff --git a/cpukit/score/include/rtems/score/schedulerimpl.h
> b/cpukit/score/include/rtems/score/schedulerimpl.h
> index ba04ec9492..10c12242a9 100644
> --- a/cpukit/score/include/rtems/score/schedulerimpl.h
> +++ b/cpukit/score/include/rtems/score/schedulerimpl.h
> @@ -37,6 +37,36 @@ extern "C" {
> /**@{**/
>
> /**
> + * @brief Maps a priority value to support the append indicator.
> + */
> +#define SCHEDULER_PRIORITY_MAP( priority ) ( ( priority ) << 1 )
> +
> +/**
> + * @brief Returns the plain priority value.
> + */
> +#define SCHEDULER_PRIORITY_UNMAP( priority ) ( ( priority ) >> 1 )
> +
> +/**
> + * @brief Clears the priority append indicator bit.
> + */
> +#define SCHEDULER_PRIORITY_PURIFY( priority ) \
> +  ( ( priority ) & ~( (Priority_Control) SCHEDULER_PRIORITY_APPEND_FLAG ) )
> +
> +/**
> + * @brief Returns the priority control with the append indicator bit set.
> + */
> +#define SCHEDULER_PRIORITY_APPEND( priority ) \
> + ( ( priority ) | SCHEDULER_PRIORITY_APPEND_FLAG )
> +
> +/**
> + * @brief Returns true, if the item should be appended to its priority group,
> + *   otherwise returns false and the item should be prepended to its priority
> + *   group.
> + */
> +#define SCHEDULER_PRIORITY_IS_APPEND( priority ) \
> + ( ( ( priority ) & SCHEDULER_PRIORITY_APPEND_FLAG ) != 0 )
> +
> +/**
> * @brief Initializes the scheduler to the policy chosen by the user.
> *
> * This routine initializes the scheduler to the policy chosen by the user
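The practical effect of these helpers on the SMP code further down: the
separate _lifo/_fifo enqueue and insert variants collapse into single
functions, and the caller encodes the placement in the priority argument. A
rough sketch of the calling pattern (context, node and the enqueue function
pointer stand in for the patch's real parameters; this fragment is not
standalone):

  /* Append the node to its priority group (the former _fifo path). */
  insert_priority = SCHEDULER_PRIORITY_APPEND( priority );
  ( *enqueue )( context, node, insert_priority );

  /* Prepend the node to its priority group (the former _lifo path):
   * just leave the append bit cleared. */
  insert_priority = SCHEDULER_PRIORITY_PURIFY( priority );
  ( *enqueue )( context, node, insert_priority );

  /* Consumers branch on the bit, cf. _Scheduler_priority_SMP_Insert_ready. */
  if ( SCHEDULER_PRIORITY_IS_APPEND( insert_priority ) ) {
    /* enqueue the node as the last of its priority group */
  } else {
    /* enqueue the node as the first of its priority group */
  }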
> diff --git a/cpukit/score/include/rtems/score/schedulernode.h
> b/cpukit/score/include/rtems/score/schedulernode.h
> index 1474b0c13c..d62e983853 100644
> --- a/cpukit/score/include/rtems/score/schedulernode.h
> +++ b/cpukit/score/include/rtems/score/schedulernode.h
> @@ -175,6 +175,12 @@ struct Scheduler_Node {
> *
> * The producer of this value is _Thread_Change_priority(). The consumer
> * is the scheduler via the unblock and update priority operations.
> + *
> + * This priority control consists of two parts. One part is the plain
> + * priority value (most-significant 63 bits). The other part is the
> + * least-significant bit which indicates if the thread should be appended
> + * (bit set) or prepended (bit cleared) to its priority group, see
> + * SCHEDULER_PRIORITY_APPEND().
> */
> Priority_Control value;
>
> @@ -184,13 +190,6 @@ struct Scheduler_Node {
> */
> SMP_sequence_lock_Control Lock;
> #endif
> -
> - /**
> - * @brief In case a priority update is necessary and this is true, then
> - * enqueue the thread as the first of its priority group, otherwise enqueue
> - * the thread as the last of its priority group.
> - */
> - bool prepend_it;
> } Priority;
> };
>
> diff --git a/cpukit/score/include/rtems/score/schedulernodeimpl.h
> b/cpukit/score/include/rtems/score/schedulernodeimpl.h
> index 9ac0334979..8997b3f218 100644
> --- a/cpukit/score/include/rtems/score/schedulernodeimpl.h
> +++ b/cpukit/score/include/rtems/score/schedulernodeimpl.h
> @@ -1,5 +1,5 @@
> /*
> - * Copyright (c) 2014, 2016 embedded brains GmbH. All rights reserved.
> + * Copyright (c) 2014, 2017 embedded brains GmbH. All rights reserved.
> *
> * embedded brains GmbH
> * Dornierstr. 4
> @@ -30,6 +30,12 @@ extern "C" {
> #define SCHEDULER_NODE_OF_WAIT_PRIORITY( node ) \
> RTEMS_CONTAINER_OF( node, Scheduler_Node, Wait.Priority )
>
> +/**
> + * @brief Priority append indicator for the priority control used for the
> + * scheduler node priority.
> + */
> +#define SCHEDULER_PRIORITY_APPEND_FLAG 1
> +
> RTEMS_INLINE_ROUTINE void _Scheduler_Node_do_initialize(
> const struct _Scheduler_Control *scheduler,
> Scheduler_Node *node,
> @@ -40,7 +46,6 @@ RTEMS_INLINE_ROUTINE void _Scheduler_Node_do_initialize(
> node->owner = the_thread;
>
> node->Priority.value = priority;
> - node->Priority.prepend_it = false;
>
> #if defined(RTEMS_SMP)
> _Chain_Initialize_node( &node->Thread.Wait_node );
> @@ -69,12 +74,10 @@ RTEMS_INLINE_ROUTINE Thread_Control
> *_Scheduler_Node_get_owner(
> }
>
> RTEMS_INLINE_ROUTINE Priority_Control _Scheduler_Node_get_priority(
> - Scheduler_Node *node,
> - bool *prepend_it_p
> + Scheduler_Node *node
> )
> {
> Priority_Control priority;
> - bool prepend_it;
>
> #if defined(RTEMS_SMP)
> unsigned int seq;
> @@ -84,14 +87,11 @@ RTEMS_INLINE_ROUTINE Priority_Control
> _Scheduler_Node_get_priority(
> #endif
>
> priority = node->Priority.value;
> - prepend_it = node->Priority.prepend_it;
>
> #if defined(RTEMS_SMP)
> } while ( _SMP_sequence_lock_Read_retry( &node->Priority.Lock, seq ) );
> #endif
>
> - *prepend_it_p = prepend_it;
> -
> return priority;
> }
>
> @@ -107,8 +107,8 @@ RTEMS_INLINE_ROUTINE void _Scheduler_Node_set_priority(
> seq = _SMP_sequence_lock_Write_begin( &node->Priority.Lock );
> #endif
>
> + new_priority |= ( prepend_it ? 0 : SCHEDULER_PRIORITY_APPEND_FLAG );
> node->Priority.value = new_priority;
> - node->Priority.prepend_it = prepend_it;
>
> #if defined(RTEMS_SMP)
> _SMP_sequence_lock_Write_end( &node->Priority.Lock, seq );
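Note the polarity in the hunk above: prepend_it == true leaves the
least-significant bit cleared, so the append flag is stored only for a FIFO
update (prepend_it == false). A sketch of the resulting contract, using the
patch's functions on a valid scheduler node (not a standalone program):

  /* Producer side, e.g. _Thread_Change_priority(): */
  _Scheduler_Node_set_priority( node, SCHEDULER_PRIORITY_MAP( 42 ), false );

  /* Consumer side, e.g. the scheduler unblock/update priority operations: */
  Priority_Control insert_priority = _Scheduler_Node_get_priority( node );
  _Assert( SCHEDULER_PRIORITY_IS_APPEND( insert_priority ) );
  _Assert(
    SCHEDULER_PRIORITY_PURIFY( insert_priority ) == SCHEDULER_PRIORITY_MAP( 42 )
  );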
> diff --git a/cpukit/score/include/rtems/score/schedulerpriorityimpl.h
> b/cpukit/score/include/rtems/score/schedulerpriorityimpl.h
> index 68c61c16f3..354065fac4 100644
> --- a/cpukit/score/include/rtems/score/schedulerpriorityimpl.h
> +++ b/cpukit/score/include/rtems/score/schedulerpriorityimpl.h
> @@ -216,18 +216,18 @@ RTEMS_INLINE_ROUTINE void
> _Scheduler_priority_Schedule_body(
> */
> RTEMS_INLINE_ROUTINE void _Scheduler_priority_Ready_queue_update(
> Scheduler_priority_Ready_queue *ready_queue,
> - Priority_Control new_priority,
> + unsigned int new_priority,
> Priority_bit_map_Control *bit_map,
> Chain_Control *ready_queues
> )
> {
> - ready_queue->current_priority = (unsigned int) new_priority;
> + ready_queue->current_priority = new_priority;
> ready_queue->ready_chain = &ready_queues[ new_priority ];
>
> _Priority_bit_map_Initialize_information(
> bit_map,
> &ready_queue->Priority_map,
> - (unsigned int) new_priority
> + new_priority
> );
> }
>
> diff --git a/cpukit/score/include/rtems/score/schedulerprioritysmpimpl.h
> b/cpukit/score/include/rtems/score/schedulerprioritysmpimpl.h
> index 073a7ade06..17d6e552f3 100644
> --- a/cpukit/score/include/rtems/score/schedulerprioritysmpimpl.h
> +++ b/cpukit/score/include/rtems/score/schedulerprioritysmpimpl.h
> @@ -7,7 +7,7 @@
> */
>
> /*
> - * Copyright (c) 2013-2014 embedded brains GmbH. All rights reserved.
> + * Copyright (c) 2013, 2017 embedded brains GmbH. All rights reserved.
> *
> * embedded brains GmbH
> * Dornierstr. 4
> @@ -90,7 +90,7 @@ static inline void _Scheduler_priority_SMP_Move_
> from_ready_to_scheduled(
> {
> Scheduler_priority_SMP_Context *self;
> Scheduler_priority_SMP_Node *node;
> - Priority_Control priority;
> + Priority_Control insert_priority;
>
> self = _Scheduler_priority_SMP_Get_self( context );
> node = _Scheduler_priority_SMP_Node_downcast( ready_to_scheduled );
> @@ -100,47 +100,41 @@ static inline void _Scheduler_priority_SMP_Move_
> from_ready_to_scheduled(
> &node->Ready_queue,
> &self->Bit_map
> );
> - priority = node->Base.priority;
> + insert_priority = _Scheduler_SMP_Node_priority( &node->Base.Base );
> + insert_priority = SCHEDULER_PRIORITY_APPEND( insert_priority );
> _Chain_Insert_ordered_unprotected(
> &self->Base.Scheduled,
> &node->Base.Base.Node.Chain,
> - &priority,
> - _Scheduler_SMP_Insert_priority_fifo_order
> + &insert_priority,
> + _Scheduler_SMP_Priority_less_equal
> );
> }
>
> -static inline void _Scheduler_priority_SMP_Insert_ready_lifo(
> +static inline void _Scheduler_priority_SMP_Insert_ready(
> Scheduler_Context *context,
> - Scheduler_Node *thread
> -)
> -{
> - Scheduler_priority_SMP_Context *self =
> - _Scheduler_priority_SMP_Get_self( context );
> - Scheduler_priority_SMP_Node *node =
> - _Scheduler_priority_SMP_Node_downcast( thread );
> -
> - _Scheduler_priority_Ready_queue_enqueue(
> - &node->Base.Base.Node.Chain,
> - &node->Ready_queue,
> - &self->Bit_map
> - );
> -}
> -
> -static inline void _Scheduler_priority_SMP_Insert_ready_fifo(
> - Scheduler_Context *context,
> - Scheduler_Node *thread
> + Scheduler_Node *node_base,
> + Priority_Control insert_priority
> )
> {
> - Scheduler_priority_SMP_Context *self =
> - _Scheduler_priority_SMP_Get_self( context );
> - Scheduler_priority_SMP_Node *node =
> - _Scheduler_priority_SMP_Node_downcast( thread );
> + Scheduler_priority_SMP_Context *self;
> + Scheduler_priority_SMP_Node *node;
>
> - _Scheduler_priority_Ready_queue_enqueue_first(
> - &node->Base.Base.Node.Chain,
> - &node->Ready_queue,
> - &self->Bit_map
> - );
> + self = _Scheduler_priority_SMP_Get_self( context );
> + node = _Scheduler_priority_SMP_Node_downcast( node_base );
> +
> + if ( SCHEDULER_PRIORITY_IS_APPEND( insert_priority ) ) {
> + _Scheduler_priority_Ready_queue_enqueue(
> + &node->Base.Base.Node.Chain,
> + &node->Ready_queue,
> + &self->Bit_map
> + );
> + } else {
> + _Scheduler_priority_Ready_queue_enqueue_first(
> + &node->Base.Base.Node.Chain,
> + &node->Ready_queue,
> + &self->Bit_map
> + );
> + }
> }
>
> static inline void _Scheduler_priority_SMP_Extract_from_ready(
> @@ -162,19 +156,20 @@ static inline void _Scheduler_priority_SMP_
> Extract_from_ready(
>
> static inline void _Scheduler_priority_SMP_Do_update(
> Scheduler_Context *context,
> - Scheduler_Node *node_to_update,
> - Priority_Control new_priority
> + Scheduler_Node *node_to_update,
> + Priority_Control new_priority
> )
> {
> - Scheduler_priority_SMP_Context *self =
> - _Scheduler_priority_SMP_Get_self( context );
> - Scheduler_priority_SMP_Node *node =
> - _Scheduler_priority_SMP_Node_downcast( node_to_update );
> + Scheduler_priority_SMP_Context *self;
> + Scheduler_priority_SMP_Node *node;
> +
> + self = _Scheduler_priority_SMP_Get_self( context );
> + node = _Scheduler_priority_SMP_Node_downcast( node_to_update );
>
> _Scheduler_SMP_Node_update_priority( &node->Base, new_priority );
> _Scheduler_priority_Ready_queue_update(
> &node->Ready_queue,
> - new_priority,
> + SCHEDULER_PRIORITY_UNMAP( new_priority ),
> &self->Bit_map,
> &self->Ready[ 0 ]
> );
> diff --git a/cpukit/score/include/rtems/score/schedulersimpleimpl.h
> b/cpukit/score/include/rtems/score/schedulersimpleimpl.h
> index ec74cdc586..3891839281 100644
> --- a/cpukit/score/include/rtems/score/schedulersimpleimpl.h
> +++ b/cpukit/score/include/rtems/score/schedulersimpleimpl.h
> @@ -38,65 +38,31 @@ RTEMS_INLINE_ROUTINE Scheduler_simple_Context *
> return (Scheduler_simple_Context *) _Scheduler_Get_context( scheduler );
> }
>
> -RTEMS_INLINE_ROUTINE bool _Scheduler_simple_Insert_priority_lifo_order(
> +RTEMS_INLINE_ROUTINE bool _Scheduler_simple_Priority_less_equal(
> const void *to_insert,
> const Chain_Node *next
> )
> {
> - const Priority_Control *priority_to_insert;
> - const Thread_Control *thread_next;
> + const unsigned int *priority_to_insert;
> + const Thread_Control *thread_next;
>
> - priority_to_insert = (const Priority_Control *) to_insert;
> + priority_to_insert = (const unsigned int *) to_insert;
> thread_next = (const Thread_Control *) next;
>
> return *priority_to_insert <= _Thread_Get_priority( thread_next );
> }
>
> -RTEMS_INLINE_ROUTINE bool _Scheduler_simple_Insert_priority_fifo_order(
> - const void *to_insert,
> - const Chain_Node *next
> -)
> -{
> - const Priority_Control *priority_to_insert;
> - const Thread_Control *thread_next;
> -
> - priority_to_insert = (const Priority_Control *) to_insert;
> - thread_next = (const Thread_Control *) next;
> -
> - return *priority_to_insert < _Thread_Get_priority( thread_next );
> -}
> -
> -RTEMS_INLINE_ROUTINE void _Scheduler_simple_Insert_priority_lifo(
> - Chain_Control *chain,
> - Thread_Control *to_insert
> -)
> -{
> - Priority_Control priority_to_insert;
> -
> - priority_to_insert = _Thread_Get_priority( to_insert );
> -
> - _Chain_Insert_ordered_unprotected(
> - chain,
> - &to_insert->Object.Node,
> - &priority_to_insert,
> - _Scheduler_simple_Insert_priority_lifo_order
> - );
> -}
> -
> -RTEMS_INLINE_ROUTINE void _Scheduler_simple_Insert_priority_fifo(
> +RTEMS_INLINE_ROUTINE void _Scheduler_simple_Insert(
> Chain_Control *chain,
> - Thread_Control *to_insert
> + Thread_Control *to_insert,
> + unsigned int insert_priority
> )
> {
> - Priority_Control priority_to_insert;
> -
> - priority_to_insert = _Thread_Get_priority( to_insert );
> -
> _Chain_Insert_ordered_unprotected(
> chain,
> &to_insert->Object.Node,
> - &priority_to_insert,
> - _Scheduler_simple_Insert_priority_fifo_order
> + &insert_priority,
> + _Scheduler_simple_Priority_less_equal
> );
> }
>
> diff --git a/cpukit/score/include/rtems/score/schedulersmpimpl.h
> b/cpukit/score/include/rtems/score/schedulersmpimpl.h
> index 896b1306ab..e152eb0878 100644
> --- a/cpukit/score/include/rtems/score/schedulersmpimpl.h
> +++ b/cpukit/score/include/rtems/score/schedulersmpimpl.h
> @@ -42,8 +42,8 @@ extern "C" {
> * - @ref SCHEDULER_SMP_NODE_READY.
> *
> * State transitions are triggered via basic operations
> - * - _Scheduler_SMP_Enqueue_ordered(),
> - * - _Scheduler_SMP_Enqueue_scheduled_ordered(), and
> + * - _Scheduler_SMP_Enqueue(),
> + * - _Scheduler_SMP_Enqueue_scheduled(), and
> * - _Scheduler_SMP_Block().
> *
> * @dot
> @@ -296,7 +296,8 @@ typedef void ( *Scheduler_SMP_Extract )(
>
> typedef void ( *Scheduler_SMP_Insert )(
> Scheduler_Context *context,
> - Scheduler_Node *node_to_insert
> + Scheduler_Node *node_to_insert,
> + Priority_Control insert_priority
> );
>
> typedef void ( *Scheduler_SMP_Move )(
> @@ -324,7 +325,8 @@ typedef void ( *Scheduler_SMP_Set_affinity )(
>
> typedef bool ( *Scheduler_SMP_Enqueue )(
> Scheduler_Context *context,
> - Scheduler_Node *node_to_enqueue
> + Scheduler_Node *node_to_enqueue,
> + Priority_Control priority
> );
>
> typedef void ( *Scheduler_SMP_Allocate_processor )(
> @@ -351,7 +353,7 @@ static inline void _Scheduler_SMP_Do_nothing_
> register_idle(
> (void) cpu;
> }
>
> -static inline bool _Scheduler_SMP_Insert_priority_lifo_order(
> +static inline bool _Scheduler_SMP_Priority_less_equal(
> const void *to_insert,
> const Chain_Node *next
> )
> @@ -365,20 +367,6 @@ static inline bool _Scheduler_SMP_Insert_
> priority_lifo_order(
> return *priority_to_insert <= node_next->priority;
> }
>
> -static inline bool _Scheduler_SMP_Insert_priority_fifo_order(
> - const void *to_insert,
> - const Chain_Node *next
> -)
> -{
> - const Priority_Control *priority_to_insert;
> - const Scheduler_SMP_Node *node_next;
> -
> - priority_to_insert = (const Priority_Control *) to_insert;
> - node_next = (const Scheduler_SMP_Node *) next;
> -
> - return *priority_to_insert < node_next->priority;
> -}
> -
> static inline Scheduler_SMP_Context *_Scheduler_SMP_Get_self(
> Scheduler_Context *context
> )
> @@ -637,6 +625,7 @@ static inline Scheduler_Node
> *_Scheduler_SMP_Get_lowest_scheduled(
> static inline void _Scheduler_SMP_Enqueue_to_scheduled(
> Scheduler_Context *context,
> Scheduler_Node *node,
> + Priority_Control priority,
> Scheduler_Node *lowest_scheduled,
> Scheduler_SMP_Insert insert_scheduled,
> Scheduler_SMP_Move move_from_scheduled_to_ready,
> @@ -660,7 +649,7 @@ static inline void _Scheduler_SMP_Enqueue_to_
> scheduled(
> allocate_processor
> );
>
> - ( *insert_scheduled )( context, node );
> + ( *insert_scheduled )( context, node, priority );
> ( *move_from_scheduled_to_ready )( context, lowest_scheduled );
>
> _Scheduler_Release_idle_thread(
> @@ -675,7 +664,7 @@ static inline void _Scheduler_SMP_Enqueue_to_
> scheduled(
> );
> _Scheduler_SMP_Node_change_state( node, SCHEDULER_SMP_NODE_SCHEDULED
> );
>
> - ( *insert_scheduled )( context, node );
> + ( *insert_scheduled )( context, node, priority );
> ( *move_from_scheduled_to_ready )( context, lowest_scheduled );
>
> _Scheduler_Exchange_idle_thread(
> @@ -696,6 +685,7 @@ static inline void _Scheduler_SMP_Enqueue_to_
> scheduled(
> *
> * @param[in] context The scheduler instance context.
> * @param[in] node The node to enqueue.
> + * @param[in] priority The node insert priority.
> * @param[in] order The order function.
> * @param[in] insert_ready Function to insert a node into the set of ready
> * nodes.
> @@ -710,9 +700,10 @@ static inline void _Scheduler_SMP_Enqueue_to_
> scheduled(
> * @param[in] allocate_processor Function to allocate a processor to a
> node
> * based on the rules of the scheduler.
> */
> -static inline bool _Scheduler_SMP_Enqueue_ordered(
> +static inline bool _Scheduler_SMP_Enqueue(
> Scheduler_Context *context,
> Scheduler_Node *node,
> + Priority_Control insert_priority,
> Chain_Node_order order,
> Scheduler_SMP_Insert insert_ready,
> Scheduler_SMP_Insert insert_scheduled,
> @@ -721,17 +712,16 @@ static inline bool _Scheduler_SMP_Enqueue_ordered(
> Scheduler_SMP_Allocate_processor allocate_processor
> )
> {
> - bool needs_help;
> - Scheduler_Node *lowest_scheduled;
> - Priority_Control node_priority;
> + bool needs_help;
> + Scheduler_Node *lowest_scheduled;
>
> lowest_scheduled = ( *get_lowest_scheduled )( context, node );
> - node_priority = _Scheduler_SMP_Node_priority( node );
>
> - if ( ( *order )( &node_priority, &lowest_scheduled->Node.Chain ) ) {
> + if ( ( *order )( &insert_priority, &lowest_scheduled->Node.Chain ) ) {
> _Scheduler_SMP_Enqueue_to_scheduled(
> context,
> node,
> + insert_priority,
> lowest_scheduled,
> insert_scheduled,
> move_from_scheduled_to_ready,
> @@ -739,7 +729,7 @@ static inline bool _Scheduler_SMP_Enqueue_ordered(
> );
> needs_help = false;
> } else {
> - ( *insert_ready )( context, node );
> + ( *insert_ready )( context, node, insert_priority );
> needs_help = true;
> }
>
> @@ -765,9 +755,10 @@ static inline bool _Scheduler_SMP_Enqueue_ordered(
> * @param[in] allocate_processor Function to allocate a processor to a
> node
> * based on the rules of the scheduler.
> */
> -static inline bool _Scheduler_SMP_Enqueue_scheduled_ordered(
> +static inline bool _Scheduler_SMP_Enqueue_scheduled(
> Scheduler_Context *context,
> - Scheduler_Node *node,
> + Scheduler_Node *const node,
> + Priority_Control insert_priority,
> Chain_Node_order order,
> Scheduler_SMP_Extract extract_from_ready,
> Scheduler_SMP_Get_highest_ready get_highest_ready,
> @@ -780,10 +771,8 @@ static inline bool _Scheduler_SMP_Enqueue_
> scheduled_ordered(
> while ( true ) {
> Scheduler_Node *highest_ready;
> Scheduler_Try_to_schedule_action action;
> - Priority_Control node_priority;
>
> highest_ready = ( *get_highest_ready )( context, node );
> - node_priority = _Scheduler_SMP_Node_priority( node );
>
> /*
> * The node has been extracted from the scheduled chain. We have to
> place
> @@ -791,9 +780,9 @@ static inline bool _Scheduler_SMP_Enqueue_
> scheduled_ordered(
> */
> if (
> node->sticky_level > 0
> - && ( *order )( &node_priority, &highest_ready->Node.Chain )
> + && ( *order )( &insert_priority, &highest_ready->Node.Chain )
> ) {
> - ( *insert_scheduled )( context, node );
> + ( *insert_scheduled )( context, node, insert_priority );
>
> if ( _Scheduler_Node_get_idle( node ) != NULL ) {
> Thread_Control *owner;
> @@ -839,7 +828,7 @@ static inline bool _Scheduler_SMP_Enqueue_
> scheduled_ordered(
> allocate_processor
> );
>
> - ( *insert_ready )( context, node );
> + ( *insert_ready )( context, node, insert_priority );
> ( *move_from_ready_to_scheduled )( context, highest_ready );
>
> idle = _Scheduler_Release_idle_thread(
> @@ -855,7 +844,7 @@ static inline bool _Scheduler_SMP_Enqueue_
> scheduled_ordered(
> SCHEDULER_SMP_NODE_SCHEDULED
> );
>
> - ( *insert_ready )( context, node );
> + ( *insert_ready )( context, node, insert_priority );
> ( *move_from_ready_to_scheduled )( context, highest_ready );
>
> _Scheduler_Exchange_idle_thread(
> @@ -1033,7 +1022,7 @@ static inline void _Scheduler_SMP_Unblock(
> Thread_Control *thread,
> Scheduler_Node *node,
> Scheduler_SMP_Update update,
> - Scheduler_SMP_Enqueue enqueue_fifo
> + Scheduler_SMP_Enqueue enqueue
> )
> {
> Scheduler_SMP_Node_state node_state;
> @@ -1049,21 +1038,22 @@ static inline void _Scheduler_SMP_Unblock(
> );
>
> if ( unblock ) {
> - Priority_Control new_priority;
> - bool prepend_it;
> + Priority_Control priority;
> bool needs_help;
>
> - new_priority = _Scheduler_Node_get_priority( node, &prepend_it );
> - (void) prepend_it;
> + priority = _Scheduler_Node_get_priority( node );
> + priority = SCHEDULER_PRIORITY_PURIFY( priority );
>
> - if ( new_priority != _Scheduler_SMP_Node_priority( node ) ) {
> - ( *update )( context, node, new_priority );
> + if ( priority != _Scheduler_SMP_Node_priority( node ) ) {
> + ( *update )( context, node, priority );
> }
>
> if ( node_state == SCHEDULER_SMP_NODE_BLOCKED ) {
> - _Scheduler_SMP_Node_change_state( node, SCHEDULER_SMP_NODE_READY );
> + Priority_Control insert_priority;
>
> - needs_help = ( *enqueue_fifo )( context, node );
> + _Scheduler_SMP_Node_change_state( node, SCHEDULER_SMP_NODE_READY );
> + insert_priority = SCHEDULER_PRIORITY_APPEND( priority );
> + needs_help = ( *enqueue )( context, node, insert_priority );
> } else {
> _Assert( node_state == SCHEDULER_SMP_NODE_READY );
> _Assert( node->sticky_level > 0 );
> @@ -1083,20 +1073,19 @@ static inline void _Scheduler_SMP_Update_priority(
> Scheduler_Node *node,
> Scheduler_SMP_Extract extract_from_ready,
> Scheduler_SMP_Update update,
> - Scheduler_SMP_Enqueue enqueue_fifo,
> - Scheduler_SMP_Enqueue enqueue_lifo,
> - Scheduler_SMP_Enqueue enqueue_scheduled_fifo,
> - Scheduler_SMP_Enqueue enqueue_scheduled_lifo,
> + Scheduler_SMP_Enqueue enqueue,
> + Scheduler_SMP_Enqueue enqueue_scheduled,
> Scheduler_SMP_Ask_for_help ask_for_help
> )
> {
> - Priority_Control new_priority;
> - bool prepend_it;
> + Priority_Control priority;
> + Priority_Control insert_priority;
> Scheduler_SMP_Node_state node_state;
>
> - new_priority = _Scheduler_Node_get_priority( node, &prepend_it );
> + insert_priority = _Scheduler_Node_get_priority( node );
> + priority = SCHEDULER_PRIORITY_PURIFY( insert_priority );
>
> - if ( new_priority == _Scheduler_SMP_Node_priority( node ) ) {
> + if ( priority == _Scheduler_SMP_Node_priority( node ) ) {
> if ( _Thread_Is_ready( thread ) ) {
> ( *ask_for_help )( context, thread, node );
> }
> @@ -1108,26 +1097,14 @@ static inline void _Scheduler_SMP_Update_priority(
>
> if ( node_state == SCHEDULER_SMP_NODE_SCHEDULED ) {
> _Scheduler_SMP_Extract_from_scheduled( node );
> -
> - ( *update )( context, node, new_priority );
> -
> - if ( prepend_it ) {
> - ( *enqueue_scheduled_lifo )( context, node );
> - } else {
> - ( *enqueue_scheduled_fifo )( context, node );
> - }
> + ( *update )( context, node, priority );
> + ( *enqueue_scheduled )( context, node, insert_priority );
> } else if ( node_state == SCHEDULER_SMP_NODE_READY ) {
> ( *extract_from_ready )( context, node );
> -
> - ( *update )( context, node, new_priority );
> -
> - if ( prepend_it ) {
> - ( *enqueue_lifo )( context, node );
> - } else {
> - ( *enqueue_fifo )( context, node );
> - }
> + ( *update )( context, node, priority );
> + ( *enqueue )( context, node, insert_priority );
> } else {
> - ( *update )( context, node, new_priority );
> + ( *update )( context, node, priority );
>
> if ( _Thread_Is_ready( thread ) ) {
> ( *ask_for_help )( context, thread, node );
> @@ -1140,23 +1117,26 @@ static inline void _Scheduler_SMP_Yield(
> Thread_Control *thread,
> Scheduler_Node *node,
> Scheduler_SMP_Extract extract_from_ready,
> - Scheduler_SMP_Enqueue enqueue_fifo,
> - Scheduler_SMP_Enqueue enqueue_scheduled_fifo
> + Scheduler_SMP_Enqueue enqueue,
> + Scheduler_SMP_Enqueue enqueue_scheduled
> )
> {
> bool needs_help;
> Scheduler_SMP_Node_state node_state;
> + Priority_Control insert_priority;
>
> node_state = _Scheduler_SMP_Node_state( node );
> + insert_priority = _Scheduler_SMP_Node_priority( node );
> + insert_priority = SCHEDULER_PRIORITY_APPEND( insert_priority );
>
> if ( node_state == SCHEDULER_SMP_NODE_SCHEDULED ) {
> _Scheduler_SMP_Extract_from_scheduled( node );
> - ( *enqueue_scheduled_fifo )( context, node );
> + ( *enqueue_scheduled )( context, node, insert_priority );
> needs_help = false;
> } else if ( node_state == SCHEDULER_SMP_NODE_READY ) {
> ( *extract_from_ready )( context, node );
>
> - needs_help = ( *enqueue_fifo )( context, node );
> + needs_help = ( *enqueue )( context, node, insert_priority );
> } else {
> needs_help = true;
> }
> @@ -1166,41 +1146,21 @@ static inline void _Scheduler_SMP_Yield(
> }
> }
>
> -static inline void _Scheduler_SMP_Insert_scheduled_lifo(
> - Scheduler_Context *context,
> - Scheduler_Node *node_to_insert
> -)
> -{
> - Scheduler_SMP_Context *self;
> - Priority_Control priority_to_insert;
> -
> - self = _Scheduler_SMP_Get_self( context );
> - priority_to_insert = _Scheduler_SMP_Node_priority( node_to_insert );
> -
> - _Chain_Insert_ordered_unprotected(
> - &self->Scheduled,
> - &node_to_insert->Node.Chain,
> - &priority_to_insert,
> - _Scheduler_SMP_Insert_priority_lifo_order
> - );
> -}
> -
> -static inline void _Scheduler_SMP_Insert_scheduled_fifo(
> +static inline void _Scheduler_SMP_Insert_scheduled(
> Scheduler_Context *context,
> - Scheduler_Node *node_to_insert
> + Scheduler_Node *node_to_insert,
> + Priority_Control priority_to_insert
> )
> {
> Scheduler_SMP_Context *self;
> - Priority_Control priority_to_insert;
>
> self = _Scheduler_SMP_Get_self( context );
> - priority_to_insert = _Scheduler_SMP_Node_priority( node_to_insert );
>
> _Chain_Insert_ordered_unprotected(
> &self->Scheduled,
> &node_to_insert->Node.Chain,
> &priority_to_insert,
> - _Scheduler_SMP_Insert_priority_fifo_order
> + _Scheduler_SMP_Priority_less_equal
> );
> }
>
> @@ -1230,11 +1190,11 @@ static inline bool _Scheduler_SMP_Ask_for_help(
> node_state = _Scheduler_SMP_Node_state( node );
>
> if ( node_state == SCHEDULER_SMP_NODE_BLOCKED ) {
> - Priority_Control node_priority;
> + Priority_Control insert_priority;
>
> - node_priority = _Scheduler_SMP_Node_priority( node );
> + insert_priority = _Scheduler_SMP_Node_priority( node );
>
> - if ( ( *order )( &node_priority, &lowest_scheduled->Node.Chain ) ) {
> + if ( ( *order )( &insert_priority, &lowest_scheduled->Node.Chain )
> ) {
> _Thread_Scheduler_cancel_need_for_help(
> thread,
> _Thread_Get_CPU( thread )
> @@ -1249,7 +1209,7 @@ static inline bool _Scheduler_SMP_Ask_for_help(
> allocate_processor
> );
>
> - ( *insert_scheduled )( context, node );
> + ( *insert_scheduled )( context, node, insert_priority );
> ( *move_from_scheduled_to_ready )( context, lowest_scheduled );
>
> _Scheduler_Release_idle_thread(
> @@ -1261,7 +1221,7 @@ static inline bool _Scheduler_SMP_Ask_for_help(
> } else {
> _Thread_Scheduler_release_critical( thread, &lock_context );
> _Scheduler_SMP_Node_change_state( node, SCHEDULER_SMP_NODE_READY
> );
> - ( *insert_ready )( context, node );
> + ( *insert_ready )( context, node, insert_priority );
> success = false;
> }
> } else if ( node_state == SCHEDULER_SMP_NODE_SCHEDULED ) {
> @@ -1384,7 +1344,7 @@ static inline void _Scheduler_SMP_Add_processor(
> Scheduler_Context *context,
> Thread_Control *idle,
> Scheduler_SMP_Has_ready has_ready,
> - Scheduler_SMP_Enqueue enqueue_scheduled_fifo,
> + Scheduler_SMP_Enqueue enqueue_scheduled,
> Scheduler_SMP_Register_idle register_idle
> )
> {
> @@ -1399,7 +1359,11 @@ static inline void _Scheduler_SMP_Add_processor(
> ( *register_idle )( context, node, _Thread_Get_CPU( idle ) );
>
> if ( ( *has_ready )( &self->Base ) ) {
> - ( *enqueue_scheduled_fifo )( &self->Base, node );
> + Priority_Control insert_priority;
> +
> + insert_priority = _Scheduler_SMP_Node_priority( node );
> + insert_priority = SCHEDULER_PRIORITY_APPEND( insert_priority );
> + ( *enqueue_scheduled )( &self->Base, node, insert_priority );
> } else {
> _Chain_Append_unprotected( &self->Scheduled, &node->Node.Chain );
> }
> @@ -1409,7 +1373,7 @@ static inline Thread_Control *_Scheduler_SMP_Remove_
> processor(
> Scheduler_Context *context,
> Per_CPU_Control *cpu,
> Scheduler_SMP_Extract extract_from_ready,
> - Scheduler_SMP_Enqueue enqueue_fifo
> + Scheduler_SMP_Enqueue enqueue
> )
> {
> Scheduler_SMP_Context *self;
> @@ -1451,7 +1415,11 @@ static inline Thread_Control *_Scheduler_SMP_Remove_
> processor(
> );
>
> if ( !_Chain_Is_empty( &self->Scheduled ) ) {
> - ( *enqueue_fifo )( context, victim_node );
> + Priority_Control insert_priority;
> +
> + insert_priority = _Scheduler_SMP_Node_priority( victim_node );
> + insert_priority = SCHEDULER_PRIORITY_APPEND( insert_priority );
> + ( *enqueue )( context, victim_node, insert_priority );
> }
> } else {
> _Assert( victim_owner == victim_user );
> @@ -1472,13 +1440,16 @@ static inline void _Scheduler_SMP_Set_affinity(
> Scheduler_SMP_Extract extract_from_ready,
> Scheduler_SMP_Get_highest_ready get_highest_ready,
> Scheduler_SMP_Move move_from_ready_to_scheduled,
> - Scheduler_SMP_Enqueue enqueue_fifo,
> + Scheduler_SMP_Enqueue enqueue,
> Scheduler_SMP_Allocate_processor allocate_processor
> )
> {
> Scheduler_SMP_Node_state node_state;
> + Priority_Control insert_priority;
>
> node_state = _Scheduler_SMP_Node_state( node );
> + insert_priority = _Scheduler_SMP_Node_priority( node );
> + insert_priority = SCHEDULER_PRIORITY_APPEND( insert_priority );
>
> if ( node_state == SCHEDULER_SMP_NODE_SCHEDULED ) {
> _Scheduler_SMP_Extract_from_scheduled( node );
> @@ -1492,11 +1463,11 @@ static inline void _Scheduler_SMP_Set_affinity(
> allocate_processor
> );
> ( *set_affinity )( context, node, arg );
> - ( *enqueue_fifo )( context, node );
> + ( *enqueue )( context, node, insert_priority );
> } else if ( node_state == SCHEDULER_SMP_NODE_READY ) {
> ( *extract_from_ready )( context, node );
> ( *set_affinity )( context, node, arg );
> - ( *enqueue_fifo )( context, node );
> + ( *enqueue )( context, node, insert_priority );
> } else {
> ( *set_affinity )( context, node, arg );
> }
> diff --git a/cpukit/score/src/schedulercbsunblock.c b/cpukit/score/src/
> schedulercbsunblock.c
> index 403435eeb1..9b7a0ca424 100644
> --- a/cpukit/score/src/schedulercbsunblock.c
> +++ b/cpukit/score/src/schedulercbsunblock.c
> @@ -34,12 +34,11 @@ void _Scheduler_CBS_Unblock(
> Scheduler_CBS_Node *the_node;
> Scheduler_CBS_Server *serv_info;
> Priority_Control priority;
> - bool prepend_it;
>
> the_node = _Scheduler_CBS_Node_downcast( node );
> serv_info = the_node->cbs_server;
> - priority = _Scheduler_Node_get_priority( &the_node->Base.Base,
> &prepend_it );
> - (void) prepend_it;
> + priority = _Scheduler_Node_get_priority( &the_node->Base.Base );
> + priority = SCHEDULER_PRIORITY_PURIFY( priority );
>
> /*
> * Late unblock rule for deadline-driven tasks. The remaining time to
> diff --git a/cpukit/score/src/schedulerdefaultmappriority.c
> b/cpukit/score/src/schedulerdefaultmappriority.c
> index 37a600011e..228549f20d 100644
> --- a/cpukit/score/src/schedulerdefaultmappriority.c
> +++ b/cpukit/score/src/schedulerdefaultmappriority.c
> @@ -1,5 +1,5 @@
> /*
> - * Copyright (c) 2016 embedded brains GmbH
> + * Copyright (c) 2016, 2017 embedded brains GmbH
> *
> * The license and distribution terms for this file may be
> * found in the file LICENSE in this distribution or at
> @@ -10,12 +10,20 @@
> #include "config.h"
> #endif
>
> -#include <rtems/score/scheduler.h>
> +#include <rtems/score/schedulerimpl.h>
>
> Priority_Control _Scheduler_default_Map_priority(
> const Scheduler_Control *scheduler,
> Priority_Control priority
> )
> {
> - return priority;
> + return SCHEDULER_PRIORITY_MAP( priority );
> +}
> +
> +Priority_Control _Scheduler_default_Unmap_priority(
> + const Scheduler_Control *scheduler,
> + Priority_Control priority
> +)
> +{
> + return SCHEDULER_PRIORITY_UNMAP( priority );
> }
> diff --git a/cpukit/score/src/scheduleredfchangepriority.c
> b/cpukit/score/src/scheduleredfchangepriority.c
> index 23382973cc..d3d1f94cbf 100644
> --- a/cpukit/score/src/scheduleredfchangepriority.c
> +++ b/cpukit/score/src/scheduleredfchangepriority.c
> @@ -29,7 +29,7 @@ void _Scheduler_EDF_Update_priority(
> Scheduler_EDF_Context *context;
> Scheduler_EDF_Node *the_node;
> Priority_Control priority;
> - bool prepend_it;
> + Priority_Control insert_priority;
>
> if ( !_Thread_Is_ready( the_thread ) ) {
> /* Nothing to do */
> @@ -37,7 +37,8 @@ void _Scheduler_EDF_Update_priority(
> }
>
> the_node = _Scheduler_EDF_Node_downcast( node );
> - priority = _Scheduler_Node_get_priority( &the_node->Base, &prepend_it );
> + insert_priority = _Scheduler_Node_get_priority( &the_node->Base );
> + priority = SCHEDULER_PRIORITY_PURIFY( insert_priority );
>
> if ( priority == the_node->priority ) {
> /* Nothing to do */
> @@ -48,12 +49,6 @@ void _Scheduler_EDF_Update_priority(
> context = _Scheduler_EDF_Get_context( scheduler );
>
> _Scheduler_EDF_Extract( context, the_node );
> -
> - if ( prepend_it ) {
> - _Scheduler_EDF_Enqueue_first( context, the_node, priority );
> - } else {
> - _Scheduler_EDF_Enqueue( context, the_node, priority );
> - }
> -
> + _Scheduler_EDF_Enqueue( context, the_node, insert_priority );
> _Scheduler_EDF_Schedule_body( scheduler, the_thread, false );
> }
> diff --git a/cpukit/score/src/scheduleredfreleasejob.c b/cpukit/score/src/
> scheduleredfreleasejob.c
> index 068a0db7a3..9c30821e9e 100644
> --- a/cpukit/score/src/scheduleredfreleasejob.c
> +++ b/cpukit/score/src/scheduleredfreleasejob.c
> @@ -25,7 +25,7 @@ Priority_Control _Scheduler_EDF_Map_priority(
> Priority_Control priority
> )
> {
> - return SCHEDULER_EDF_PRIO_MSB | priority;
> + return SCHEDULER_EDF_PRIO_MSB | SCHEDULER_PRIORITY_MAP( priority );
> }
>
> Priority_Control _Scheduler_EDF_Unmap_priority(
> @@ -33,7 +33,7 @@ Priority_Control _Scheduler_EDF_Unmap_priority(
> Priority_Control priority
> )
> {
> - return priority & ~SCHEDULER_EDF_PRIO_MSB;
> + return SCHEDULER_PRIORITY_UNMAP( priority & ~SCHEDULER_EDF_PRIO_MSB );
> }
>
> void _Scheduler_EDF_Release_job(
> diff --git a/cpukit/score/src/scheduleredfsmp.c b/cpukit/score/src/
> scheduleredfsmp.c
> index badee44e2e..102a33d4f7 100644
> --- a/cpukit/score/src/scheduleredfsmp.c
> +++ b/cpukit/score/src/scheduleredfsmp.c
> @@ -39,26 +39,7 @@ _Scheduler_EDF_SMP_Node_downcast( Scheduler_Node *node
> )
> return (Scheduler_EDF_SMP_Node *) node;
> }
>
> -static inline bool _Scheduler_EDF_SMP_Less(
> - const void *left,
> - const RBTree_Node *right
> -)
> -{
> - const Priority_Control *the_left;
> - const Scheduler_SMP_Node *the_right;
> - Priority_Control prio_left;
> - Priority_Control prio_right;
> -
> - the_left = left;
> - the_right = RTEMS_CONTAINER_OF( right, Scheduler_SMP_Node,
> Base.Node.RBTree );
> -
> - prio_left = *the_left;
> - prio_right = the_right->priority;
> -
> - return prio_left < prio_right;
> -}
> -
> -static inline bool _Scheduler_EDF_SMP_Less_or_equal(
> +static inline bool _Scheduler_EDF_SMP_Priority_less_equal(
> const void *left,
> const RBTree_Node *right
> )
> @@ -254,20 +235,21 @@ static inline Scheduler_Node *_Scheduler_EDF_SMP_Get_
> lowest_scheduled(
> static inline void _Scheduler_EDF_SMP_Insert_ready(
> Scheduler_Context *context,
> Scheduler_Node *node_base,
> - int generation_index,
> - bool ( *less )( const void *, const RBTree_Node * )
> + Priority_Control insert_priority
> )
> {
> Scheduler_EDF_SMP_Context *self;
> Scheduler_EDF_SMP_Node *node;
> uint32_t rqi;
> Scheduler_EDF_SMP_Ready_queue *ready_queue;
> + int generation_index;
> int increment;
> int64_t generation;
>
> self = _Scheduler_EDF_SMP_Get_self( context );
> node = _Scheduler_EDF_SMP_Node_downcast( node_base );
> rqi = node->ready_queue_index;
> + generation_index = SCHEDULER_PRIORITY_IS_APPEND( insert_priority );
> increment = ( generation_index << 1 ) - 1;
> ready_queue = &self->Ready[ rqi ];
>
> @@ -279,8 +261,8 @@ static inline void _Scheduler_EDF_SMP_Insert_ready(
> _RBTree_Insert_inline(
> &ready_queue->Queue,
> &node->Base.Base.Node.RBTree,
> - &node->Base.priority,
> - less
> + &insert_priority,
> + _Scheduler_EDF_SMP_Priority_less_equal
> );
>
> if ( rqi != 0 && _Chain_Is_node_off_chain( &ready_queue->Node ) ) {
> @@ -327,12 +309,14 @@ static inline void _Scheduler_EDF_SMP_Move_from_
> scheduled_to_ready(
> Scheduler_Node *scheduled_to_ready
> )
> {
> + Priority_Control insert_priority;
> +
> _Chain_Extract_unprotected( &scheduled_to_ready->Node.Chain );
> + insert_priority = _Scheduler_SMP_Node_priority( scheduled_to_ready );
> _Scheduler_EDF_SMP_Insert_ready(
> context,
> scheduled_to_ready,
> - 1,
> - _Scheduler_EDF_SMP_Less
> + insert_priority
> );
> }
>
> @@ -341,33 +325,15 @@ static inline void _Scheduler_EDF_SMP_Move_from_
> ready_to_scheduled(
> Scheduler_Node *ready_to_scheduled
> )
> {
> - _Scheduler_EDF_SMP_Extract_from_ready( context, ready_to_scheduled );
> - _Scheduler_SMP_Insert_scheduled_fifo( context, ready_to_scheduled );
> -}
> + Priority_Control insert_priority;
>
> -static inline void _Scheduler_EDF_SMP_Insert_ready_lifo(
> - Scheduler_Context *context,
> - Scheduler_Node *node_to_insert
> -)
> -{
> - _Scheduler_EDF_SMP_Insert_ready(
> - context,
> - node_to_insert,
> - 0,
> - _Scheduler_EDF_SMP_Less_or_equal
> - );
> -}
> -
> -static inline void _Scheduler_EDF_SMP_Insert_ready_fifo(
> - Scheduler_Context *context,
> - Scheduler_Node *node_to_insert
> -)
> -{
> - _Scheduler_EDF_SMP_Insert_ready(
> + _Scheduler_EDF_SMP_Extract_from_ready( context, ready_to_scheduled );
> + insert_priority = _Scheduler_SMP_Node_priority( ready_to_scheduled );
> + insert_priority = SCHEDULER_PRIORITY_APPEND( insert_priority );
> + _Scheduler_SMP_Insert_scheduled(
> context,
> - node_to_insert,
> - 1,
> - _Scheduler_EDF_SMP_Less
> + ready_to_scheduled,
> + insert_priority
> );
> }
>
> @@ -444,103 +410,45 @@ void _Scheduler_EDF_SMP_Block(
> );
> }
>
> -static inline bool _Scheduler_EDF_SMP_Enqueue_ordered(
> - Scheduler_Context *context,
> - Scheduler_Node *node,
> - Chain_Node_order order,
> - Scheduler_SMP_Insert insert_ready,
> - Scheduler_SMP_Insert insert_scheduled
> +static inline bool _Scheduler_EDF_SMP_Enqueue(
> + Scheduler_Context *context,
> + Scheduler_Node *node,
> + Priority_Control insert_priority
> )
> {
> - return _Scheduler_SMP_Enqueue_ordered(
> + return _Scheduler_SMP_Enqueue(
> context,
> node,
> - order,
> - insert_ready,
> - insert_scheduled,
> + insert_priority,
> + _Scheduler_SMP_Priority_less_equal,
> + _Scheduler_EDF_SMP_Insert_ready,
> + _Scheduler_SMP_Insert_scheduled,
> _Scheduler_EDF_SMP_Move_from_scheduled_to_ready,
> _Scheduler_EDF_SMP_Get_lowest_scheduled,
> _Scheduler_EDF_SMP_Allocate_processor
> );
> }
>
> -static inline bool _Scheduler_EDF_SMP_Enqueue_lifo(
> +static inline bool _Scheduler_EDF_SMP_Enqueue_scheduled(
> Scheduler_Context *context,
> - Scheduler_Node *node
> -)
> -{
> - return _Scheduler_EDF_SMP_Enqueue_ordered(
> - context,
> - node,
> - _Scheduler_SMP_Insert_priority_lifo_order,
> - _Scheduler_EDF_SMP_Insert_ready_lifo,
> - _Scheduler_SMP_Insert_scheduled_lifo
> - );
> -}
> -
> -static inline bool _Scheduler_EDF_SMP_Enqueue_fifo(
> - Scheduler_Context *context,
> - Scheduler_Node *node
> -)
> -{
> - return _Scheduler_EDF_SMP_Enqueue_ordered(
> - context,
> - node,
> - _Scheduler_SMP_Insert_priority_fifo_order,
> - _Scheduler_EDF_SMP_Insert_ready_fifo,
> - _Scheduler_SMP_Insert_scheduled_fifo
> - );
> -}
> -
> -static inline bool _Scheduler_EDF_SMP_Enqueue_scheduled_ordered(
> - Scheduler_Context *context,
> - Scheduler_Node *node,
> - Chain_Node_order order,
> - Scheduler_SMP_Insert insert_ready,
> - Scheduler_SMP_Insert insert_scheduled
> + Scheduler_Node *node,
> + Priority_Control insert_priority
> )
> {
> - return _Scheduler_SMP_Enqueue_scheduled_ordered(
> + return _Scheduler_SMP_Enqueue_scheduled(
> context,
> node,
> - order,
> + insert_priority,
> + _Scheduler_SMP_Priority_less_equal,
> _Scheduler_EDF_SMP_Extract_from_ready,
> _Scheduler_EDF_SMP_Get_highest_ready,
> - insert_ready,
> - insert_scheduled,
> + _Scheduler_EDF_SMP_Insert_ready,
> + _Scheduler_SMP_Insert_scheduled,
> _Scheduler_EDF_SMP_Move_from_ready_to_scheduled,
> _Scheduler_EDF_SMP_Allocate_processor
> );
> }
>
> -static inline bool _Scheduler_EDF_SMP_Enqueue_scheduled_lifo(
> - Scheduler_Context *context,
> - Scheduler_Node *node
> -)
> -{
> - return _Scheduler_EDF_SMP_Enqueue_scheduled_ordered(
> - context,
> - node,
> - _Scheduler_SMP_Insert_priority_lifo_order,
> - _Scheduler_EDF_SMP_Insert_ready_lifo,
> - _Scheduler_SMP_Insert_scheduled_lifo
> - );
> -}
> -
> -static inline bool _Scheduler_EDF_SMP_Enqueue_scheduled_fifo(
> - Scheduler_Context *context,
> - Scheduler_Node *node
> -)
> -{
> - return _Scheduler_EDF_SMP_Enqueue_scheduled_ordered(
> - context,
> - node,
> - _Scheduler_SMP_Insert_priority_fifo_order,
> - _Scheduler_EDF_SMP_Insert_ready_fifo,
> - _Scheduler_SMP_Insert_scheduled_fifo
> - );
> -}
> -
> void _Scheduler_EDF_SMP_Unblock(
> const Scheduler_Control *scheduler,
> Thread_Control *thread,
> @@ -554,7 +462,7 @@ void _Scheduler_EDF_SMP_Unblock(
> thread,
> node,
> _Scheduler_EDF_SMP_Do_update,
> - _Scheduler_EDF_SMP_Enqueue_fifo
> + _Scheduler_EDF_SMP_Enqueue
> );
> }
>
> @@ -568,9 +476,9 @@ static inline bool _Scheduler_EDF_SMP_Do_ask_for_help(
> context,
> the_thread,
> node,
> - _Scheduler_SMP_Insert_priority_lifo_order,
> - _Scheduler_EDF_SMP_Insert_ready_lifo,
> - _Scheduler_SMP_Insert_scheduled_lifo,
> + _Scheduler_SMP_Priority_less_equal,
> + _Scheduler_EDF_SMP_Insert_ready,
> + _Scheduler_SMP_Insert_scheduled,
> _Scheduler_EDF_SMP_Move_from_scheduled_to_ready,
> _Scheduler_EDF_SMP_Get_lowest_scheduled,
> _Scheduler_EDF_SMP_Allocate_processor
> @@ -591,10 +499,8 @@ void _Scheduler_EDF_SMP_Update_priority(
> node,
> _Scheduler_EDF_SMP_Extract_from_ready,
> _Scheduler_EDF_SMP_Do_update,
> - _Scheduler_EDF_SMP_Enqueue_fifo,
> - _Scheduler_EDF_SMP_Enqueue_lifo,
> - _Scheduler_EDF_SMP_Enqueue_scheduled_fifo,
> - _Scheduler_EDF_SMP_Enqueue_scheduled_lifo,
> + _Scheduler_EDF_SMP_Enqueue,
> + _Scheduler_EDF_SMP_Enqueue_scheduled,
> _Scheduler_EDF_SMP_Do_ask_for_help
> );
> }
> @@ -672,7 +578,7 @@ void _Scheduler_EDF_SMP_Add_processor(
> context,
> idle,
> _Scheduler_EDF_SMP_Has_ready,
> - _Scheduler_EDF_SMP_Enqueue_scheduled_fifo,
> + _Scheduler_EDF_SMP_Enqueue_scheduled,
> _Scheduler_EDF_SMP_Register_idle
> );
> }
> @@ -688,7 +594,7 @@ Thread_Control *_Scheduler_EDF_SMP_Remove_processor(
> context,
> cpu,
> _Scheduler_EDF_SMP_Extract_from_ready,
> - _Scheduler_EDF_SMP_Enqueue_fifo
> + _Scheduler_EDF_SMP_Enqueue
> );
> }
>
> @@ -705,8 +611,8 @@ void _Scheduler_EDF_SMP_Yield(
> thread,
> node,
> _Scheduler_EDF_SMP_Extract_from_ready,
> - _Scheduler_EDF_SMP_Enqueue_fifo,
> - _Scheduler_EDF_SMP_Enqueue_scheduled_fifo
> + _Scheduler_EDF_SMP_Enqueue,
> + _Scheduler_EDF_SMP_Enqueue_scheduled
> );
> }
>
> @@ -777,7 +683,7 @@ bool _Scheduler_EDF_SMP_Set_affinity(
> _Scheduler_EDF_SMP_Extract_from_ready,
> _Scheduler_EDF_SMP_Get_highest_ready,
> _Scheduler_EDF_SMP_Move_from_ready_to_scheduled,
> - _Scheduler_EDF_SMP_Enqueue_fifo,
> + _Scheduler_EDF_SMP_Enqueue,
> _Scheduler_EDF_SMP_Allocate_processor
> );
>
> diff --git a/cpukit/score/src/scheduleredfunblock.c b/cpukit/score/src/
> scheduleredfunblock.c
> index 29355d04fa..91295f511c 100644
> --- a/cpukit/score/src/scheduleredfunblock.c
> +++ b/cpukit/score/src/scheduleredfunblock.c
> @@ -31,15 +31,16 @@ void _Scheduler_EDF_Unblock(
> Scheduler_EDF_Context *context;
> Scheduler_EDF_Node *the_node;
> Priority_Control priority;
> - bool prepend_it;
> + Priority_Control insert_priority;
>
> context = _Scheduler_EDF_Get_context( scheduler );
> the_node = _Scheduler_EDF_Node_downcast( node );
> - priority = _Scheduler_Node_get_priority( &the_node->Base, &prepend_it );
> - (void) prepend_it;
> + priority = _Scheduler_Node_get_priority( &the_node->Base );
> + priority = SCHEDULER_PRIORITY_PURIFY( priority );
> + insert_priority = SCHEDULER_PRIORITY_APPEND( priority );
>
> the_node->priority = priority;
> - _Scheduler_EDF_Enqueue( context, the_node, priority );
> + _Scheduler_EDF_Enqueue( context, the_node, insert_priority );
>
> /*
> * If the thread that was unblocked is more important than the heir,
> diff --git a/cpukit/score/src/schedulerpriority.c b/cpukit/score/src/
> schedulerpriority.c
> index ddfd973e0a..5ac16a49a1 100644
> --- a/cpukit/score/src/schedulerpriority.c
> +++ b/cpukit/score/src/schedulerpriority.c
> @@ -19,7 +19,6 @@
> #endif
>
> #include <rtems/score/schedulerpriorityimpl.h>
> -#include <rtems/score/wkspace.h>
>
> void _Scheduler_priority_Initialize( const Scheduler_Control *scheduler )
> {
> @@ -49,7 +48,7 @@ void _Scheduler_priority_Node_initialize(
> the_node = _Scheduler_priority_Node_downcast( node );
> _Scheduler_priority_Ready_queue_update(
> &the_node->Ready_queue,
> - priority,
> + SCHEDULER_PRIORITY_UNMAP( priority ),
> &context->Bit_map,
> &context->Ready[ 0 ]
> );
> diff --git a/cpukit/score/src/schedulerpriorityaffinitysmp.c
> b/cpukit/score/src/schedulerpriorityaffinitysmp.c
> index 72b4ffb600..4808c84c3f 100644
> --- a/cpukit/score/src/schedulerpriorityaffinitysmp.c
> +++ b/cpukit/score/src/schedulerpriorityaffinitysmp.c
> @@ -39,22 +39,13 @@
> * + _Scheduler_priority_SMP_Do_update
> */
>
> -static bool _Scheduler_priority_affinity_SMP_Insert_priority_lifo_order(
> +static bool _Scheduler_priority_affinity_SMP_Priority_less_equal(
> const void *to_insert,
> const Chain_Node *next
> )
> {
> return next != NULL
> - && _Scheduler_SMP_Insert_priority_lifo_order( to_insert, next );
> -}
> -
> -static bool _Scheduler_priority_affinity_SMP_Insert_priority_fifo_order(
> - const void *to_insert,
> - const Chain_Node *next
> -)
> -{
> - return next != NULL
> - && _Scheduler_SMP_Insert_priority_fifo_order( to_insert, next );
> + && _Scheduler_SMP_Priority_less_equal( to_insert, next );
> }
>
> static Scheduler_priority_affinity_SMP_Node *
> @@ -242,19 +233,21 @@ static Scheduler_Node * _Scheduler_priority_affinity_
> SMP_Get_lowest_scheduled(
> /*
> * This method is unique to this scheduler because it must pass
> * _Scheduler_priority_affinity_SMP_Get_lowest_scheduled into
> - * _Scheduler_SMP_Enqueue_ordered.
> + * _Scheduler_SMP_Enqueue.
> */
> static bool _Scheduler_priority_affinity_SMP_Enqueue_fifo(
> Scheduler_Context *context,
> - Scheduler_Node *node
> + Scheduler_Node *node,
> + Priority_Control insert_priority
> )
> {
> - return _Scheduler_SMP_Enqueue_ordered(
> + return _Scheduler_SMP_Enqueue(
> context,
> node,
> - _Scheduler_priority_affinity_SMP_Insert_priority_fifo_order,
> - _Scheduler_priority_SMP_Insert_ready_fifo,
> - _Scheduler_SMP_Insert_scheduled_fifo,
> + insert_priority,
> + _Scheduler_priority_affinity_SMP_Priority_less_equal,
> + _Scheduler_priority_SMP_Insert_ready,
> + _Scheduler_SMP_Insert_scheduled,
> _Scheduler_priority_SMP_Move_from_scheduled_to_ready,
> _Scheduler_priority_affinity_SMP_Get_lowest_scheduled,
> _Scheduler_SMP_Allocate_processor_exact
> @@ -280,6 +273,7 @@ static void _Scheduler_priority_affinity_
> SMP_Check_for_migrations(
>
> while (1) {
> Priority_Control lowest_scheduled_priority;
> + Priority_Control insert_priority;
>
> if ( _Priority_bit_map_Is_empty( &self->Bit_map ) ) {
> /* Nothing to do */
> @@ -312,7 +306,7 @@ static void _Scheduler_priority_affinity_
> SMP_Check_for_migrations(
> _Scheduler_SMP_Node_priority( lowest_scheduled );
>
> if (
> - _Scheduler_SMP_Insert_priority_lifo_order(
> + _Scheduler_SMP_Priority_less_equal(
> &lowest_scheduled_priority,
> &highest_ready->Node.Chain
> )
> @@ -326,11 +320,14 @@ static void _Scheduler_priority_affinity_
> SMP_Check_for_migrations(
> */
>
> _Scheduler_priority_SMP_Extract_from_ready( context, highest_ready );
> + insert_priority = _Scheduler_SMP_Node_priority( highest_ready );
> + insert_priority = SCHEDULER_PRIORITY_APPEND( insert_priority );
> _Scheduler_SMP_Enqueue_to_scheduled(
> context,
> highest_ready,
> + insert_priority,
> lowest_scheduled,
> - _Scheduler_SMP_Insert_scheduled_fifo,
> + _Scheduler_SMP_Insert_scheduled,
> _Scheduler_priority_SMP_Move_from_scheduled_to_ready,
> _Scheduler_SMP_Allocate_processor_exact
> );
> @@ -364,22 +361,21 @@ void _Scheduler_priority_affinity_SMP_Unblock(
>
> /*
> * This is unique to this scheduler because it passes scheduler specific
> - * get_lowest_scheduled helper to _Scheduler_SMP_Enqueue_ordered.
> + * get_lowest_scheduled helper to _Scheduler_SMP_Enqueue.
> */
> -static bool _Scheduler_priority_affinity_SMP_Enqueue_ordered(
> - Scheduler_Context *context,
> - Scheduler_Node *node,
> - Chain_Node_order order,
> - Scheduler_SMP_Insert insert_ready,
> - Scheduler_SMP_Insert insert_scheduled
> +static bool _Scheduler_priority_affinity_SMP_Enqueue(
> + Scheduler_Context *context,
> + Scheduler_Node *node,
> + Priority_Control insert_priority
> )
> {
> - return _Scheduler_SMP_Enqueue_ordered(
> + return _Scheduler_SMP_Enqueue(
> context,
> node,
> - order,
> - insert_ready,
> - insert_scheduled,
> + insert_priority,
> + _Scheduler_priority_affinity_SMP_Priority_less_equal,
> + _Scheduler_priority_SMP_Insert_ready,
> + _Scheduler_SMP_Insert_scheduled,
> _Scheduler_priority_SMP_Move_from_scheduled_to_ready,
> _Scheduler_priority_affinity_SMP_Get_lowest_scheduled,
> _Scheduler_SMP_Allocate_processor_exact
> @@ -387,88 +383,30 @@ static bool _Scheduler_priority_affinity_
> SMP_Enqueue_ordered(
> }
>
> /*
> - * This is unique to this scheduler because it is on the path
> - * to _Scheduler_priority_affinity_SMP_Enqueue_ordered() which
> - * invokes a scheduler unique get_lowest_scheduled helper.
> - */
> -static bool _Scheduler_priority_affinity_SMP_Enqueue_lifo(
> - Scheduler_Context *context,
> - Scheduler_Node *node
> -)
> -{
> - return _Scheduler_priority_affinity_SMP_Enqueue_ordered(
> - context,
> - node,
> - _Scheduler_priority_affinity_SMP_Insert_priority_lifo_order,
> - _Scheduler_priority_SMP_Insert_ready_lifo,
> - _Scheduler_SMP_Insert_scheduled_lifo
> - );
> -}
> -
> -/*
> * This method is unique to this scheduler because it must
> - * invoke _Scheduler_SMP_Enqueue_scheduled_ordered() with
> + * invoke _Scheduler_SMP_Enqueue_scheduled() with
> * this scheduler's get_highest_ready() helper.
> */
> -static bool _Scheduler_priority_affinity_SMP_Enqueue_scheduled_ordered(
> - Scheduler_Context *context,
> - Scheduler_Node *node,
> - Chain_Node_order order,
> - Scheduler_SMP_Insert insert_ready,
> - Scheduler_SMP_Insert insert_scheduled
> +static bool _Scheduler_priority_affinity_SMP_Enqueue_scheduled(
> + Scheduler_Context *context,
> + Scheduler_Node *node,
> + Priority_Control insert_priority
> )
> {
> - return _Scheduler_SMP_Enqueue_scheduled_ordered(
> + return _Scheduler_SMP_Enqueue_scheduled(
> context,
> node,
> - order,
> + insert_priority,
> + _Scheduler_SMP_Priority_less_equal,
> _Scheduler_priority_SMP_Extract_from_ready,
> _Scheduler_priority_affinity_SMP_Get_highest_ready,
> - insert_ready,
> - insert_scheduled,
> + _Scheduler_priority_SMP_Insert_ready,
> + _Scheduler_SMP_Insert_scheduled,
> _Scheduler_priority_SMP_Move_from_ready_to_scheduled,
> _Scheduler_SMP_Allocate_processor_exact
> );
> }
>
> -/*
> - * This is unique to this scheduler because it is on the path
> - * to _Scheduler_priority_affinity_SMP_Enqueue_scheduled__ordered() which
> - * invokes a scheduler unique get_lowest_scheduled helper.
> - */
> -static bool _Scheduler_priority_affinity_SMP_Enqueue_scheduled_lifo(
> - Scheduler_Context *context,
> - Scheduler_Node *node
> -)
> -{
> - return _Scheduler_priority_affinity_SMP_Enqueue_scheduled_ordered(
> - context,
> - node,
> - _Scheduler_SMP_Insert_priority_lifo_order,
> - _Scheduler_priority_SMP_Insert_ready_lifo,
> - _Scheduler_SMP_Insert_scheduled_lifo
> - );
> -}
> -
> -/*
> - * This is unique to this scheduler because it is on the path
> - * to _Scheduler_priority_affinity_SMP_Enqueue_scheduled__ordered() which
> - * invokes a scheduler unique get_lowest_scheduled helper.
> - */
> -static bool _Scheduler_priority_affinity_SMP_Enqueue_scheduled_fifo(
> - Scheduler_Context *context,
> - Scheduler_Node *node
> -)
> -{
> - return _Scheduler_priority_affinity_SMP_Enqueue_scheduled_ordered(
> - context,
> - node,
> - _Scheduler_SMP_Insert_priority_fifo_order,
> - _Scheduler_priority_SMP_Insert_ready_fifo,
> - _Scheduler_SMP_Insert_scheduled_fifo
> - );
> -}
> -
> static bool _Scheduler_priority_affinity_SMP_Do_ask_for_help(
> Scheduler_Context *context,
> Thread_Control *the_thread,
> @@ -479,9 +417,9 @@ static bool _Scheduler_priority_affinity_
> SMP_Do_ask_for_help(
> context,
> the_thread,
> node,
> - _Scheduler_SMP_Insert_priority_lifo_order,
> - _Scheduler_priority_SMP_Insert_ready_lifo,
> - _Scheduler_SMP_Insert_scheduled_lifo,
> + _Scheduler_SMP_Priority_less_equal,
> + _Scheduler_priority_SMP_Insert_ready,
> + _Scheduler_SMP_Insert_scheduled,
> _Scheduler_priority_SMP_Move_from_scheduled_to_ready,
> _Scheduler_SMP_Get_lowest_scheduled,
> _Scheduler_SMP_Allocate_processor_lazy
> @@ -502,10 +440,8 @@ void _Scheduler_priority_affinity_
> SMP_Update_priority(
> node,
> _Scheduler_priority_SMP_Extract_from_ready,
> _Scheduler_priority_SMP_Do_update,
> - _Scheduler_priority_affinity_SMP_Enqueue_fifo,
> - _Scheduler_priority_affinity_SMP_Enqueue_lifo,
> - _Scheduler_priority_affinity_SMP_Enqueue_scheduled_fifo,
> - _Scheduler_priority_affinity_SMP_Enqueue_scheduled_lifo,
> + _Scheduler_priority_affinity_SMP_Enqueue,
> + _Scheduler_priority_affinity_SMP_Enqueue_scheduled,
> _Scheduler_priority_affinity_SMP_Do_ask_for_help
> );
>
> @@ -574,7 +510,7 @@ void _Scheduler_priority_affinity_SMP_Add_processor(
> context,
> idle,
> _Scheduler_priority_SMP_Has_ready,
> - _Scheduler_priority_affinity_SMP_Enqueue_scheduled_fifo,
> + _Scheduler_priority_affinity_SMP_Enqueue_scheduled,
> _Scheduler_SMP_Do_nothing_register_idle
> );
> }
> @@ -590,7 +526,7 @@ Thread_Control *_Scheduler_priority_affinity_
> SMP_Remove_processor(
> context,
> cpu,
> _Scheduler_priority_SMP_Extract_from_ready,
> - _Scheduler_priority_affinity_SMP_Enqueue_fifo
> + _Scheduler_priority_affinity_SMP_Enqueue
> );
> }
>
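
Replacing the separate _lifo/_fifo insert orders with the single
_Scheduler_SMP_Priority_less_equal() works because the append bit
already breaks ties within a priority group: an appended key compares
greater than the stored keys of equal-priority nodes, while a plain key
compares less than or equal. A small self-contained illustration
(hypothetical code, assuming the bit encoding sketched above; it only
mimics the ordered chain insert with an array):

  #include <stddef.h>
  #include <stdint.h>
  #include <stdio.h>

  #define MAP( p )    ( ( p ) << 1 )            /* assumed mapping */
  #define APPEND( k ) ( ( k ) | UINT64_C( 1 ) ) /* assumed append flag */

  /* Insert before the first node whose key is >= the new key, i.e. the
   * same test as the less-equal order used by the chain insert. */
  static size_t insert_index( const uint64_t *keys, size_t n, uint64_t key )
  {
    size_t i;

    for ( i = 0; i < n; ++i ) {
      if ( key <= keys[ i ] ) {
        break;
      }
    }

    return i;
  }

  int main( void )
  {
    /* Ready chain keys of three nodes with priorities 1, 2 and 2. */
    uint64_t ready[] = {
      MAP( UINT64_C( 1 ) ), MAP( UINT64_C( 2 ) ), MAP( UINT64_C( 2 ) )
    };

    /* Plain mapped key: in front of its priority group (prepend). */
    printf( "prepend: %zu\n", insert_index( ready, 3, MAP( UINT64_C( 2 ) ) ) );

    /* Append bit set: behind its priority group (append). */
    printf(
      "append:  %zu\n",
      insert_index( ready, 3, APPEND( MAP( UINT64_C( 2 ) ) ) )
    );

    return 0;
  }

This prints index 1 for the prepend case and index 3 for the append
case, which reproduces the old LIFO/FIFO behaviour with one comparison
function.
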
> diff --git a/cpukit/score/src/schedulerprioritychangepriority.c
> b/cpukit/score/src/schedulerprioritychangepriority.c
> index eb640fe683..6af475a8d6 100644
> --- a/cpukit/score/src/schedulerprioritychangepriority.c
> +++ b/cpukit/score/src/schedulerprioritychangepriority.c
> @@ -29,8 +29,8 @@ void _Scheduler_priority_Update_priority(
> {
> Scheduler_priority_Context *context;
> Scheduler_priority_Node *the_node;
> - unsigned int priority;
> - bool prepend_it;
> + unsigned int new_priority;
> + unsigned int unmapped_priority;
>
> if ( !_Thread_Is_ready( the_thread ) ) {
> /* Nothing to do */
> @@ -38,10 +38,11 @@ void _Scheduler_priority_Update_priority(
> }
>
> the_node = _Scheduler_priority_Node_downcast( node );
> - priority = (unsigned int )
> - _Scheduler_Node_get_priority( &the_node->Base, &prepend_it );
> + new_priority = (unsigned int)
> + _Scheduler_Node_get_priority( &the_node->Base );
> + unmapped_priority = SCHEDULER_PRIORITY_UNMAP( new_priority );
>
> - if ( priority == the_node->Ready_queue.current_priority ) {
> + if ( unmapped_priority == the_node->Ready_queue.current_priority ) {
> /* Nothing to do */
> return;
> }
> @@ -56,19 +57,19 @@ void _Scheduler_priority_Update_priority(
>
> _Scheduler_priority_Ready_queue_update(
> &the_node->Ready_queue,
> - priority,
> + unmapped_priority,
> &context->Bit_map,
> &context->Ready[ 0 ]
> );
>
> - if ( prepend_it ) {
> - _Scheduler_priority_Ready_queue_enqueue_first(
> + if ( SCHEDULER_PRIORITY_IS_APPEND( new_priority ) ) {
> + _Scheduler_priority_Ready_queue_enqueue(
> &the_thread->Object.Node,
> &the_node->Ready_queue,
> &context->Bit_map
> );
> } else {
> - _Scheduler_priority_Ready_queue_enqueue(
> + _Scheduler_priority_Ready_queue_enqueue_first(
> &the_thread->Object.Node,
> &the_node->Ready_queue,
> &context->Bit_map
> diff --git a/cpukit/score/src/schedulerprioritysmp.c b/cpukit/score/src/
> schedulerprioritysmp.c
> index 071a4218f3..205d3257ca 100644
> --- a/cpukit/score/src/schedulerprioritysmp.c
> +++ b/cpukit/score/src/schedulerprioritysmp.c
> @@ -68,7 +68,7 @@ void _Scheduler_priority_SMP_Node_initialize(
> self = _Scheduler_priority_SMP_Get_self( context );
> _Scheduler_priority_Ready_queue_update(
> &the_node->Ready_queue,
> - priority,
> + SCHEDULER_PRIORITY_UNMAP( priority ),
> &self->Bit_map,
> &self->Ready[ 0 ]
> );
> @@ -109,103 +109,45 @@ void _Scheduler_priority_SMP_Block(
> );
> }
>
> -static bool _Scheduler_priority_SMP_Enqueue_ordered(
> - Scheduler_Context *context,
> - Scheduler_Node *node,
> - Chain_Node_order order,
> - Scheduler_SMP_Insert insert_ready,
> - Scheduler_SMP_Insert insert_scheduled
> +static bool _Scheduler_priority_SMP_Enqueue(
> + Scheduler_Context *context,
> + Scheduler_Node *node,
> + Priority_Control insert_priority
> )
> {
> - return _Scheduler_SMP_Enqueue_ordered(
> + return _Scheduler_SMP_Enqueue(
> context,
> node,
> - order,
> - insert_ready,
> - insert_scheduled,
> + insert_priority,
> + _Scheduler_SMP_Priority_less_equal,
> + _Scheduler_priority_SMP_Insert_ready,
> + _Scheduler_SMP_Insert_scheduled,
> _Scheduler_priority_SMP_Move_from_scheduled_to_ready,
> _Scheduler_SMP_Get_lowest_scheduled,
> _Scheduler_SMP_Allocate_processor_lazy
> );
> }
>
> -static bool _Scheduler_priority_SMP_Enqueue_lifo(
> - Scheduler_Context *context,
> - Scheduler_Node *node
> -)
> -{
> - return _Scheduler_priority_SMP_Enqueue_ordered(
> - context,
> - node,
> - _Scheduler_SMP_Insert_priority_lifo_order,
> - _Scheduler_priority_SMP_Insert_ready_lifo,
> - _Scheduler_SMP_Insert_scheduled_lifo
> - );
> -}
> -
> -static bool _Scheduler_priority_SMP_Enqueue_fifo(
> +static bool _Scheduler_priority_SMP_Enqueue_scheduled(
> Scheduler_Context *context,
> - Scheduler_Node *node
> + Scheduler_Node *node,
> + Priority_Control insert_priority
> )
> {
> - return _Scheduler_priority_SMP_Enqueue_ordered(
> + return _Scheduler_SMP_Enqueue_scheduled(
> context,
> node,
> - _Scheduler_SMP_Insert_priority_fifo_order,
> - _Scheduler_priority_SMP_Insert_ready_fifo,
> - _Scheduler_SMP_Insert_scheduled_fifo
> - );
> -}
> -
> -static bool _Scheduler_priority_SMP_Enqueue_scheduled_ordered(
> - Scheduler_Context *context,
> - Scheduler_Node *node,
> - Chain_Node_order order,
> - Scheduler_SMP_Insert insert_ready,
> - Scheduler_SMP_Insert insert_scheduled
> -)
> -{
> - return _Scheduler_SMP_Enqueue_scheduled_ordered(
> - context,
> - node,
> - order,
> + insert_priority,
> + _Scheduler_SMP_Priority_less_equal,
> _Scheduler_priority_SMP_Extract_from_ready,
> _Scheduler_priority_SMP_Get_highest_ready,
> - insert_ready,
> - insert_scheduled,
> + _Scheduler_priority_SMP_Insert_ready,
> + _Scheduler_SMP_Insert_scheduled,
> _Scheduler_priority_SMP_Move_from_ready_to_scheduled,
> _Scheduler_SMP_Allocate_processor_lazy
> );
> }
>
> -static bool _Scheduler_priority_SMP_Enqueue_scheduled_lifo(
> - Scheduler_Context *context,
> - Scheduler_Node *node
> -)
> -{
> - return _Scheduler_priority_SMP_Enqueue_scheduled_ordered(
> - context,
> - node,
> - _Scheduler_SMP_Insert_priority_lifo_order,
> - _Scheduler_priority_SMP_Insert_ready_lifo,
> - _Scheduler_SMP_Insert_scheduled_lifo
> - );
> -}
> -
> -static bool _Scheduler_priority_SMP_Enqueue_scheduled_fifo(
> - Scheduler_Context *context,
> - Scheduler_Node *node
> -)
> -{
> - return _Scheduler_priority_SMP_Enqueue_scheduled_ordered(
> - context,
> - node,
> - _Scheduler_SMP_Insert_priority_fifo_order,
> - _Scheduler_priority_SMP_Insert_ready_fifo,
> - _Scheduler_SMP_Insert_scheduled_fifo
> - );
> -}
> -
> void _Scheduler_priority_SMP_Unblock(
> const Scheduler_Control *scheduler,
> Thread_Control *thread,
> @@ -219,7 +161,7 @@ void _Scheduler_priority_SMP_Unblock(
> thread,
> node,
> _Scheduler_priority_SMP_Do_update,
> - _Scheduler_priority_SMP_Enqueue_fifo
> + _Scheduler_priority_SMP_Enqueue
> );
> }
>
> @@ -233,9 +175,9 @@ static bool _Scheduler_priority_SMP_Do_ask_for_help(
> context,
> the_thread,
> node,
> - _Scheduler_SMP_Insert_priority_lifo_order,
> - _Scheduler_priority_SMP_Insert_ready_lifo,
> - _Scheduler_SMP_Insert_scheduled_lifo,
> + _Scheduler_SMP_Priority_less_equal,
> + _Scheduler_priority_SMP_Insert_ready,
> + _Scheduler_SMP_Insert_scheduled,
> _Scheduler_priority_SMP_Move_from_scheduled_to_ready,
> _Scheduler_SMP_Get_lowest_scheduled,
> _Scheduler_SMP_Allocate_processor_lazy
> @@ -256,10 +198,8 @@ void _Scheduler_priority_SMP_Update_priority(
> node,
> _Scheduler_priority_SMP_Extract_from_ready,
> _Scheduler_priority_SMP_Do_update,
> - _Scheduler_priority_SMP_Enqueue_fifo,
> - _Scheduler_priority_SMP_Enqueue_lifo,
> - _Scheduler_priority_SMP_Enqueue_scheduled_fifo,
> - _Scheduler_priority_SMP_Enqueue_scheduled_lifo,
> + _Scheduler_priority_SMP_Enqueue,
> + _Scheduler_priority_SMP_Enqueue_scheduled,
> _Scheduler_priority_SMP_Do_ask_for_help
> );
> }
> @@ -323,7 +263,7 @@ void _Scheduler_priority_SMP_Add_processor(
> context,
> idle,
> _Scheduler_priority_SMP_Has_ready,
> - _Scheduler_priority_SMP_Enqueue_scheduled_fifo,
> + _Scheduler_priority_SMP_Enqueue_scheduled,
> _Scheduler_SMP_Do_nothing_register_idle
> );
> }
> @@ -339,7 +279,7 @@ Thread_Control *_Scheduler_priority_SMP_
> Remove_processor(
> context,
> cpu,
> _Scheduler_priority_SMP_Extract_from_ready,
> - _Scheduler_priority_SMP_Enqueue_fifo
> + _Scheduler_priority_SMP_Enqueue
> );
> }
>
> @@ -356,7 +296,7 @@ void _Scheduler_priority_SMP_Yield(
> thread,
> node,
> _Scheduler_priority_SMP_Extract_from_ready,
> - _Scheduler_priority_SMP_Enqueue_fifo,
> - _Scheduler_priority_SMP_Enqueue_scheduled_fifo
> + _Scheduler_priority_SMP_Enqueue,
> + _Scheduler_priority_SMP_Enqueue_scheduled
> );
> }
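
With the order folded into the insert priority, each scheduler
registers one Enqueue and one Enqueue_scheduled callback instead of the
former fifo/lifo pairs. The generic SMP layer presumably drives them
along these lines (sketch only; prepend_it, enqueue and needs_help are
placeholders, not names taken from the patch):

  Priority_Control insert_priority;

  insert_priority = _Scheduler_SMP_Node_priority( node );

  if ( !prepend_it ) {
    /* Default case: queue behind nodes of the same priority. */
    insert_priority = SCHEDULER_PRIORITY_APPEND( insert_priority );
  }

  needs_help = ( *enqueue )( context, node, insert_priority );
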
> diff --git a/cpukit/score/src/schedulerpriorityunblock.c
> b/cpukit/score/src/schedulerpriorityunblock.c
> index 42ba4de98f..784bc58611 100644
> --- a/cpukit/score/src/schedulerpriorityunblock.c
> +++ b/cpukit/score/src/schedulerpriorityunblock.c
> @@ -31,18 +31,17 @@ void _Scheduler_priority_Unblock (
> Scheduler_priority_Context *context;
> Scheduler_priority_Node *the_node;
> unsigned int priority;
> - bool prepend_it;
> + unsigned int unmapped_priority;
>
> context = _Scheduler_priority_Get_context( scheduler );
> the_node = _Scheduler_priority_Node_downcast( node );
> - priority = (unsigned int )
> - _Scheduler_Node_get_priority( &the_node->Base, &prepend_it );
> - (void) prepend_it;
> +  priority = (unsigned int ) _Scheduler_Node_get_priority( &the_node->Base );
> + unmapped_priority = SCHEDULER_PRIORITY_UNMAP( priority );
>
> - if ( priority != the_node->Ready_queue.current_priority ) {
> + if ( unmapped_priority != the_node->Ready_queue.current_priority ) {
> _Scheduler_priority_Ready_queue_update(
> &the_node->Ready_queue,
> - priority,
> + unmapped_priority,
> &context->Bit_map,
> &context->Ready[ 0 ]
> );
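
Ready_queue.current_priority still holds the plain value that selects
the bit map slot and the Ready[] chain, so the unblock path unmaps
before it compares. Assuming the encoding sketched earlier, the round
trip drops only the append flag:

  unsigned int plain  = 5;
  unsigned int mapped = SCHEDULER_PRIORITY_APPEND( SCHEDULER_PRIORITY_MAP( plain ) );

  /* SCHEDULER_PRIORITY_UNMAP( mapped ) == plain   */
  /* SCHEDULER_PRIORITY_IS_APPEND( mapped ) != 0   */
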
> diff --git a/cpukit/score/src/schedulersimplechangepriority.c
> b/cpukit/score/src/schedulersimplechangepriority.c
> index 8253a01421..c2c60a5f01 100644
> --- a/cpukit/score/src/schedulersimplechangepriority.c
> +++ b/cpukit/score/src/schedulersimplechangepriority.c
> @@ -28,7 +28,7 @@ void _Scheduler_simple_Update_priority(
> )
> {
> Scheduler_simple_Context *context;
> - bool prepend_it;
> + unsigned int new_priority;
>
> if ( !_Thread_Is_ready( the_thread ) ) {
> /* Nothing to do */
> @@ -36,15 +36,9 @@ void _Scheduler_simple_Update_priority(
> }
>
> context = _Scheduler_simple_Get_context( scheduler );
> - _Scheduler_Node_get_priority( node, &prepend_it );
> + new_priority = (unsigned int ) _Scheduler_Node_get_priority( node );
>
> _Scheduler_simple_Extract( scheduler, the_thread, node );
> -
> - if ( prepend_it ) {
> -    _Scheduler_simple_Insert_priority_lifo( &context->Ready, the_thread );
> - } else {
> -    _Scheduler_simple_Insert_priority_fifo( &context->Ready, the_thread );
> - }
> -
> + _Scheduler_simple_Insert( &context->Ready, the_thread, new_priority );
> _Scheduler_simple_Schedule_body( scheduler, the_thread, false );
> }
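
The simple scheduler now funnels both cases through
_Scheduler_simple_Insert(). Its definition is not part of the quoted
hunks; presumably it is a small inline helper along these lines (the
order function name is my assumption):

  static inline void _Scheduler_simple_Insert(
    Chain_Control    *chain,
    Thread_Control   *the_thread,
    Priority_Control  insert_priority
  )
  {
    /* Ordered insert keyed by the mapped priority; the append bit in
     * insert_priority decides between in front of and behind threads of
     * equal priority, just as in the SMP schedulers. */
    _Chain_Insert_ordered_unprotected(
      chain,
      &the_thread->Object.Node,
      &insert_priority,
      _Scheduler_simple_Priority_less_equal
    );
  }
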
> diff --git a/cpukit/score/src/schedulersimplesmp.c b/cpukit/score/src/
> schedulersimplesmp.c
> index df08a19eab..4ab4987c3a 100644
> --- a/cpukit/score/src/schedulersimplesmp.c
> +++ b/cpukit/score/src/schedulersimplesmp.c
> @@ -99,17 +99,17 @@ static void _Scheduler_simple_SMP_Move_
> from_scheduled_to_ready(
> )
> {
> Scheduler_simple_SMP_Context *self;
> - Priority_Control priority_to_insert;
> + Priority_Control insert_priority;
>
> self = _Scheduler_simple_SMP_Get_self( context );
> - priority_to_insert = _Scheduler_SMP_Node_priority( scheduled_to_ready );
>
> _Chain_Extract_unprotected( &scheduled_to_ready->Node.Chain );
> + insert_priority = _Scheduler_SMP_Node_priority( scheduled_to_ready );
> _Chain_Insert_ordered_unprotected(
> &self->Ready,
> &scheduled_to_ready->Node.Chain,
> - &priority_to_insert,
> - _Scheduler_SMP_Insert_priority_lifo_order
> + &insert_priority,
> + _Scheduler_SMP_Priority_less_equal
> );
> }
>
> @@ -119,55 +119,36 @@ static void _Scheduler_simple_SMP_Move_
> from_ready_to_scheduled(
> )
> {
> Scheduler_simple_SMP_Context *self;
> - Priority_Control priority_to_insert;
> + Priority_Control insert_priority;
>
> self = _Scheduler_simple_SMP_Get_self( context );
> - priority_to_insert = _Scheduler_SMP_Node_priority( ready_to_scheduled );
>
> _Chain_Extract_unprotected( &ready_to_scheduled->Node.Chain );
> + insert_priority = _Scheduler_SMP_Node_priority( ready_to_scheduled );
> + insert_priority = SCHEDULER_PRIORITY_APPEND( insert_priority );
> _Chain_Insert_ordered_unprotected(
> &self->Base.Scheduled,
> &ready_to_scheduled->Node.Chain,
> - &priority_to_insert,
> - _Scheduler_SMP_Insert_priority_fifo_order
> + &insert_priority,
> + _Scheduler_SMP_Priority_less_equal
> );
> }
>
> -static void _Scheduler_simple_SMP_Insert_ready_lifo(
> +static void _Scheduler_simple_SMP_Insert_ready(
> Scheduler_Context *context,
> - Scheduler_Node *node_to_insert
> + Scheduler_Node *node_to_insert,
> + Priority_Control insert_priority
> )
> {
> Scheduler_simple_SMP_Context *self;
> - Priority_Control priority_to_insert;
>
> self = _Scheduler_simple_SMP_Get_self( context );
> - priority_to_insert = _Scheduler_SMP_Node_priority( node_to_insert );
>
> _Chain_Insert_ordered_unprotected(
> &self->Ready,
> &node_to_insert->Node.Chain,
> - &priority_to_insert,
> - _Scheduler_SMP_Insert_priority_lifo_order
> - );
> -}
> -
> -static void _Scheduler_simple_SMP_Insert_ready_fifo(
> - Scheduler_Context *context,
> - Scheduler_Node *node_to_insert
> -)
> -{
> - Scheduler_simple_SMP_Context *self;
> - Priority_Control priority_to_insert;
> -
> - self = _Scheduler_simple_SMP_Get_self( context );
> - priority_to_insert = _Scheduler_SMP_Node_priority( node_to_insert );
> -
> - _Chain_Insert_ordered_unprotected(
> - &self->Ready,
> - &node_to_insert->Node.Chain,
> - &priority_to_insert,
> - _Scheduler_SMP_Insert_priority_fifo_order
> + &insert_priority,
> + _Scheduler_SMP_Priority_less_equal
> );
> }
>
> @@ -200,103 +181,45 @@ void _Scheduler_simple_SMP_Block(
> );
> }
>
> -static bool _Scheduler_simple_SMP_Enqueue_ordered(
> - Scheduler_Context *context,
> - Scheduler_Node *node,
> - Chain_Node_order order,
> - Scheduler_SMP_Insert insert_ready,
> - Scheduler_SMP_Insert insert_scheduled
> +static bool _Scheduler_simple_SMP_Enqueue(
> + Scheduler_Context *context,
> + Scheduler_Node *node,
> + Priority_Control insert_priority
> )
> {
> - return _Scheduler_SMP_Enqueue_ordered(
> + return _Scheduler_SMP_Enqueue(
> context,
> node,
> - order,
> - insert_ready,
> - insert_scheduled,
> + insert_priority,
> + _Scheduler_SMP_Priority_less_equal,
> + _Scheduler_simple_SMP_Insert_ready,
> + _Scheduler_SMP_Insert_scheduled,
> _Scheduler_simple_SMP_Move_from_scheduled_to_ready,
> _Scheduler_SMP_Get_lowest_scheduled,
> _Scheduler_SMP_Allocate_processor_lazy
> );
> }
>
> -static bool _Scheduler_simple_SMP_Enqueue_lifo(
> - Scheduler_Context *context,
> - Scheduler_Node *node
> -)
> -{
> - return _Scheduler_simple_SMP_Enqueue_ordered(
> - context,
> - node,
> - _Scheduler_SMP_Insert_priority_lifo_order,
> - _Scheduler_simple_SMP_Insert_ready_lifo,
> - _Scheduler_SMP_Insert_scheduled_lifo
> - );
> -}
> -
> -static bool _Scheduler_simple_SMP_Enqueue_fifo(
> - Scheduler_Context *context,
> - Scheduler_Node *node
> -)
> -{
> - return _Scheduler_simple_SMP_Enqueue_ordered(
> - context,
> - node,
> - _Scheduler_SMP_Insert_priority_fifo_order,
> - _Scheduler_simple_SMP_Insert_ready_fifo,
> - _Scheduler_SMP_Insert_scheduled_fifo
> - );
> -}
> -
> -static bool _Scheduler_simple_SMP_Enqueue_scheduled_ordered(
> +static bool _Scheduler_simple_SMP_Enqueue_scheduled(
> Scheduler_Context *context,
> - Scheduler_Node *node,
> - Chain_Node_order order,
> - Scheduler_SMP_Insert insert_ready,
> - Scheduler_SMP_Insert insert_scheduled
> + Scheduler_Node *node,
> + Priority_Control insert_priority
> )
> {
> - return _Scheduler_SMP_Enqueue_scheduled_ordered(
> + return _Scheduler_SMP_Enqueue_scheduled(
> context,
> node,
> - order,
> + insert_priority,
> + _Scheduler_SMP_Priority_less_equal,
> _Scheduler_simple_SMP_Extract_from_ready,
> _Scheduler_simple_SMP_Get_highest_ready,
> - insert_ready,
> - insert_scheduled,
> + _Scheduler_simple_SMP_Insert_ready,
> + _Scheduler_SMP_Insert_scheduled,
> _Scheduler_simple_SMP_Move_from_ready_to_scheduled,
> _Scheduler_SMP_Allocate_processor_lazy
> );
> }
>
> -static bool _Scheduler_simple_SMP_Enqueue_scheduled_lifo(
> - Scheduler_Context *context,
> - Scheduler_Node *node
> -)
> -{
> - return _Scheduler_simple_SMP_Enqueue_scheduled_ordered(
> - context,
> - node,
> - _Scheduler_SMP_Insert_priority_lifo_order,
> - _Scheduler_simple_SMP_Insert_ready_lifo,
> - _Scheduler_SMP_Insert_scheduled_lifo
> - );
> -}
> -
> -static bool _Scheduler_simple_SMP_Enqueue_scheduled_fifo(
> - Scheduler_Context *context,
> - Scheduler_Node *node
> -)
> -{
> - return _Scheduler_simple_SMP_Enqueue_scheduled_ordered(
> - context,
> - node,
> - _Scheduler_SMP_Insert_priority_fifo_order,
> - _Scheduler_simple_SMP_Insert_ready_fifo,
> - _Scheduler_SMP_Insert_scheduled_fifo
> - );
> -}
> -
> void _Scheduler_simple_SMP_Unblock(
> const Scheduler_Control *scheduler,
> Thread_Control *thread,
> @@ -310,7 +233,7 @@ void _Scheduler_simple_SMP_Unblock(
> thread,
> node,
> _Scheduler_simple_SMP_Do_update,
> - _Scheduler_simple_SMP_Enqueue_fifo
> + _Scheduler_simple_SMP_Enqueue
> );
> }
>
> @@ -324,9 +247,9 @@ static bool _Scheduler_simple_SMP_Do_ask_for_help(
> context,
> the_thread,
> node,
> - _Scheduler_SMP_Insert_priority_lifo_order,
> - _Scheduler_simple_SMP_Insert_ready_lifo,
> - _Scheduler_SMP_Insert_scheduled_lifo,
> + _Scheduler_SMP_Priority_less_equal,
> + _Scheduler_simple_SMP_Insert_ready,
> + _Scheduler_SMP_Insert_scheduled,
> _Scheduler_simple_SMP_Move_from_scheduled_to_ready,
> _Scheduler_SMP_Get_lowest_scheduled,
> _Scheduler_SMP_Allocate_processor_lazy
> @@ -347,10 +270,8 @@ void _Scheduler_simple_SMP_Update_priority(
> node,
> _Scheduler_simple_SMP_Extract_from_ready,
> _Scheduler_simple_SMP_Do_update,
> - _Scheduler_simple_SMP_Enqueue_fifo,
> - _Scheduler_simple_SMP_Enqueue_lifo,
> - _Scheduler_simple_SMP_Enqueue_scheduled_fifo,
> - _Scheduler_simple_SMP_Enqueue_scheduled_lifo,
> + _Scheduler_simple_SMP_Enqueue,
> + _Scheduler_simple_SMP_Enqueue_scheduled,
> _Scheduler_simple_SMP_Do_ask_for_help
> );
> }
> @@ -414,7 +335,7 @@ void _Scheduler_simple_SMP_Add_processor(
> context,
> idle,
> _Scheduler_simple_SMP_Has_ready,
> - _Scheduler_simple_SMP_Enqueue_scheduled_fifo,
> + _Scheduler_simple_SMP_Enqueue_scheduled,
> _Scheduler_SMP_Do_nothing_register_idle
> );
> }
> @@ -430,7 +351,7 @@ Thread_Control *_Scheduler_simple_SMP_Remove_
> processor(
> context,
> cpu,
> _Scheduler_simple_SMP_Extract_from_ready,
> - _Scheduler_simple_SMP_Enqueue_fifo
> + _Scheduler_simple_SMP_Enqueue
> );
> }
>
> @@ -447,7 +368,7 @@ void _Scheduler_simple_SMP_Yield(
> thread,
> node,
> _Scheduler_simple_SMP_Extract_from_ready,
> - _Scheduler_simple_SMP_Enqueue_fifo,
> - _Scheduler_simple_SMP_Enqueue_scheduled_fifo
> + _Scheduler_simple_SMP_Enqueue,
> + _Scheduler_simple_SMP_Enqueue_scheduled
> );
> }
> diff --git a/cpukit/score/src/schedulersimpleunblock.c b/cpukit/score/src/
> schedulersimpleunblock.c
> index 5540e20e87..2f5c8636f5 100644
> --- a/cpukit/score/src/schedulersimpleunblock.c
> +++ b/cpukit/score/src/schedulersimpleunblock.c
> @@ -28,13 +28,15 @@ void _Scheduler_simple_Unblock(
> )
> {
> Scheduler_simple_Context *context;
> - Priority_Control priority;
> + unsigned int priority;
> + unsigned int insert_priority;
>
> (void) node;
>
> context = _Scheduler_simple_Get_context( scheduler );
> - _Scheduler_simple_Insert_priority_fifo( &context->Ready, the_thread );
> priority = _Thread_Get_priority( the_thread );
> + insert_priority = SCHEDULER_PRIORITY_APPEND( priority );
> +  _Scheduler_simple_Insert( &context->Ready, the_thread, insert_priority );
>
> /*
> * If the thread that was unblocked is more important than the heir,
> diff --git a/cpukit/score/src/schedulersimpleyield.c b/cpukit/score/src/
> schedulersimpleyield.c
> index 0c150d8b1f..95f9cd3540 100644
> --- a/cpukit/score/src/schedulersimpleyield.c
> +++ b/cpukit/score/src/schedulersimpleyield.c
> @@ -26,12 +26,16 @@ void _Scheduler_simple_Yield(
> Scheduler_Node *node
> )
> {
> - Scheduler_simple_Context *context =
> - _Scheduler_simple_Get_context( scheduler );
> + Scheduler_simple_Context *context;
> + unsigned int insert_priority;
> +
> + context = _Scheduler_simple_Get_context( scheduler );
>
> (void) node;
>
> _Chain_Extract_unprotected( &the_thread->Object.Node );
> - _Scheduler_simple_Insert_priority_fifo( &context->Ready, the_thread );
> + insert_priority = (unsigned int) _Thread_Get_priority( the_thread );
> + insert_priority = SCHEDULER_PRIORITY_APPEND( insert_priority );
> +  _Scheduler_simple_Insert( &context->Ready, the_thread, insert_priority );
> _Scheduler_simple_Schedule_body( scheduler, the_thread, false );
> }
> diff --git a/cpukit/score/src/schedulerstrongapa.c b/cpukit/score/src/
> schedulerstrongapa.c
> index 57ffb61367..19d4ebe348 100644
> --- a/cpukit/score/src/schedulerstrongapa.c
> +++ b/cpukit/score/src/schedulerstrongapa.c
> @@ -66,7 +66,7 @@ static void _Scheduler_strong_APA_Move_
> from_ready_to_scheduled(
> {
> Scheduler_strong_APA_Context *self;
> Scheduler_strong_APA_Node *node;
> - Priority_Control priority;
> + Priority_Control insert_priority;
>
> self = _Scheduler_strong_APA_Get_self( context );
> node = _Scheduler_strong_APA_Node_downcast( ready_to_scheduled );
> @@ -76,47 +76,41 @@ static void _Scheduler_strong_APA_Move_
> from_ready_to_scheduled(
> &node->Ready_queue,
> &self->Bit_map
> );
> - priority = node->Base.priority;
> + insert_priority = _Scheduler_SMP_Node_priority( &node->Base.Base );
> + insert_priority = SCHEDULER_PRIORITY_APPEND( insert_priority );
> _Chain_Insert_ordered_unprotected(
> &self->Base.Scheduled,
> &node->Base.Base.Node.Chain,
> - &priority,
> - _Scheduler_SMP_Insert_priority_fifo_order
> + &insert_priority,
> + _Scheduler_SMP_Priority_less_equal
> );
> }
>
> -static void _Scheduler_strong_APA_Insert_ready_lifo(
> +static void _Scheduler_strong_APA_Insert_ready(
> Scheduler_Context *context,
> - Scheduler_Node *the_thread
> -)
> -{
> - Scheduler_strong_APA_Context *self =
> - _Scheduler_strong_APA_Get_self( context );
> - Scheduler_strong_APA_Node *node =
> - _Scheduler_strong_APA_Node_downcast( the_thread );
> -
> - _Scheduler_priority_Ready_queue_enqueue(
> - &node->Base.Base.Node.Chain,
> - &node->Ready_queue,
> - &self->Bit_map
> - );
> -}
> -
> -static void _Scheduler_strong_APA_Insert_ready_fifo(
> - Scheduler_Context *context,
> - Scheduler_Node *the_thread
> + Scheduler_Node *node_base,
> + Priority_Control insert_priority
> )
> {
> - Scheduler_strong_APA_Context *self =
> - _Scheduler_strong_APA_Get_self( context );
> - Scheduler_strong_APA_Node *node =
> - _Scheduler_strong_APA_Node_downcast( the_thread );
> + Scheduler_strong_APA_Context *self;
> + Scheduler_strong_APA_Node *node;
>
> - _Scheduler_priority_Ready_queue_enqueue_first(
> - &node->Base.Base.Node.Chain,
> - &node->Ready_queue,
> - &self->Bit_map
> - );
> + self = _Scheduler_strong_APA_Get_self( context );
> + node = _Scheduler_strong_APA_Node_downcast( node_base );
> +
> + if ( SCHEDULER_PRIORITY_IS_APPEND( insert_priority ) ) {
> + _Scheduler_priority_Ready_queue_enqueue(
> + &node->Base.Base.Node.Chain,
> + &node->Ready_queue,
> + &self->Bit_map
> + );
> + } else {
> + _Scheduler_priority_Ready_queue_enqueue_first(
> + &node->Base.Base.Node.Chain,
> + &node->Ready_queue,
> + &self->Bit_map
> + );
> + }
> }
>
> static void _Scheduler_strong_APA_Extract_from_ready(
> @@ -150,7 +144,7 @@ static void _Scheduler_strong_APA_Do_update(
> _Scheduler_SMP_Node_update_priority( &node->Base, new_priority );
> _Scheduler_priority_Ready_queue_update(
> &node->Ready_queue,
> - new_priority,
> + SCHEDULER_PRIORITY_UNMAP( new_priority ),
> &self->Bit_map,
> &self->Ready[ 0 ]
> );
> @@ -198,7 +192,7 @@ void _Scheduler_strong_APA_Node_initialize(
> self = _Scheduler_strong_APA_Get_self( context );
> _Scheduler_priority_Ready_queue_update(
> &the_node->Ready_queue,
> - priority,
> + SCHEDULER_PRIORITY_UNMAP( priority ),
> &self->Bit_map,
> &self->Ready[ 0 ]
> );
> @@ -247,103 +241,45 @@ void _Scheduler_strong_APA_Block(
> );
> }
>
> -static bool _Scheduler_strong_APA_Enqueue_ordered(
> - Scheduler_Context *context,
> - Scheduler_Node *node,
> - Chain_Node_order order,
> - Scheduler_SMP_Insert insert_ready,
> - Scheduler_SMP_Insert insert_scheduled
> +static bool _Scheduler_strong_APA_Enqueue(
> + Scheduler_Context *context,
> + Scheduler_Node *node,
> + Priority_Control insert_priority
> )
> {
> - return _Scheduler_SMP_Enqueue_ordered(
> + return _Scheduler_SMP_Enqueue(
> context,
> node,
> - order,
> - insert_ready,
> - insert_scheduled,
> + insert_priority,
> + _Scheduler_SMP_Priority_less_equal,
> + _Scheduler_strong_APA_Insert_ready,
> + _Scheduler_SMP_Insert_scheduled,
> _Scheduler_strong_APA_Move_from_scheduled_to_ready,
> _Scheduler_SMP_Get_lowest_scheduled,
> _Scheduler_SMP_Allocate_processor_exact
> );
> }
>
> -static bool _Scheduler_strong_APA_Enqueue_lifo(
> - Scheduler_Context *context,
> - Scheduler_Node *node
> -)
> -{
> - return _Scheduler_strong_APA_Enqueue_ordered(
> - context,
> - node,
> - _Scheduler_SMP_Insert_priority_lifo_order,
> - _Scheduler_strong_APA_Insert_ready_lifo,
> - _Scheduler_SMP_Insert_scheduled_lifo
> - );
> -}
> -
> -static bool _Scheduler_strong_APA_Enqueue_fifo(
> +static bool _Scheduler_strong_APA_Enqueue_scheduled(
> Scheduler_Context *context,
> - Scheduler_Node *node
> -)
> -{
> - return _Scheduler_strong_APA_Enqueue_ordered(
> - context,
> - node,
> - _Scheduler_SMP_Insert_priority_fifo_order,
> - _Scheduler_strong_APA_Insert_ready_fifo,
> - _Scheduler_SMP_Insert_scheduled_fifo
> - );
> -}
> -
> -static bool _Scheduler_strong_APA_Enqueue_scheduled_ordered(
> - Scheduler_Context *context,
> - Scheduler_Node *node,
> - Chain_Node_order order,
> - Scheduler_SMP_Insert insert_ready,
> - Scheduler_SMP_Insert insert_scheduled
> + Scheduler_Node *node,
> + Priority_Control insert_priority
> )
> {
> - return _Scheduler_SMP_Enqueue_scheduled_ordered(
> + return _Scheduler_SMP_Enqueue_scheduled(
> context,
> node,
> - order,
> + insert_priority,
> + _Scheduler_SMP_Priority_less_equal,
> _Scheduler_strong_APA_Extract_from_ready,
> _Scheduler_strong_APA_Get_highest_ready,
> - insert_ready,
> - insert_scheduled,
> + _Scheduler_strong_APA_Insert_ready,
> + _Scheduler_SMP_Insert_scheduled,
> _Scheduler_strong_APA_Move_from_ready_to_scheduled,
> _Scheduler_SMP_Allocate_processor_exact
> );
> }
>
> -static bool _Scheduler_strong_APA_Enqueue_scheduled_lifo(
> - Scheduler_Context *context,
> - Scheduler_Node *node
> -)
> -{
> - return _Scheduler_strong_APA_Enqueue_scheduled_ordered(
> - context,
> - node,
> - _Scheduler_SMP_Insert_priority_lifo_order,
> - _Scheduler_strong_APA_Insert_ready_lifo,
> - _Scheduler_SMP_Insert_scheduled_lifo
> - );
> -}
> -
> -static bool _Scheduler_strong_APA_Enqueue_scheduled_fifo(
> - Scheduler_Context *context,
> - Scheduler_Node *node
> -)
> -{
> - return _Scheduler_strong_APA_Enqueue_scheduled_ordered(
> - context,
> - node,
> - _Scheduler_SMP_Insert_priority_fifo_order,
> - _Scheduler_strong_APA_Insert_ready_fifo,
> - _Scheduler_SMP_Insert_scheduled_fifo
> - );
> -}
> -
> void _Scheduler_strong_APA_Unblock(
> const Scheduler_Control *scheduler,
> Thread_Control *the_thread,
> @@ -357,7 +293,7 @@ void _Scheduler_strong_APA_Unblock(
> the_thread,
> node,
> _Scheduler_strong_APA_Do_update,
> - _Scheduler_strong_APA_Enqueue_fifo
> + _Scheduler_strong_APA_Enqueue
> );
> }
>
> @@ -371,9 +307,9 @@ static bool _Scheduler_strong_APA_Do_ask_for_help(
> context,
> the_thread,
> node,
> - _Scheduler_SMP_Insert_priority_lifo_order,
> - _Scheduler_strong_APA_Insert_ready_lifo,
> - _Scheduler_SMP_Insert_scheduled_lifo,
> + _Scheduler_SMP_Priority_less_equal,
> + _Scheduler_strong_APA_Insert_ready,
> + _Scheduler_SMP_Insert_scheduled,
> _Scheduler_strong_APA_Move_from_scheduled_to_ready,
> _Scheduler_SMP_Get_lowest_scheduled,
> _Scheduler_SMP_Allocate_processor_lazy
> @@ -394,10 +330,8 @@ void _Scheduler_strong_APA_Update_priority(
> node,
> _Scheduler_strong_APA_Extract_from_ready,
> _Scheduler_strong_APA_Do_update,
> - _Scheduler_strong_APA_Enqueue_fifo,
> - _Scheduler_strong_APA_Enqueue_lifo,
> - _Scheduler_strong_APA_Enqueue_scheduled_fifo,
> - _Scheduler_strong_APA_Enqueue_scheduled_lifo,
> + _Scheduler_strong_APA_Enqueue,
> + _Scheduler_strong_APA_Enqueue_scheduled,
> _Scheduler_strong_APA_Do_ask_for_help
> );
> }
> @@ -461,7 +395,7 @@ void _Scheduler_strong_APA_Add_processor(
> context,
> idle,
> _Scheduler_strong_APA_Has_ready,
> - _Scheduler_strong_APA_Enqueue_scheduled_fifo,
> + _Scheduler_strong_APA_Enqueue_scheduled,
> _Scheduler_SMP_Do_nothing_register_idle
> );
> }
> @@ -477,7 +411,7 @@ Thread_Control *_Scheduler_strong_APA_Remove_
> processor(
> context,
> cpu,
> _Scheduler_strong_APA_Extract_from_ready,
> - _Scheduler_strong_APA_Enqueue_fifo
> + _Scheduler_strong_APA_Enqueue
> );
> }
>
> @@ -494,7 +428,7 @@ void _Scheduler_strong_APA_Yield(
> the_thread,
> node,
> _Scheduler_strong_APA_Extract_from_ready,
> - _Scheduler_strong_APA_Enqueue_fifo,
> - _Scheduler_strong_APA_Enqueue_scheduled_fifo
> + _Scheduler_strong_APA_Enqueue,
> + _Scheduler_strong_APA_Enqueue_scheduled
> );
> }
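
The strong APA scheduler keeps a bit map with one FIFO chain per
priority rather than a single ordered chain, so the append bit is
translated into tail versus head insertion on the per-priority chain.
Reduced to its core, the idea is (hypothetical helper, not code from
the patch; the chain argument stands for the per-priority FIFO):

  static void insert_into_priority_fifo(
    Chain_Control    *fifo_of_this_priority,
    Chain_Node       *node,
    Priority_Control  insert_priority
  )
  {
    if ( SCHEDULER_PRIORITY_IS_APPEND( insert_priority ) ) {
      /* Behind its equal-priority peers. */
      _Chain_Append_unprotected( fifo_of_this_priority, node );
    } else {
      /* In front of its equal-priority peers. */
      _Chain_Prepend_unprotected( fifo_of_this_priority, node );
    }
  }
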
> diff --git a/testsuites/smptests/smpscheduler01/smpscheduler01.doc
> b/testsuites/smptests/smpscheduler01/smpscheduler01.doc
> index def7dac128..a881fc0fc4 100644
> --- a/testsuites/smptests/smpscheduler01/smpscheduler01.doc
> +++ b/testsuites/smptests/smpscheduler01/smpscheduler01.doc
> @@ -4,7 +4,7 @@ test set name: smpscheduler01
>
> directives:
>
> - - _Scheduler_SMP_Enqueue_ordered()
> + - _Scheduler_SMP_Enqueue()
> - _Scheduler_SMP_Block()
>
> concepts:
> diff --git a/testsuites/sptests/spintrcritical23/init.c
> b/testsuites/sptests/spintrcritical23/init.c
> index c0a159471c..02c8a7ef37 100644
> --- a/testsuites/sptests/spintrcritical23/init.c
> +++ b/testsuites/sptests/spintrcritical23/init.c
> @@ -1,5 +1,5 @@
> /*
> - * Copyright (c) 2015, 2016 embedded brains GmbH. All rights reserved.
> + * Copyright (c) 2015, 2017 embedded brains GmbH. All rights reserved.
> *
> * embedded brains GmbH
> * Dornierstr. 4
> @@ -43,12 +43,15 @@ static void change_priority(rtems_id timer, void *arg)
> /* The arg is NULL */
> test_context *ctx = &ctx_instance;
> rtems_interrupt_lock_context lock_context;
> + unsigned int next_priority;
>
> rtems_interrupt_lock_acquire(&ctx->lock, &lock_context);
> - if (
> - ctx->scheduler_node->Ready_queue.current_priority
> - != ctx->scheduler_node->Base.Priority.value
> - ) {
> +
> + next_priority = SCHEDULER_PRIORITY_UNMAP(
> + (unsigned int) ctx->scheduler_node->Base.Priority.value
> + );
> +
> +  if ( ctx->scheduler_node->Ready_queue.current_priority != next_priority ) {
> rtems_task_priority priority_interrupt;
> rtems_task_priority priority_task;
> rtems_task_priority previous;
> @@ -84,11 +87,13 @@ static bool test_body(void *arg)
> rtems_task_priority previous;
>
> rtems_interrupt_lock_acquire(&ctx->lock, &lock_context);
> +
> priority_last = ctx->priority_task;
> priority_task = 1 + (priority_last + 1) % 3;
> priority_interrupt = 1 + (priority_task + 1) % 3;
> ctx->priority_task = priority_task;
> ctx->priority_interrupt = priority_interrupt;
> +
> rtems_interrupt_lock_release(&ctx->lock, &lock_context);
>
> sc = rtems_task_set_priority(
> --
> 2.12.3
>
>