Fwd: [PATCH 1/2] score: Add and use Watchdog_Clock for clock types.

Sebastian Huber sebastian.huber at embedded-brains.de
Wed Jun 22 13:43:44 UTC 2016



On 21/06/16 16:22, Gedare Bloom wrote:
> On Tue, Jun 21, 2016 at 1:48 AM, Sebastian Huber
> <sebastian.huber at embedded-brains.de>  wrote:
>>
>> On 16/06/16 08:09, Sebastian Huber wrote:
>>>>
>>>> diff --git a/cpukit/score/include/rtems/score/threadqimpl.h b/cpukit/score/include/rtems/score/threadqimpl.h
>>>> index 73d4de2..f59a334 100644
>>>> --- a/cpukit/score/include/rtems/score/threadqimpl.h
>>>> +++ b/cpukit/score/include/rtems/score/threadqimpl.h
>>>> @@ -349,6 +349,7 @@ Thread_Control *_Thread_queue_Do_dequeue(
>>>>    *       executing,
>>>>    *       STATES_WAITING_FOR_MUTEX,
>>>>    *       WATCHDOG_NO_TIMEOUT,
>>>> + *       WATCHDOG_RELATIVE,
>>>>    *       0,
>>>>    *       &queue_context
>>>>    *     );
>>>> @@ -362,6 +363,7 @@ Thread_Control *_Thread_queue_Do_dequeue(
>>>>    * @param[in] state The new state of the thread.
>>>>    * @param[in] timeout Interval to wait.  Use WATCHDOG_NO_TIMEOUT to block
>>>>    * potentially forever.
>>>> + * @param[in] clock The kind of clock used for the interval.
>>>>    * @param[in] queue_context The thread queue context of the lock acquire.
>>>>    */
>>>>   void _Thread_queue_Enqueue_critical(
>>>> @@ -370,6 +372,7 @@ void _Thread_queue_Enqueue_critical(
>>>>     Thread_Control                *the_thread,
>>>>     States_Control                 state,
>>>>     Watchdog_Interval              timeout,
>>>> +  Watchdog_Clock                 clock,
>>>>     Thread_queue_Context          *queue_context
>>>>   );
>>>
>>> We already have a lot of parameters for _Thread_queue_Enqueue_critical().
>>> I would rather move the timeout parameters to the queue_context, since
>>> this is more efficient.
>>>
>>> The Watchdog_Interval is 32-bit, thus not suitable for absolute timeouts.
>>>
>>> A value of 0 is valid for absolute timeouts; see what happens if you get
>>> that wrong:
>>>
>>> https://www.wired.com/2016/02/dont-set-your-iphone-back-to-1970-no-matter-what/
>>>
>>> So, I would move the WATCHDOG_NO_TIMEOUT to your new Watchdog_Clock.
>>>
>>> [...]
>>
>> From IRC log:
>>
>> [19:44] <gedare> sebhub: Does it make sense to you to redefine
>> Watchdog_Interval as a struct containing a union for the interval and an
>> enum for the Clock "Discipline" (relative/absolute)?
>> [19:45] <gedare> ^the union for uint32_t relative, uint64_t absolute--or
>> else we make all intervals 64-bit
>>
>> No, I would not change Watchdog_Interval. We should keep the tick-based
>> services as they are. The problem is that _Thread_queue_Enqueue_critical()
>> doesn't currently support absolute timeouts. I would do something like the
>> attached.
>>
>> We have
>>
>> void _Thread_queue_Enqueue_critical(
>>   Thread_queue_Queue            *queue,
>>   const Thread_queue_Operations *operations,
>>   Thread_Control                *the_thread,
>>   States_Control                 state,
>>   Thread_queue_Context          *queue_context
>> )
>> {
>>   Per_CPU_Control *cpu_self;
>>   bool             success;
>>
>> #if defined(RTEMS_MULTIPROCESSING)
>>   if ( _Thread_MP_Is_receive( the_thread ) && the_thread->receive_packet ) {
>>     the_thread = _Thread_MP_Allocate_proxy( state );
>>   }
>> #endif
>>
>>   _Thread_Lock_set( the_thread, &queue->Lock );
>>
>>   the_thread->Wait.return_code = STATUS_SUCCESSFUL;
>>   _Thread_Wait_set_queue( the_thread, queue );
>>   _Thread_Wait_set_operations( the_thread, operations );
>>
>>   ( *operations->enqueue )( queue, the_thread );
>>
>> Thus, we would have to store the timeout-related parameters on the stack
>> anyway, since ( *operations->enqueue )( queue, the_thread ) is a function
>> call which cannot be optimized away (except maybe through link-time
>> optimization). So, there is no overhead due to the use of the queue context.
>>
> Thanks, that makes sense. I mocked up the changes to put the
> timeout parameters into the context. I will work on adding the little
> bit needed to use 64-bit intervals for absolute timeouts.

Maybe we need a new typedef for the 64-bit intervals.
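A minimal sketch of such a typedef, next to the existing 32-bit tick type; the name Watchdog_Interval_64 and the helper are assumptions for illustration, not names from the patch:

```c
#include <stdint.h>

/* Existing 32-bit tick count, as in the current score headers. */
typedef uint32_t Watchdog_Interval;

/* Hypothetical distinct 64-bit type for absolute expiration times. */
typedef uint64_t Watchdog_Interval_64;

/* Returns nonzero if an absolute time still fits the 32-bit tick type. */
static int fits_in_ticks( Watchdog_Interval_64 t )
{
  return t <= (Watchdog_Interval_64) UINT32_MAX;
}
```

A separate type keeps the tick-based services unchanged while making accidental truncation to 32 bits visible at the call sites.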

>
> Do you have a preference between "Watchdog_Discipline" and
> "Watchdog_Clock"? I think Discipline may be nicer.

Watchdog_Discipline sounds better.
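A hedged sketch of how the discipline could absorb WATCHDOG_NO_TIMEOUT, so an absolute time of 0 stays a valid timeout; enumerator ordering and the helper name are assumptions:

```c
/*
 * Sketch of the proposed Watchdog_Discipline enumeration.  Moving
 * WATCHDOG_NO_TIMEOUT out of the interval value space means an absolute
 * expiration of 0 is no longer ambiguous.
 */
typedef enum {
  WATCHDOG_NO_TIMEOUT, /* block potentially forever */
  WATCHDOG_RELATIVE,   /* interval is a relative tick count */
  WATCHDOG_ABSOLUTE    /* interval is a 64-bit absolute time */
} Watchdog_Discipline;

/* With the discipline separated, an interval of 0 carries meaning. */
static int is_blocking_forever( Watchdog_Discipline d )
{
  return d == WATCHDOG_NO_TIMEOUT;
}
```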


>
> I did not initialize the timeout parameters to a default; instead I
> made each producer of a threadq enqueue explicitly set the parameters.

Yes, this is better, since the timeout parameters only matter for the 
enqueue. However, we should initialize all fields if RTEMS_DEBUG is enabled.
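One way to do the RTEMS_DEBUG initialization, as a standalone sketch: the context layout and helper name are illustrative, and RTEMS_DEBUG is force-defined here only so this sketch exercises the debug path.

```c
#include <stdint.h>
#include <string.h>

#ifndef RTEMS_DEBUG
#define RTEMS_DEBUG  /* force the debug path for this standalone sketch */
#endif

/* Illustrative, simplified context; the field names follow the patch
 * fragment in this thread, the layout is an assumption. */
typedef struct {
  uint64_t timeout_interval;
  int      timeout_discipline;
} Thread_queue_Context;

/* Under RTEMS_DEBUG, poison the fields so a producer that forgets to
 * set them fails visibly instead of silently reusing stale stack data. */
static void _Thread_queue_Context_initialize( Thread_queue_Context *ctx )
{
#if defined( RTEMS_DEBUG )
  memset( ctx, 0x7f, sizeof( *ctx ) );
#else
  (void) ctx;
#endif
}
```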

> Perhaps a sane default of NO_TIMEOUT is
> good, and will reduce the size of my patch.
>
> Tomorrow I'll work on cleaning this up some more. Wrapping the
> WATCHDOG_NO_TIMEOUT in the Watchdog_Discipline means the structure no
> longer matches the Per_CPU_Watchdog_index values. So a little bit
> of massaging has to be done there.

I think we do not need a defined relationship with 
Per_CPU_Watchdog_index, due to:

+  switch ( queue_context->timeout_discipline ) {
+    case WATCHDOG_RELATIVE:
+      _Thread_Timer_insert_relative(
+        the_thread,
+        cpu_self,
+        _Thread_Timeout,
+        (Watchdog_Interval) queue_context->timeout_interval
+      );
+      break;
+    case WATCHDOG_ABSOLUTE:
+      _Thread_Timer_insert_absolute(
+        the_thread,
+        cpu_self,
+        _Thread_Timeout,
+        queue_context->timeout_interval
+      );
+      break;
+    default:
+      break;
    }
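The dispatch above can be exercised standalone; here the two timer insert paths are stubbed out as counters, and everything except the names taken from the snippet is illustrative:

```c
#include <stdint.h>

typedef enum {
  WATCHDOG_NO_TIMEOUT,
  WATCHDOG_RELATIVE,
  WATCHDOG_ABSOLUTE
} Watchdog_Discipline;

typedef struct {
  uint64_t            timeout_interval;
  Watchdog_Discipline timeout_discipline;
} Thread_queue_Context;

/* Stub counters standing in for _Thread_Timer_insert_relative() and
 * _Thread_Timer_insert_absolute(). */
static unsigned relative_inserts;
static unsigned absolute_inserts;

/* Mirrors the switch in the patch: pick the timer insert path from the
 * discipline stored in the queue context; NO_TIMEOUT arms no timer. */
static void insert_timeout( const Thread_queue_Context *queue_context )
{
  switch ( queue_context->timeout_discipline ) {
    case WATCHDOG_RELATIVE:
      ++relative_inserts;  /* would call _Thread_Timer_insert_relative() */
      break;
    case WATCHDOG_ABSOLUTE:
      ++absolute_inserts;  /* would call _Thread_Timer_insert_absolute() */
      break;
    default:
      break;               /* WATCHDOG_NO_TIMEOUT: block without a timer */
  }
}
```

Because the mapping lives entirely in this switch, the enumerator values never need to line up with the Per_CPU_Watchdog_index values.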


-- 
Sebastian Huber, embedded brains GmbH

Address : Dornierstr. 4, D-82178 Puchheim, Germany
Phone   : +49 89 189 47 41-16
Fax     : +49 89 189 47 41-09
E-Mail  : sebastian.huber at embedded-brains.de
PGP     : Public key available on request.

This message is not a commercial communication within the meaning of the EHUG.



