[RTEMS Project] #2271: Improved Timekeeping
RTEMS trac
trac at rtems.org
Tue Feb 17 15:31:03 UTC 2015
#2271: Improved Timekeeping
-----------------------------+-----------------------------
Reporter: sebastian.huber | Owner: sebastian.huber
Type: enhancement | Status: new
Priority: normal | Milestone: 4.11.1
Component: cpukit | Version: 4.11
Severity: normal | Keywords:
-----------------------------+-----------------------------
= Benefit =
Improved average-case and worst-case performance. Uniprocessor
configurations will also benefit from some of the changes, e.g. the
restart-free watchdog insert and the FreeBSD timecounters.
= Problem Description =
The timekeeping is an important part of an operating system. It includes
* timer services,
* timeout options for operating system operations, and
* time of day services, e.g. timestamps.
Timestamps are frequently used, for example by the network stack, to manage
timeouts in various network protocols.
On RTEMS the timekeeping is implemented using
* a basic time representation in 64-bit nanoseconds (or struct timespec),
* watchdog delta chains for timer and timeout services,
* a clock tick function {{{rtems_clock_tick()}}}, and
* an optional nanoseconds extension to get the nanoseconds elapsed since
the last clock tick.
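As a rough sketch (the names and the 10 ms tick are assumptions for
illustration, not the actual RTEMS internals), the current scheme composes a
timestamp from the tick counter plus the optional driver-supplied nanoseconds
since the last tick:
{{{
#!c
#include <stddef.h>
#include <stdint.h>

#define NS_PER_TICK 10000000u /* assumed 10 ms clock tick */

/* Tick counter maintained by rtems_clock_tick(); name hypothetical. */
static uint64_t tick_count;

/* Optional nanoseconds extension supplied by the clock driver: returns
 * the nanoseconds elapsed since the last clock tick interrupt. */
static uint32_t (*nanoseconds_since_last_tick)(void);

/* Uptime in 64-bit nanoseconds, composed from both sources. */
static uint64_t uptime_ns(void)
{
  uint64_t ns = tick_count * NS_PER_TICK;

  if (nanoseconds_since_last_tick != NULL)
    ns += nanoseconds_since_last_tick();

  return ns;
}

/* Stand-in for a clock driver's nanoseconds extension. */
static uint32_t fake_driver_nsec(void) { return 1234u; }
}}}
Without the nanoseconds extension the resolution is limited to the tick
interval; with it, the resolution is whatever the driver counter provides.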
== Global Watchdog Delta Chain ==
RTEMS uses two global watchdog delta chains: one for clock tick based
timers and one for seconds based timers. This approach does not scale,
since each additional processor adds more load to the watchdog handler,
making it a bottleneck in the system. Inter-processor interrupts must be
issued to propagate scheduling decisions from the processor serving the
clock tick interrupt to the processor assigned to execute a thread. The
insert operation has O(n) time complexity, so it is desirable to keep the
watchdog delta chains short. In addition, removal of a watchdog control
during an insert forces the insert procedure to restart.
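A minimal sketch of such a delta chain insert (simplified; the real RTEMS
watchdog control carries more state): each node stores its expiry relative to
its predecessor, so a clock tick only has to decrement the head, while an
insert must walk the chain in O(n):
{{{
#!c
#include <stddef.h>
#include <stdint.h>

/* Simplified watchdog node: delta is the tick count remaining *after*
 * the predecessor fires, so only the head decays on each clock tick. */
typedef struct watchdog {
  struct watchdog *next;
  uint32_t delta; /* ticks relative to the previous node */
} watchdog;

/* O(n) insert: walk forward, consuming deltas, until the new watchdog's
 * remaining interval fits before the next node. */
static void watchdog_insert(watchdog **head, watchdog *w, uint32_t interval)
{
  watchdog **link = head;

  while (*link != NULL && (*link)->delta <= interval) {
    interval -= (*link)->delta;
    link = &(*link)->next;
  }

  w->delta = interval;
  w->next = *link;

  if (*link != NULL)
    (*link)->delta -= interval; /* keep the successor's delta relative */

  *link = w;
}
}}}
A removal during a concurrent insert invalidates the iterator position in the
chain, which is why the current implementation restarts the walk.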
== Giant Lock for Watchdog Delta Chain ==
The watchdog handler disables interrupts to protect critical sections.
Since this is insufficient on SMP configurations, the complete clock tick
function is executed under Giant lock protection on SMP. Thus the Giant
lock section time depends on the execution time of all timer services in
one clock tick, which can be arbitrarily long and is beyond the control of
the operating system.
== TOD Lock Contention ==
The time of day (TOD) lock protects the current time of day and the uptime
values (both are in 64-bit nanoseconds). During the
{{{rtems_clock_tick()}}} procedure these values are updated. In
combination with the nanoseconds extension they deliver a time resolution
below the clock tick if supported by the clock driver.
Profiling reveals contention on the time of day (TOD) lock. For example,
for the test program ''SMPMRSP 1'' on the 200 MHz NGMP we obtain the
following SMP lock profile for the TOD lock:
{{{
#!xml
<SMPLockProfilingReport name="TOD">
<MaxAcquireTime unit="ns">2695</MaxAcquireTime>
<MaxSectionTime unit="ns">1785</MaxSectionTime>
<MeanAcquireTime unit="ns">499</MeanAcquireTime>
<MeanSectionTime unit="ns">734</MeanSectionTime>
<TotalAcquireTime unit="ns">1535529630</TotalAcquireTime>
<TotalSectionTime unit="ns">2254450075</TotalSectionTime>
<UsageCount>3071324</UsageCount>
<ContentionCount initialQueueLength="0">2049757</ContentionCount>
<ContentionCount initialQueueLength="1">1018726</ContentionCount>
<ContentionCount initialQueueLength="2">2840</ContentionCount>
<ContentionCount initialQueueLength="3">1</ContentionCount>
</SMPLockProfilingReport>
}}}
This shows that the time of day (TOD) lock is heavily used in this test
program and that contention is high (users have a 50% chance that the
lock is not immediately free). The lock is acquired on every context
switch and is the last global data structure in the thread dispatch path.
This state shared among all processors is a performance penalty.
== Problem Report #2180 ==
The nanoseconds extension and the standard clock drivers are broken on
SMP, see #2180. The problem is that the time reference points for the
operating system (global TOD structure, containing the time of day and
uptime) and the clock driver (nanoseconds extension) are inconsistent for
a non-zero length time interval during the clock tick procedure.
The software-updated reference point for the nanoseconds extension makes
clock driver implementations difficult. Naively written clock drivers
usually fail in the test program ''SPNSEXT 1''.
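The inconsistency can be illustrated with a sketch (simplified, hypothetical
names): the operating system reference (TOD) is advanced in software by the
tick handler, while the driver counter behind the nanoseconds extension wraps
in hardware, so between the counter wrap and the TOD update a reader observes
time going backwards:
{{{
#!c
#include <stdint.h>

#define NS_PER_TICK 10000000u

/* Two separate reference points, as in the pre-timecounter design: the
 * OS-maintained TOD (advanced in rtems_clock_tick()) and the driver
 * counter behind the nanoseconds extension. */
static uint64_t tod_ns;         /* OS reference, updated in software   */
static uint32_t driver_elapsed; /* ns since last tick, from hardware   */

/* A reader combines both reference points; if they are updated at
 * different instants during the clock tick, the sum is not monotonic. */
static uint64_t read_time(void)
{
  return tod_ns + driver_elapsed;
}
}}}
The timecounter design described below avoids this by deriving both the tick
bookkeeping and the sub-tick resolution from one consistent reference.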
== Expensive Operation to Convert 64-bit Nanoseconds ==
Converting 64-bit nanoseconds values into the common struct timeval or
struct timespec formats requires a 64-bit division to obtain the seconds
value. Depending on the hardware support, this is a potentially expensive
operation (it is on SPARC; see {{{__divdi3()}}} in libgcc.a).
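For illustration, a conversion along these lines (a sketch, not the RTEMS
source) needs one 64-bit division and one 64-bit remainder, each of which
expands to a libgcc call on targets without hardware 64-bit divide:
{{{
#!c
#include <stdint.h>
#include <time.h>

#define NS_PER_SEC 1000000000u

/* Split a 64-bit nanoseconds value into seconds and nanoseconds. */
static struct timespec ns_to_timespec(uint64_t ns)
{
  struct timespec ts;

  ts.tv_sec = (time_t)(ns / NS_PER_SEC); /* 64-bit division  */
  ts.tv_nsec = (long)(ns % NS_PER_SEC);  /* 64-bit remainder */

  return ts;
}
}}}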
= Problem Solution =
== Watchdog Handler ==
Move the global watchdog state variables
* {{{_Watchdog_Sync_level}}},
* {{{_Watchdog_Sync_count}}},
* {{{_Watchdog_Ticks_chain}}}, and
* {{{_Watchdog_Seconds_chain}}}
into a watchdog context structure and modify the watchdog operations to
use a watchdog context instead of global variables directly.
Add an
[http://www.rtems.org/onlinedocs/doxygen/cpukit/html/group__ClassicINTRLocks.html
interrupt lock] to protect the watchdog state changes.
Replace the watchdog synchronization level and count, because the current
approach does not work on SMP: interrupts can happen not only on the local
processor. The watchdog operations are
* watchdog insert (requires a forward iteration of the delta chain, O(n);
due to the possibility of restarts, it has an essentially unbounded
execution time with the current implementation),
* watchdog removal (constant time operation, O(1)), and
* watchdog adjust (requires a forward iteration of the delta chain in the
worst-case, O(n)).
The watchdog synchronization level and count are used to detect the removal
of watchdogs during a watchdog insert procedure. If a removal is detected,
the iteration restarts. This can be avoided using a technique similar to
the SMP lock statistics iteration
({{{SMP_lock_Stats_iteration_context}}}). This would turn all watchdog
operations into worst-case O(n) operations. For insert and adjust, n is
the count of watchdogs in the chain; for removal, n is the count of
threads performing an insert operation.
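A sketch of the proposed regrouping (the structure layout and the lock
placeholder are assumptions; only the variable names come from the list
above): the former globals become fields of a context, and the operations
take a context pointer, so each scheduler instance can own its own copy:
{{{
#!c
#include <stddef.h>
#include <stdint.h>

/* Placeholder for the interrupt lock named in the ticket; on a real
 * SMP target this would be an ISR lock. */
typedef struct { int locked; } interrupt_lock;

struct watchdog_node;

/* The former globals (_Watchdog_Sync_level, _Watchdog_Sync_count,
 * _Watchdog_Ticks_chain, _Watchdog_Seconds_chain) gathered into one
 * context structure. */
typedef struct {
  interrupt_lock lock;
  uint32_t sync_level;
  uint32_t sync_count;
  struct watchdog_node *ticks_chain;
  struct watchdog_node *seconds_chain;
} watchdog_context;

/* Operations use the context instead of touching globals directly. */
static void watchdog_context_init(watchdog_context *ctx)
{
  ctx->lock.locked = 0;
  ctx->sync_level = 0;
  ctx->sync_count = 0;
  ctx->ticks_chain = NULL;
  ctx->seconds_chain = NULL;
}
}}}
With one such context per scheduler instance, clock tick processing on one
scheduler no longer contends with the watchdog chains of another.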
Move the watchdog context into the scheduler context to use one watchdog
context per scheduler instance. Take care that active watchdogs move in
case of a scheduler change of a thread.
== Time of Day ==
Use [http://phk.freebsd.dk/pubs/timecounter.pdf FreeBSD timecounters].
This also enables proper support for the
[http://tools.ietf.org/html/rfc5905 Network Time Protocol (NTP)] and
[http://tools.ietf.org/html/rfc2783 Pulse Per Second (PPS)].
In order to use the timecounters, the platform must provide
* one periodic interval interrupt to trigger {{{rtems_clock_tick()}}}, and
* one free running global counter with a resolution below the clock tick
interval.
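As a sketch of the timecounter idea (field names loosely follow FreeBSD's
{{{sys/timetc.h}}}, but this is a simplification, not the real interface):
the clock driver exposes the free running counter, its mask, and its
frequency, and the timekeeping code derives elapsed time from counter
deltas, handling wrap-around via the mask:
{{{
#!c
#include <stdint.h>

/* Simplified FreeBSD-style timecounter descriptor. */
struct timecounter {
  uint32_t (*tc_get_timecount)(struct timecounter *);
  uint32_t tc_counter_mask; /* mask for the counter width       */
  uint64_t tc_frequency;    /* counter increments per second    */
  const char *tc_name;
};

/* Nanoseconds elapsed between two counter reads; the mask makes the
 * subtraction wrap-safe for counters narrower than 32 bits. */
static uint64_t tc_delta_ns(struct timecounter *tc,
                            uint32_t then, uint32_t now)
{
  uint32_t delta = (now - then) & tc->tc_counter_mask;

  return (uint64_t)delta * 1000000000u / tc->tc_frequency;
}

/* Stand-in for a driver's counter read function. */
static uint32_t dummy_read(struct timecounter *tc)
{
  (void)tc;
  return 0;
}
}}}
The real FreeBSD implementation additionally scales via pre-computed binary
fractions to avoid the division on every read.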
This change makes it necessary to touch every clock driver in the RTEMS
sources. There are 40 clock drivers (20 of them with a nanoseconds
extension) using the clock driver shell header file and 23 clock drivers
(3 of them with a nanoseconds extension) with a custom implementation
structure.
This free running global counter is an additional requirement, so it may
be impossible to convert every clock driver. However, it is feasible to
adjust the [http://phk.freebsd.dk/pubs/timecounter.pdf FreeBSD
timecounters] implementation, which uses ten timehands by default, to
avoid this additional requirement. Platforms lacking a free running global
counter can reduce the timehands to one. In this case the periodic timer
used to generate the clock tick interrupt can serve as the counter (as in
the current nanoseconds extension).
--
Ticket URL: <http://devel.rtems.org/ticket/2271>
RTEMS Project <http://www.rtems.org/>