Interrupt latency in RTEMS (Zedboard)
BRIARD Sebastien
sebastien.briard at thalesaleniaspace.com
Wed Mar 28 06:23:19 UTC 2018
Well, thank you very much for these answers!
Sébastien.
-----Original Message-----
From: Chris Johns [mailto:chrisj at rtems.org]
Sent: Wednesday, March 28, 2018 02:12
To: joel at rtems.org; BRIARD Sebastien
Cc: users at rtems.org
Subject: Re: Interrupt latency in RTEMS (Zedboard)
On 28/03/2018 01:28, Joel Sherrill wrote:
> On Tue, Mar 27, 2018 at 8:05 AM, BRIARD Sebastien
> <sebastien.briard at thalesaleniaspace.com> wrote:
>
> Okay, I realized how I was confusing the clock tick (for timekeeping)
> with interrupt latency and why the results were not the ones I expected.
> I still have one question: is there a macro to choose/impose the
> processor frequency?
>
> If you mean the actual clock rate of the CPU, then it would be a
> BSP-specific option and it would have to match the hardware. All the
> options for this BSP are in
> c/src/lib/libbsp/arm/xilinx_zynq/configure.ac.
The frequency for the Cortex-A9 is exported in the Xilinx headers created when you export the design in Vivado. You need a way to import those defines into your code.
A few of the clock-related setups in this BSP use weak symbols, so you can provide a version in your code that matches the frequency your designers have set in Vivado.
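As a minimal sketch of such an override (assuming a9mpcore_clock_periphclk() is one of those weak hooks and that your exported design provides an XPAR_PS7_CORTEXA9_0_CPU_CLK_FREQ_HZ define; check both names against your BSP source and exported headers):

    #include <stdint.h>

    /* Assumed to come from the Vivado-exported xparameters.h; the
       macro name and value depend on your design. */
    #ifndef XPAR_PS7_CORTEXA9_0_CPU_CLK_FREQ_HZ
    #define XPAR_PS7_CORTEXA9_0_CPU_CLK_FREQ_HZ 666666687U
    #endif

    /* Providing this function overrides the weak BSP default, so the
       clock driver runs with the frequency set up in Vivado.  On the
       Zynq the A9 private and global timers are clocked at half the
       CPU clock. */
    uint32_t a9mpcore_clock_periphclk(void)
    {
      return XPAR_PS7_CORTEXA9_0_CPU_CLK_FREQ_HZ / 2;
    }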
> For the clock tick length, you have to balance overloading the CPU
> with clock tick interrupts on one extreme and not advancing time on
> the other. Your 1 microsecond per tick is dangerously close to the
> no-forward-progress end. At some point, you have to consider that it
> takes a minimum number of instructions to process the interrupt and,
> from that, a minimum length of time.
>
> At a tick rate near 1 MHz, you can probably handle the 1 usec clock
> ticks alone, but there won't be much time left to do real work. I
> experimented years ago on a 400 MHz embedded PowerPC and you could go
> very low on the clock tick, but it was clear from the test code I was
> running that less and less non-interrupt time was left for application
> processing. So as the interrupt rate went up, the overall application
> throughput went down.
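To put rough numbers on that tradeoff: the tick length is configured through confdefs.h, and with about 1 usec of interrupt handling per tick (close to the minimum measured below), a 1 msec tick costs roughly 0.1% of the CPU, while a 1 usec tick would consume essentially all of it. A minimal configuration sketch, with illustrative values not taken from this thread:

    /* Classic API configuration sketch; illustrative values. */
    #define CONFIGURE_APPLICATION_NEEDS_CLOCK_DRIVER
    #define CONFIGURE_APPLICATION_NEEDS_CONSOLE_DRIVER

    /* 1000 us per tick = 1 kHz tick rate: ~1 us of handling per tick
       is ~0.1% overhead.  Setting this to 1 asks for a 1 MHz tick
       rate, leaving essentially no time for application work. */
    #define CONFIGURE_MICROSECONDS_PER_TICK 1000

    #define CONFIGURE_MAXIMUM_TASKS 4
    #define CONFIGURE_RTEMS_INIT_TASKS_TABLE

    #define CONFIGURE_INIT
    #include <rtems/confdefs.h>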
Looking at one interrupt on a Zynq I have at hand, I am seeing a
minimum time of 1040 nsecs and a max of 3597 nsecs. This is the time
from raising the request, getting into the interrupt, and acknowledging
it in a PL register. These times are measured in the PL, so no software
or related overheads are involved.
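For a rough software-side cross-check, which unlike the PL measurement does include the software overheads, a sketch using the RTEMS counter API could look like the following; the vector number and the PL trigger/ack register here are hypothetical placeholders, so take the real values from your exported Vivado design:

    #include <rtems.h>
    #include <rtems/counter.h>
    #include <rtems/irq-extension.h>
    #include <rtems/bspIo.h>

    /* Hypothetical PL interrupt vector and trigger/ack register for
       this sketch only. */
    #define PL_IRQ_VECTOR 61
    static volatile uint32_t *const pl_trigger =
      (volatile uint32_t *) 0x43c00000;

    static volatile rtems_counter_ticks t_enter;
    static volatile int isr_done;

    static void pl_isr(void *arg)
    {
      (void) arg;
      t_enter = rtems_counter_read();   /* timestamp on ISR entry */
      *pl_trigger = 0;                  /* acknowledge/clear in the PL */
      isr_done = 1;
    }

    static void measure_once(void)
    {
      rtems_counter_ticks t_raise;
      uint64_t ns;

      rtems_interrupt_handler_install(PL_IRQ_VECTOR, "pl-lat",
        RTEMS_INTERRUPT_UNIQUE, pl_isr, NULL);

      isr_done = 0;
      t_raise = rtems_counter_read();
      *pl_trigger = 1;                  /* raise the request in the PL */

      while (!isr_done) {
        /* spin until the interrupt has been handled */
      }

      ns = rtems_counter_ticks_to_nanoseconds(
        rtems_counter_difference(t_enter, t_raise));
      printk("software-visible latency: %llu ns\n",
        (unsigned long long) ns);
    }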
Chris