[PATCH v1] bsps/riscv: Give enough time for clock driver initialization
Gedare Bloom
gedare at rtems.org
Wed Sep 15 15:30:22 UTC 2021
On Thu, Sep 9, 2021 at 2:39 AM <Jan.Sommer at dlr.de> wrote:
>
> Sorry for digging out this old patch.
>
> > On Mon, Jun 7, 2021 at 1:57 PM <Jan.Sommer at dlr.de> wrote:
> >
> >
> > > -----Original Message-----
> > > From: Gedare Bloom <gedare at rtems.org>
> > > Sent: Monday, June 7, 2021 7:00 PM
> > > To: Sommer, Jan <Jan.Sommer at dlr.de>
> > > Cc: devel at rtems.org
> > > Subject: Re: [PATCH v1] bsps/riscv: Give enough time for clock driver
> > > initialization
> > >
> > > On Mon, Jun 7, 2021 at 9:47 AM Jan Sommer <jan.sommer at dlr.de> wrote:
> > > >
> > > > - Clock driver initialization for the secondary cores had to complete in
> > > >   less than one tick.
> > > > - If the tick time is small (e.g. <= 1 ms), setting up all cores can take
> > > >   too long and a fatal error is thrown.
> > > > - Give the clock initialization at least 10 ms to avoid this error.
> > >
> > > Is there a reason to pick 10?
> > >
> >
> > In QEMU I (coarsely) measured about 1.5 ms for 8 cores.
> > So I thought this should add enough buffer to prevent a fatal error.
> > I could probably also reduce it to 5 ms.
> >
> > > I assume this blocks/idles the system until the interval elapses, so it would be
> > > good to minimize waste (subject to Joel's noted rant about premature
> > > optimization).
> > >
> >
> > No. I'm more worried about boot time. :)
> >
>
> Would a few milliseconds be acceptable?
>
Is there any way to poll?
I'm not totally clear on the boot vs secondary processor
initialization sequence, and how closely they need to synchronize at
this stage.
What hardware do you have this problem with?
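For illustration, one rough shape a polling approach could take; all helper
names below are placeholders (not the existing BSP or RTEMS API), and this is
only a sketch of the idea, not a proposal for the actual driver:

  #include <stdatomic.h>
  #include <stdbool.h>
  #include <stdint.h>

  /* Placeholder helpers -- stand-ins for whatever the BSP really provides. */
  static uint32_t secondary_processor_count(void) { return 7; } /* e.g. 8 harts */
  static uint64_t read_mtime(void) { static uint64_t t; return t += 100; }

  static atomic_uint harts_ready;

  /* Each secondary hart would call this after programming its mtimecmp. */
  static void mark_hart_ready(void)
  {
    atomic_fetch_add(&harts_ready, 1);
  }

  /*
   * Boot hart: rather than requiring all secondary harts to finish within one
   * tick interval, poll a completion counter with a generous timeout (given
   * in mtime counts) and only treat it as fatal if the timeout expires.
   */
  static bool wait_for_secondary_clocks(uint64_t timeout_in_mtime_counts)
  {
    uint64_t deadline = read_mtime() + timeout_in_mtime_counts;

    while (atomic_load(&harts_ready) < secondary_processor_count()) {
      if (read_mtime() >= deadline) {
        return false; /* the caller would invoke bsp_fatal() here */
      }
    }

    return true;
  }

The trade-off is an extra shared counter and a bounded busy-wait on the boot
hart instead of a fixed safety margin on the first tick.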
> >
> >
> > If you take a look at the clock initialization of the secondary cores (https://git.rtems.org/rtems/tree/bsps/riscv/riscv/clock/clockdrv.c#n178):
> >
> >   _SMP_Othercast_action(riscv_clock_secondary_action, &cmpval);
> >
> >   if (cmpval - riscv_clock_read_mtime(&clint->mtime) >= interval) {
> >     bsp_fatal(RISCV_FATAL_CLOCK_SMP_INIT);
> >   }
> >
> > It will put the first clock tick 10 ms into the future (instead of just one tick), but it won't block the system initialization.
> > It only avoids entering the if condition by using a large enough value for interval; the runtime of the clock initialization itself stays the same.
> >
> > How does this impact the timeline for boot to first application thread?
> >
> > Is there a period where the system is up but only one core?
> >
> > Any other oddities I might be missing?
> >
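To make the failure mode concrete, here is a small worked example with assumed
numbers (QEMU's CLINT mtime typically runs at 10 MHz; the 1.5 ms figure is the
measurement quoted above):

  #include <stdint.h>
  #include <stdio.h>

  /*
   * Assuming a 10 MHz mtime timebase, a 1 ms tick is an interval of 10 000
   * counts.  If programming all secondary harts takes ~1.5 ms (~15 000
   * counts), mtime has already passed cmpval when the boot hart re-reads it,
   * the unsigned subtraction wraps, and the sanity check trips.
   */
  int main(void)
  {
    uint64_t interval = 10000;            /* 1 ms tick at an assumed 10 MHz  */
    uint64_t start    = 123456789;        /* arbitrary mtime value at init   */
    uint64_t cmpval   = start + interval; /* first tick scheduled 1 ms out   */
    uint64_t mtime    = start + 15000;    /* ~1.5 ms later: SMP setup done   */

    if (cmpval - mtime >= interval) {
      puts("check trips -> bsp_fatal(RISCV_FATAL_CLOCK_SMP_INIT)");
    }

    return 0;
  }

With a 10 ms tick the same 1.5 ms of setup fits comfortably inside the
interval, which is why the problem only shows up for short ticks.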
>
> The main effect is that the interrupt for the first tick is delayed by a few milliseconds, so that there is enough time for the timers of all cores to initialize, even in configurations with short tick intervals.
> This should behave much like a configuration where the tick time itself is long (i.e. several milliseconds).
>
> Maybe an alternative solution would be to do this in riscv_clock_secondary_initialization.
> If the check there (see the quoted function below, followed by a rough sketch of this idea) fails, try again with a longer time interval for the clock initialization and only raise the fatal error if that does not help either.
> Then it would not affect configurations that already work and would only add a small penalty to the setups where it currently fails.
> Would this be acceptable?
>
> 178 static void riscv_clock_secondary_initialization(
> 179   volatile RISCV_CLINT_regs *clint,
> 180   uint64_t cmpval,
> 181   uint32_t interval
> 182 )
> 183 {
> 184 #if defined(RTEMS_SMP) && !defined(CLOCK_DRIVER_USE_ONLY_BOOT_PROCESSOR)
> 185   _SMP_Othercast_action(riscv_clock_secondary_action, &cmpval);
> 186
> 187   if (cmpval - riscv_clock_read_mtime(&clint->mtime) >= interval) {
> 188     bsp_fatal(RISCV_FATAL_CLOCK_SMP_INIT);
> 189   }
> 190 #endif
> 191 }
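A rough sketch of that retry idea, reusing only the functions already visible
in the quoted code and patch; the retry count and the factor of 10 are
arbitrary choices for illustration, and it assumes riscv_clock_secondary_action()
can safely be run a second time with a new cmpval:

  static void riscv_clock_secondary_initialization(
    volatile RISCV_CLINT_regs *clint,
    uint64_t cmpval,
    uint32_t interval
  )
  {
  #if defined(RTEMS_SMP) && !defined(CLOCK_DRIVER_USE_ONLY_BOOT_PROCESSOR)
    uint64_t window  = interval; /* time budget granted for the SMP setup */
    int      retries = 3;        /* arbitrary, for illustration           */

    _SMP_Othercast_action(riscv_clock_secondary_action, &cmpval);

    /* The check trips when mtime has already passed cmpval (unsigned wrap). */
    while (cmpval - riscv_clock_read_mtime(&clint->mtime) >= window) {
      if (retries-- == 0) {
        bsp_fatal(RISCV_FATAL_CLOCK_SMP_INIT);
      }

      /* Grant a larger budget, push the first tick out, and re-arm everyone. */
      window = 10 * window;
      cmpval = riscv_clock_read_mtime(&clint->mtime) + window;
      riscv_clock_clint_init(clint, cmpval, 0);
      _SMP_Othercast_action(riscv_clock_secondary_action, &cmpval);
    }
  #endif
  }

As long as the common case succeeds on the first attempt, this leaves the
existing behavior untouched and only slows down the configurations that would
otherwise hit the fatal error.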
>
> Best regards,
>
> Jan
>
> > > > ---
> > > > bsps/riscv/riscv/clock/clockdrv.c | 8 +++++++-
> > > > 1 file changed, 7 insertions(+), 1 deletion(-)
> > > >
> > > > diff --git a/bsps/riscv/riscv/clock/clockdrv.c
> > > > b/bsps/riscv/riscv/clock/clockdrv.c
> > > > index 3afe86576f..102137aeab 100644
> > > > --- a/bsps/riscv/riscv/clock/clockdrv.c
> > > > +++ b/bsps/riscv/riscv/clock/clockdrv.c
> > > > @@ -211,7 +211,13 @@ static void riscv_clock_initialize(void)
> > > > tc->interval = interval;
> > > >
> > > > cmpval = riscv_clock_read_mtime(&clint->mtime);
> > > > - cmpval += interval;
> > > > + /*
> > > > + * For very short intervals the time of 1 tick is not enough to
> > > > + * set up the timer on all cores in SMP systems.
> > > > + * Give the CPU at least 10 ms.
> > > > + */
> > > > +  interval = (10000 / us_per_tick) * interval;
> > > > +  cmpval += interval;
> > > >
> > > > riscv_clock_clint_init(clint, cmpval, 0);
> > > > riscv_clock_secondary_initialization(clint, cmpval, interval);
> > > > --
> > > > 2.17.1
> > > >
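As a quick arithmetic check of the scaling in the patch (plain C, nothing
BSP-specific): the factor (10000 / us_per_tick) converts one tick into roughly
10 ms worth of ticks; note that it is an integer division, so it only scales
up for ticks shorter than 10 ms.

  #include <stdint.h>
  #include <stdio.h>

  int main(void)
  {
    const uint32_t ticks_us[] = { 100, 1000, 10000 };

    /* For each configured tick length, show how far out the first tick lands. */
    for (unsigned i = 0; i < sizeof(ticks_us) / sizeof(ticks_us[0]); ++i) {
      uint32_t us_per_tick = ticks_us[i];
      uint32_t factor      = 10000 / us_per_tick;

      printf("us_per_tick = %5u -> factor = %3u -> first tick %5u us out\n",
             us_per_tick, factor, factor * us_per_tick);
    }

    return 0;
  }

For a 100 us tick the factor is 100, for a 1 ms tick it is 10, and in both
cases the first compare value ends up 10 ms in the future.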