[PATCH rtems-libbsd 3/7] sys/kern: Add lockmgr support

Gedare Bloom gedare at rtems.org
Wed Jul 28 23:42:33 UTC 2021


On Wed, Jul 28, 2021 at 5:05 PM Chris Johns <chrisj at rtems.org> wrote:
>
> On 29/7/21 12:03 am, Gedare Bloom wrote:
> > On Tue, Jul 27, 2021 at 2:59 AM <chrisj at rtems.org> wrote:
> >> From: Chris Johns <chrisj at rtems.org>
> >>
> >>  struct lock_object {
> >> -#ifndef __rtems__
> >>         const   char *lo_name;          /* Individual lock name. */
> >>         u_int   lo_flags;
> >>         u_int   lo_data;                /* General class specific data. */
> >> +#ifndef __rtems__
> >>         struct  witness *lo_witness;    /* Data for witness. */
> >> -#else /* __rtems__ */
> >> -       unsigned int lo_flags;
> >> +#endif /* __rtems__ */
> >> +#ifdef __rtems__
> > You can use
> > #else /* __rtems__ */
> > instead of the endif+ifdef pair.
>
> I will see how this ends up after looking at the fields Sebastian wants removed.
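OK. For reference, this is the shape I had in mind (a sketch only; the
field names are taken from the hunk above, and I am assuming the RTEMS
branch keeps the rtems_bsd_mutex lo_mtx member that lockmgr_xholder()
dereferences further down):

struct lock_object {
  const char *lo_name;          /* Individual lock name. */
  u_int lo_flags;
  u_int lo_data;                /* General class specific data. */
#ifndef __rtems__
  struct witness *lo_witness;   /* Data for witness. */
#else /* __rtems__ */
  rtems_bsd_mutex lo_mtx;       /* RTEMS mutex backing the lock. */
#endif /* __rtems__ */
};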
>
> >> +static void
> >> +lock_lockmgr(struct lock_object *lock, uintptr_t how)
> >> +{
> >> +  panic("lockmgr locks do not support sleep interlocking");
> >> +}
> >> +
> >> +static uintptr_t
> >> +unlock_lockmgr(struct lock_object *lock)
> >> +{
> >> +  panic("lockmgr locks do not support sleep interlocking");
> >> +}
> >> +
> > Should there be different messages for the two cases here?
>
> Sure. Either case means something is being used by newly ported code and
> these calls need to be looked into.
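Thanks. Something along these lines would be enough to tell the two
paths apart (sketch only; the exact wording is just a suggestion):

static void
lock_lockmgr(struct lock_object *lock, uintptr_t how)
{
  /* Distinct messages so the log alone identifies the caller. */
  panic("lockmgr locks do not support sleep interlocking (lock)");
}

static uintptr_t
unlock_lockmgr(struct lock_object *lock)
{
  panic("lockmgr locks do not support sleep interlocking (unlock)");
}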
>
> >
> >> +static struct thread *
> >> +lockmgr_xholder(const struct lock *lk)
> >> +{
> >> +  uintptr_t x;
> >> +  x = lk->lk_lock;
> >> +  if ((x & LK_SHARE))
> >> +    return NULL;
> >> +  return rtems_bsd_get_thread(lk->lock_object.lo_mtx.queue.Queue.owner);
> >> +}
> >> +
> >> +static void
> >> +lockmgr_exit(u_int flags, struct lock_object *ilk, int wakeup_swapper)
> > Any good reason to use ilk instead of lk as elsewhere?
>
> None. I copied the decl from FreeBSD ...
>
> freebsd/sys/kern/kern_lock.c:lockmgr_exit(u_int flags, struct lock_object *ilk,
> int wakeup_swapper)
>
> >> +static int
> >> +lockmgr_slock_hard(struct lock *lk, u_int flags, struct lock_object *ilk,
> >> +                  const char *file, int line)
> >> +{
> >> +  uintptr_t x;
> >> +  int error = 0;
> >> +  lockmgr_trace("slock", 'I', lk);
> >> +  rtems_bsd_mutex_lock(&lk->lock_object);
> >> +  x = lk->lk_lock;
> >> +  atomic_store_rel_ptr(&lk->lk_lock, x + LK_ONE_SHARER);
> >> +  if (rtems_bsd_mutex_recursed(&lk->lock_object))
> >> +    lk->lk_recurse++;
> >> +  else
> >> +    lk->lk_recurse = 0;
> >> +  lockmgr_trace("slock", 'O', lk);
> >> +  LOCK_LOG_LOCK("SLOCK", &lk->lock_object, 0, lk->lk_recurse, file, line);
> >> +  lockmgr_exit(flags, ilk, 0);
> >> +  return error;
> > This is always 0 (successful)?
>
> Yeap.
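OK. For anyone reading along, the non-zero returns on this path in
FreeBSD come from the try/no-wait cases; roughly (a sketch of the shape
per sys/kern/kern_lock.c, not a suggested change):

  /* FreeBSD bails out with EBUSY instead of sleeping when the
     caller passed LK_NOWAIT and the lock cannot be taken. */
  if ((flags & LK_NOWAIT) != 0) {
    error = EBUSY;
    goto out;
  }

Since the RTEMS version always blocks in rtems_bsd_mutex_lock(), a
constant 0 makes sense here.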
>
> >> +static int
> >> +lockmgr_upgrade(struct lock *lk, u_int flags, struct lock_object *ilk,
> >> +               const char *file, int line)
> >> +{
> >> +  uintptr_t x, v;
> >> +  int error = 0;
> >> +  LOCK_LOG_LOCK("XUPGRADE", &lk->lock_object, 0, 0, file, line);
> >> +  lockmgr_trace("xupgrade", 'I', lk);
> >> +  v = lk->lk_lock;
> >> +  x = v & ~LK_SHARE;
> >> +  atomic_store_rel_ptr(&lk->lk_lock, x);
> >
> > I think this function is meant to upgrade a shared (waiters/spinner)
> > lock to a non-shared (spinner) lock? I'm not confident about
> > correctness here. It looks to me like there can be a race condition
> > if multiple waiters attempt to upgrade in parallel: the race occurs
> > when the first waiter is preempted after loading `v = lk->lk_lock`
> > and another waiter then loads `v` before the store. There also
> > doesn't seem to be anything that prevents two separate upgrade calls
> > by different waiters in sequence. I could be wrong (of course).
>
> We do not support shared locks and we do not share specific paths through
> locks, so while the code may request a shared lock it is never actually
> shared.
>
OK, that is helpful.

> You could be correct and I welcome you reviewing the code in the FB kernel this
> is based on.
>
I took a quick look. The FreeBSD kernel spins and uses
atomic_fcmpset_ptr instead, which I think addresses exactly this
problem.
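Roughly, the pattern there looks like this (a sketch of the
compare-and-set idea only; tid = (uintptr_t)curthread and the
LK_SHARERS() helper are as in sys/kern/kern_lock.c, and the real code
also preserves the waiter flags):

  uintptr_t tid, v;

  tid = (uintptr_t)curthread;
  v = lk->lk_lock;
  for (;;) {
    /* Only the sole sharer may upgrade to exclusive. */
    if ((v & LK_SHARE) == 0 || LK_SHARERS(v) != 1)
      break;
    /* Atomically swap the single share for exclusive ownership;
       on failure v is reloaded by fcmpset and we retry. */
    if (atomic_fcmpset_ptr(&lk->lk_lock, &v, tid))
      return (0);
  }

That closes the window between reading lk_lock and storing the new
value.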

> >> +int
> >> +lockstatus(const struct lock *lk)
> >> +{
> >> +  uintptr_t v, x;
> >> +  int ret;
> >> +
> >> +  ret = LK_SHARED;
> >> +  x = lk->lk_lock;
> >> +
> >> +  if ((x & LK_SHARE) == 0) {
> >> +    v = rtems_bsd_mutex_owned(RTEMS_DECONST(struct lock_object*, &lk->lock_object));
> >> +    if (v)
> >> +      ret = LK_EXCLUSIVE;
> >> +    else
> >> +      ret = LK_EXCLOTHER;
> >> +  } else if (x == LK_UNLOCKED) {
> >> +    ret = 0;
> >> +  }
> >
> > there's no else {} here; is it possible that (x & LK_SHARE) != 0 and
> > x != LK_UNLOCKED? If so, ret falls through with its LK_SHARED
> > default, which may not be intended.
> >
> > Maybe just initialize ret = 0.
>
> Yes and thanks.
>
> Chris
>
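For completeness, the shape I had in mind just starts from 0 so every
path sets ret explicitly (a sketch against the hunk above):

int
lockstatus(const struct lock *lk)
{
  uintptr_t x;
  int ret;

  ret = 0;  /* unlocked unless proven otherwise */
  x = lk->lk_lock;

  if ((x & LK_SHARE) == 0) {
    /* Exclusive: report whether the caller is the owner. */
    if (rtems_bsd_mutex_owned(RTEMS_DECONST(struct lock_object *,
        &lk->lock_object)))
      ret = LK_EXCLUSIVE;
    else
      ret = LK_EXCLOTHER;
  } else if (x != LK_UNLOCKED) {
    ret = LK_SHARED;  /* shared bit set and not unlocked */
  }

  return ret;
}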

