Thread 2: Need help in understanding existing RTEMS code
Richi Dubey
richidubey at gmail.com
Sat Aug 1 07:45:57 UTC 2020
I understand. Thank you.
On Fri, Jul 31, 2020 at 9:00 PM Gedare Bloom <gedare at rtems.org> wrote:
> On Fri, Jul 31, 2020 at 5:48 AM Richi Dubey <richidubey at gmail.com> wrote:
> >
> > Thank you for your answer. I learned about ack today and it is coming in pretty
> handy along with cscope.
> >
> Note that 'grep' is more reliable, but takes a bit more time to run.
> 'ack' filters the files that it checks, so it can miss things
> sometimes.
>
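
[For comparison, the unfiltered alternative mentioned above can be run in the same way as the ack command shown below; this one-liner is an illustration, not part of the original exchange. Plain grep does not filter by file type, so it checks every file in the tree at the cost of a slower run:

rtems$ grep -rn Scheduler_EDF_SMP_Context cpukit
]
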
> > On Thu, Jul 30, 2020 at 9:32 PM Gedare Bloom <gedare at rtems.org> wrote:
> >>
> >> On Thu, Jul 30, 2020 at 7:30 AM Richi Dubey <richidubey at gmail.com>
> wrote:
> >> >
> >> > I would appreciate some help with my earlier question:
> https://lists.rtems.org/pipermail/devel/2020-July/060615.html since I may
> reuse this logic of variable-sized arrays.
> >> >
> >>
> >> I guess I'll answer here...
> >>
> >> rtems$ ack Scheduler_EDF_SMP_Context
> >> cpukit/score/src/scheduleredfsmp.c
> >> 24:static inline Scheduler_EDF_SMP_Context *
> >> 27: return (Scheduler_EDF_SMP_Context *) _Scheduler_Get_context(
> scheduler );
> >> 30:static inline Scheduler_EDF_SMP_Context *
> >> 33: return (Scheduler_EDF_SMP_Context *) context;
> >> 63: Scheduler_EDF_SMP_Context *self =
> >> 100: Scheduler_EDF_SMP_Context *self = _Scheduler_EDF_SMP_Get_self(
> context );
> >> 121: Scheduler_EDF_SMP_Context *self,
> >> 143: Scheduler_EDF_SMP_Context *self;
> >> 192: Scheduler_EDF_SMP_Context *self,
> >> 201: const Scheduler_EDF_SMP_Context *self,
> >> 220: Scheduler_EDF_SMP_Context *self;
> >> 241: Scheduler_EDF_SMP_Context *self;
> >> 284: Scheduler_EDF_SMP_Context *self;
> >> 307: Scheduler_EDF_SMP_Context *self;
> >> 370: Scheduler_EDF_SMP_Context *self;
> >> 586: Scheduler_EDF_SMP_Context *self;
> >>
> >> cpukit/include/rtems/score/scheduleredfsmp.h
> >> 106:} Scheduler_EDF_SMP_Context;
> >>
> >> cpukit/include/rtems/scheduler.h
> >> 133: Scheduler_EDF_SMP_Context Base; \
> >>
> >> That last one is part of an allocation.
> >>
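
[A minimal sketch of that kind of allocation pattern, with made-up names; the real declarations live in cpukit/include/rtems/score/scheduleredfsmp.h and cpukit/include/rtems/scheduler.h. The context type ends in a variable-sized array, and the configuration code embeds it as "Base" immediately followed by a fixed-size array, so the trailing array gets backing storage sized at configuration time:

/* Illustrative sketch only; names are invented, not the real RTEMS types.
 * Assumes GCC's zero-length array extension (RTEMS wraps this as
 * RTEMS_ZERO_LENGTH_ARRAY). */
typedef struct {
  int            fixed_state;        /* fixed-size part of the context     */
  unsigned long  Ready_queue[ 0 ];   /* variable-sized part, no storage    */
} Example_Context;

/* The configuration header provides the storage: the context is embedded
 * as "Base" and immediately followed by a fixed-size array, so
 * Base.Ready_queue[] ends up backed by that array.  EXAMPLE_CPU_MAXIMUM
 * stands in for a configuration constant such as
 * CONFIGURE_MAXIMUM_PROCESSORS. */
#define EXAMPLE_CPU_MAXIMUM 4

static struct {
  Example_Context  Base;
  unsigned long    Ready_queue[ EXAMPLE_CPU_MAXIMUM + 1 ];
} example_scheduler_context;
]
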
> >> > Thank you.
> >> >
> >> > On Sat, Jul 18, 2020 at 6:29 PM Richi Dubey <richidubey at gmail.com>
> wrote:
> >> >>
> >> >> This information helps. Thank you.
> >> >>
> >> >> On Fri, Jul 17, 2020 at 6:31 PM Sebastian Huber
> >> >> <sebastian.huber at embedded-brains.de> wrote:
> >> >> >
> >> >> > On 17/07/2020 14:22, Richi Dubey wrote:
> >> >> >
> >> >> > > I found the line in the documentation: "Since the processor
> assignment
> >> >> > > is independent of the thread priority the processor indices may
> move
> >> >> > > from one state to the other."
> >> >> > >
> >> >> > > This is true because the processor assignment is done by the
> scheduler
> >> >> > > and it gets to choose whether to allocate a processor to the highest
> priority thread
> >> >> > > or not. Right? So if it wants to allocate a processor to the lowest
> >> >> > > priority (max. priority number) thread, it can do so?
> >> >> > Yes, the scheduler can use whatever criteria it wants to allocate a
> >> >> > processor to the threads it manages.
> >> >> > >
> >> >> > > How is the priority of a node different from the priority of its
> >> >> > > thread? How do these two priorities relate to each other?
> >> >> > A thread has not only one priority. It has at least one priority
> per
> >> >> > scheduler instance. With the locking protocols it may also inherit
> >> >> > priorities of other threads. A thread has a list of trees of trees
> of
> >> >> > priorities.
> >> >
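
[To make the structure described above a little more concrete, here is a minimal sketch with invented names; it is not the real RTEMS score code (which uses Thread_Control, Scheduler_Node and the priority aggregation types), but it shows how a thread ends up with at least one priority per scheduler instance plus inherited contributions:

/* Simplified, invented types; only meant to illustrate the shape of the
 * data, not the real RTEMS structures. */
typedef struct Example_Priority_Node {
  unsigned long                  priority;      /* one contributed priority value        */
  struct Example_Priority_Node  *first_child;   /* priorities inherited through this one */
  struct Example_Priority_Node  *next_sibling;
} Example_Priority_Node;

typedef struct {
  /* Root of a tree of priority contributions: the thread's own priority
   * plus priorities inherited via locking protocols. */
  Example_Priority_Node *root;
} Example_Priority_Aggregation;

typedef struct {
  /* One node per scheduler instance; each node has its own aggregation,
   * so the thread has at least one priority per scheduler instance. */
  Example_Priority_Aggregation Priority;
} Example_Scheduler_Node;

typedef struct {
  unsigned int            scheduler_node_count;
  Example_Scheduler_Node *scheduler_nodes;   /* one entry per scheduler instance */
} Example_Thread;
]
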