Converting stack address to shared-memory object name

Utkarsh Rai utkarsh.rai60 at gmail.com
Thu Jul 9 15:46:15 UTC 2020


On Thu, Jul 9, 2020 at 8:57 PM Gedare Bloom <gedare at rtems.org> wrote:

> On Thu, Jul 9, 2020 at 9:24 AM Gedare Bloom <gedare at rtems.org> wrote:
> >
> > On Wed, Jul 8, 2020 at 10:08 PM Utkarsh Rai <utkarsh.rai60 at gmail.com>
> > wrote:
> > >
> > >
> > >
> > >
> > > On Wed, Jul 8, 2020 at 6:56 PM Gedare Bloom <gedare at rtems.org> wrote:
> > >>
> > >> On Wed, Jul 8, 2020 at 6:53 AM Sebastian Huber
> > >> <sebastian.huber at embedded-brains.de> wrote:
> > >> >
> > >> > On 08/07/2020 14:43, Utkarsh Rai wrote:
> > >> >
> > >> > > Hello,
> > >> > > For my GSoC project, I have to provide high-level APIs for sharing
> > >> > > isolated stacks.
> > >> > > The POSIX compliant high-level way of sharing stacks can be to
> create
> > >> > > a shared memory object of the stack to be shared through shm_open
> and
> > >> > > then mmap that to the address space of the current stack. My
> doubt is,
> > >> > > shm_open() takes the path-name of the shared memory object. Since
> this
> > >> > > is a high-level API, how does the user 'convert' the stack
> address to
> > >> > > a shared memory object name?
> > >> > Do we need any POSIX compatibility for this? What would you do in a
> > >> > POSIX environment? You first get some memory, then hand it over to
> > >> > shm_open() to get a file descriptor, then use the file descriptor in
> > >> > mmap(), then use this for pthread_attr_setstack() and whatever?
> > >>
> > >> Yes, but the way to name objects is not set by posix.
> > >>
> > >> We need to provide our own way of translating an address into a name.
> > >>
> > >> > >
> > >> > > Dr. Gedare mentioned that one way to deal with naming would be
> > >> > > something like what Mr. Sebastian has been doing with
> > >> > > specifications. From what I could gather, it is a hierarchical way
> > >> > > of representing objects (though I am not very sure I understand
> > >> > > this accurately). How can something like this be implemented for
> > >> > > naming stack addresses?
> > >> > I am not sure the specification of RTEMS is helpful in this context.
> > >>
> > >> I should have provided a little bit more guidance. I was thinking out
> > >> loud in yesterday's IRC meeting. My thought was more along the lines
> > >> of looking at how UIDs/naming should be done, and that specs had to
> > >> solve a naming problem. However, the static nature of specs is not
> > >> a great fit for this problem.
> > >>
> > >> Actually, what is a good model would be something like /proc or
> > >> Linux's sysfs. An IMFS filesystem that exports task information could
> > >> be used to name memory regions. (It could eventually supplant
> > >> task-based statistics reporting too.)
> > >>
> > >> Another idea I had though, which seems to have been lost in the
> > >> shuffle, is to look at how the object names work in RTEMS and see if
> > >> we can add some fixed relationships, e.g., task_name # stack.
> > >>
> > >> I think we should start by just treating the entire task stack as a
> > >> single named object; either it is all shared, or none of it is shared.
> > >> This will be easier to implement and also more widely supported by
> > >> simpler MPU/MMU hardware. Later on, we can consider extending the
> > >> namespace with 'offsets': /taskfs/IDLE/stack/00000A28 could be the
> > >> location at byte offset 0xA28 from the start of the IDLE task's
> > >> stack.
> > >>
> > >
> > > I have a few questions -
> > >
> > > > Users would get the stack address of the stack they want to share
> > > > through pthread_attr_getstack(). Now, when they get the address they
> > > > want to share, they would pass the appropriate name of this memory
> > > > region. What we have to provide is a mechanism to 'convert' this
> > > > address to an appropriate name. Is this the accepted way, or is it
> > > > the other way round, i.e. the user passes a name as per a specified
> > > > convention, and that name is 'converted' to a specific address?
> > >
> > We may want both to work. You definitely want to have the
> > address->name working though, at the very least with the base address
> > returned by pthread_attr_getstack, but you might also want to be able
> > to map any address in a task's stack to the stack's "name". I'm not
> > sure if that is needed yet, but keep it in mind as a possible
> > extension later to use an address interval instead of a fixed base
> > address.
> >
>
> One more clarification: the "name to address" conversion should be
> done within the shm+mmap implementation. shm_open() takes a name and
> returns an fd; mmap() takes an fd and returns an address.
>

Got it.
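
So, just to make sure I follow, the user-side flow would look roughly
like this (a rough sketch only; the name "/taskfs/worker/stack" is a
placeholder for whatever convention we settle on, and error handling is
omitted):

    #include <fcntl.h>
    #include <pthread.h>
    #include <sys/mman.h>
    #include <unistd.h>

    #define STACK_SIZE (64 * 1024)

    void *share_task_stack(void)
    {
        /* Name -> fd: open the shared-memory object for the stack. */
        int fd = shm_open("/taskfs/worker/stack", O_RDWR | O_CREAT, 0600);
        ftruncate(fd, STACK_SIZE);

        /* fd -> address: map the stack into this address space. */
        void *stack = mmap(NULL, STACK_SIZE, PROT_READ | PROT_WRITE,
                           MAP_SHARED, fd, 0);
        close(fd);

        /* Use the mapping as the stack of a new thread. */
        pthread_attr_t attr;
        pthread_attr_init(&attr);
        pthread_attr_setstack(&attr, stack, STACK_SIZE);
        /* ... pthread_create() with &attr ... */
        return stack;
    }

so the name -> address conversion stays hidden inside shm_open()/mmap(),
as you said.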


> > > > When you say "treating the entire task stack as a single named
> > > > object", does it mean that we assign a single name, say
> > > > "task_stack", to the complete stack address space? In that case, how
> > > > do we deal with the presence of multiple tasks whose stacks are
> > > > allocated from the same pool? I understand that on simpler MPU/MMU
> > > > hardware it would make sense to specify names for each memory
> > > > section (.text - "text", .bss - "bss", etc.), but in this case,
> > > > where we are sharing only selected thread stacks, I suppose we will
> > > > have to have a way to handle 'offsets' right from the start?
> > >
> >
> > No, I'm thinking one name for each task's stack. If you have 10 tasks,
> > you'd have 10 names.


Ok, so that means we can have naming like this: for an idle thread stack
address we have /taskfs/IDLE, and for a POSIX thread stack address we
have /taskfs/<thread_id>, where we maintain a table mapping each name to
its corresponding address?
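
Something along these lines, maybe (all the names and types here are
made up, just to make the question concrete):

    #include <stddef.h>
    #include <string.h>

    #define MAX_TASKS 10

    typedef struct {
        char   name[32]; /* e.g. "/taskfs/IDLE" or "/taskfs/<thread_id>" */
        void  *base;     /* base address of the task's stack */
        size_t size;     /* size of the task's stack */
    } stack_entry;

    static stack_entry stack_table[MAX_TASKS];

    /* Name -> address, for the shm_open()/mmap() path. */
    void *stack_lookup_by_name(const char *name)
    {
        for (size_t i = 0; i < MAX_TASKS; ++i)
            if (strcmp(stack_table[i].name, name) == 0)
                return stack_table[i].base;
        return NULL;
    }

    /* Address -> name: an interval check, so that any address inside a
     * task's stack maps to that stack's name (the extension you
     * mentioned earlier). */
    const char *stack_lookup_by_addr(const void *addr)
    {
        for (size_t i = 0; i < MAX_TASKS; ++i) {
            const char *lo = stack_table[i].base;
            const char *p  = addr;
            if (lo != NULL && p >= lo && p < lo + stack_table[i].size)
                return stack_table[i].name;
        }
        return NULL;
    }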

>
> > Each allocated task stack is logically a separate region within the
> > pool. For simple MPU hardware, it may not be possible to share
> > arbitrary task stacks, but in that case the implementation can just
> > ignore the name and share the entire pool if that is preferred, or
> > return an error. (The behavior could be configurable, maybe.)
> >
> > >>
> > >> Gedare
>