rtems_message_queue_receive / rtems_event_receive issues
Catalin Demergian
demergian at gmail.com
Thu Oct 25 11:03:52 UTC 2018
This is really strange. If you use cpsid/cpsie around the append_cnt ++
and --, then append_cnt should never be > 1. If this is really the case,
then this looks like a processor bug.
-> No, after I saw that it didn't fix the problem I commented out the
dis/en, so the value 2 was obtained without the dis/en in the code.
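(As an aside, the instrumentation itself could be made race-free without
touching interrupts at all, e.g. with the GCC atomic builtins; a rough
sketch, where instrument_enter/instrument_leave are names I made up and
this is not what I actually ran:)

    #include <stdint.h>

    static uint32_t append_cnt;
    static uint32_t append_cnt_max;

    /* Atomically count a call in progress and update the observed
       maximum; on ARMv7-M these builtins compile to LDREX/STREX, so
       they are safe against interrupts and other contexts. */
    static void instrument_enter( void )
    {
      uint32_t now = __atomic_add_fetch( &append_cnt, 1, __ATOMIC_SEQ_CST );
      uint32_t max = __atomic_load_n( &append_cnt_max, __ATOMIC_RELAXED );

      while ( now > max && !__atomic_compare_exchange_n(
          &append_cnt_max, &max, now, false,
          __ATOMIC_RELAXED, __ATOMIC_RELAXED ) ) {
        /* The failed CAS reloaded max; retry until max >= now. */
      }
    }

    static void instrument_leave( void )
    {
      __atomic_sub_fetch( &append_cnt, 1, __ATOMIC_SEQ_CST );
    }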
On Thu, Oct 25, 2018 at 12:17 PM Sebastian Huber <
sebastian.huber at embedded-brains.de> wrote:
> On 25/10/2018 11:00, Catalin Demergian wrote:
> > Hi,
> > First, I would like to conceptually understand how a function as
> > simple as _Chain_Append_unprotected could fail.
>
> The chain operations fail if you try to append a node that is already on
> a chain or extract a node which is not on a chain.
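> For example, appending the same node twice corrupts the list. A
> minimal sketch, not taken from the code base:
>
>     #include <rtems/score/chainimpl.h>
>
>     Chain_Control q;
>     Chain_Node    n;
>
>     _Chain_Initialize_empty( &q );
>     _Chain_Append_unprotected( &q, &n );
>
>     /* n is already the last node, so old_last == n inside the second
>        append: it ends with n.next == n and n.previous == n, and a
>        walk from the head then loops on n forever without ever
>        reaching the tail. */
>     _Chain_Append_unprotected( &q, &n );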
>
> > I added a patch like this:
> > RTEMS_INLINE_ROUTINE void _Chain_Append_unprotected(
> >   Chain_Control *the_chain,
> >   Chain_Node    *the_node
> > )
> > {
> >   append_cnt++;
> >   if ( append_cnt > append_cnt_max )
> >     append_cnt_max = append_cnt;
> >
> >   Chain_Node *tail = _Chain_Tail( the_chain );
> >   Chain_Node *old_last = tail->previous;
> >
> >   the_node->next = tail;
> >   tail->previous = the_node;
> >   old_last->next = the_node;
> >   the_node->previous = old_last;
> >   append_cnt--;
> > }
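> >
> > A hypothetical variant (not what I actually ran) that would also
> > answer the same-chain question by recording which chain the
> > in-progress call was appending to:
> >
> >   static Chain_Control *append_in_progress;
> >   static uint32_t       same_chain_overlaps;
> >
> >   RTEMS_INLINE_ROUTINE void _Chain_Append_unprotected(
> >     Chain_Control *the_chain,
> >     Chain_Node    *the_node
> >   )
> >   {
> >     append_cnt++;
> >     if ( append_cnt > append_cnt_max )
> >       append_cnt_max = append_cnt;
> >
> >     /* Like append_cnt itself this bookkeeping is racy, but it is
> >        enough to see whether two in-flight calls ever target the
> >        same chain. */
> >     if ( append_in_progress == the_chain )
> >       same_chain_overlaps++;
> >     append_in_progress = the_chain;
> >
> >     Chain_Node *tail = _Chain_Tail( the_chain );
> >     Chain_Node *old_last = tail->previous;
> >
> >     the_node->next = tail;
> >     tail->previous = the_node;
> >     old_last->next = the_node;
> >     the_node->previous = old_last;
> >
> >     append_in_progress = NULL;
> >     append_cnt--;
> >   }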
> >
> > I could see append_cnt_max=2 in the output below, meaning at some
> > point there were two function calls in progress (I don't know if they
> > were for the same chain, as the patch I ran didn't check whether the
> > the_chain parameter was the same).
> > What scenario would make two calls possible at the same time?
>
> Some chain operations are protected by a mutex and not by interrupt
> disable/enable.
>
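> For illustration, a sketch of how two appends can be in flight at
> once even though each caller is "protected" (mutex_id and the two
> functions are made-up names):
>
>     /* Task context: the chain is guarded by a mutex, which does not
>        mask interrupts. */
>     void task_side( Chain_Control *chain, Chain_Node *node )
>     {
>       rtems_semaphore_obtain( mutex_id, RTEMS_WAIT, RTEMS_NO_TIMEOUT );
>       /* If an interrupt arrives while this append executes, irq_side()
>          runs in the middle of it and append_cnt briefly reads 2. */
>       _Chain_Append_unprotected( chain, node );
>       rtems_semaphore_release( mutex_id );
>     }
>
>     /* Interrupt context: typically operates on another chain; the
>        mutex above gives it no protection at all. */
>     void irq_side( Chain_Control *chain, Chain_Node *node )
>     {
>       _Chain_Append_unprotected( chain, node );
>     }
>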
> >
> > I have to say that I even tried to temporarily disable/enable
> > interrupts with __asm__ volatile ("cpsid i" : : : "memory") /
> > __asm__ volatile ("cpsie i" : : : "memory"), but I could still
> > reproduce it. Plus, I also reproduced it with append_errs=0 ...
> > I hope this doesn't break my theory :D
>
> This is really strange. If you use cpsid/cpsie around the append_cnt ++
> and --, then append_cnt should never be > 1. If this is really the case,
> then this looks like a processor bug.
>
> >
> > [/] # i
> > Instruction count for the last second is 215992979.
> > CPU load is 99.99%.
> > intr_cnt=3220
> > cond1=1
> > cond2=1
> > jiffies=1622210
> > dbg_ready_UI1=287402
> > dbg_ready_LOGT=374072
> > dbg_ready_ntwk=1622094084077
> > dbg_ready_SCtx=1621678898952
> > dbg_ready_SCrx=1621678862701
> > dbg_ready_SHLL=1622201084177
> > dbg_extract_UI1=67127037
> > dbg_extract_LOGT=67144458
> > dbg_extract_ntwk=1622094096292
> > dbg_extract_SCtx=1621678924213
> > dbg_extract_SCrx=1621678883552
> > dbg_extract_SHLL=1622200088846
> > append_errs=0
> > ready_queue_100_elems=1
> > append_cnt_max=2
> > [/] #
> > [/] #
> > [/] #
> > [/] #
> > [/] # assertion "first != _Chain_Tail( &ready_queues[ index ] )"
> > failed: file
> > "../../cpukit/../../../stm32f7/lib/include/rtems/score/schedulerpriorityimpl.h",
> > line 232, function: _Scheduler_priority_Ready_queue_first
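> >
> > (For context, the failing check sits in a function roughly like the
> > following -- my paraphrase of _Scheduler_priority_Ready_queue_first,
> > so treat the details as approximate. The priority bit map claims
> > some ready queue is non-empty, yet the chain at that index is
> > empty:)
> >
> >   Thread_Control *_Scheduler_priority_Ready_queue_first(
> >     Priority_bit_map_Control *bit_map,
> >     Chain_Control            *ready_queues
> >   )
> >   {
> >     Priority_Control index = _Priority_bit_map_Get_highest( bit_map );
> >     Chain_Node *first = _Chain_First( &ready_queues[ index ] );
> >
> >     /* Fires when the bit map and the ready queues disagree, e.g.
> >        after a corrupted append/extract on a ready chain. */
> >     _Assert( first != _Chain_Tail( &ready_queues[ index ] ) );
> >     return (Thread_Control *) first;
> >   }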
> >
> > *** PROFILING REPORT BEGIN PMC_APP ***
> > <ProfilingReport name="PMC_APP">
> > <PerCPUProfilingReport processorIndex="0">
> > <MaxThreadDispatchDisabledTime unit="ns">2</MaxThreadDispatchDisabledTime>
> > <MeanThreadDispatchDisabledTime unit="ns">1</MeanThreadDispatchDisabledTime>
> > <TotalThreadDispatchDisabledTime unit="ns">3369295</TotalThreadDispatchDisabledTime>
> > <ThreadDispatchDisabledCount>3369216</ThreadDispatchDisabledCount>
> > <MaxInterruptDelay unit="ns">0</MaxInterruptDelay>
> > <MaxInterruptTime unit="ns">0</MaxInterruptTime>
> > <MeanInterruptTime unit="ns">0</MeanInterruptTime>
> > <TotalInterruptTime unit="ns">0</TotalInterruptTime>
> > <InterruptCount>0</InterruptCount>
> > </PerCPUProfilingReport>
> > </ProfilingReport>
> > *** PROFILING REPORT END PMC_APP ***
> > Creating /etc/passwd and group with three usable accounts
> > root/pwd , test/pwd, rtems/NO PASSWORD, chroot/NO PASSWORD
> >
> > I may have to begin the integration again for 4.11.3 ... is there any
> > chance this might not reproduce in 4.11.3?
> > Are there any changes in this area?
>
> I don't think an update to 4.11.3 will solve your problem.
>
> --
> Sebastian Huber, embedded brains GmbH
>
> Address : Dornierstr. 4, D-82178 Puchheim, Germany
> Phone : +49 89 189 47 41-16
> Fax : +49 89 189 47 41-09
> E-Mail : sebastian.huber at embedded-brains.de
> PGP : Public key available on request.
>
> This message is not a business communication within the meaning of the EHUG.
>
>