Using AMP in RTEMS 4.10, issues using global message queues
Sebastian Huber
sebastian.huber at embedded-brains.de
Fri Dec 14 06:31:21 UTC 2018
Hello Diego,
On 13/12/2018 13:31, Diego Mercado wrote:
> Hello,
> I am using AMP in RTEMS 4.10 and using message queues to share data
> between nodes. I had no problems using queues until I added a task that
> makes several rtems_message_queue_receive calls; after a while the node
> that receives from the queue crashes (using the debugger I saw it
> crashed in "rtems_mpci_entry Shm_Get_packet()"). There are no issues
> when receiving messages if I set the "no wait" option:
> rtems_message_queue_receive(rtemsId, data, size, RTEMS_NO_WAIT, 0);
> But if I set a wait of, for example, 1 second:
> rtems_message_queue_receive(rtemsId, data, size, RTEMS_WAIT, 100);
> then the node that is blocked on the queue reception stops running
> after several calls to rtems_message_queue_receive. With a shared
> memory area of 0x1000 bytes, rtems_message_queue_receive can be called
> 9 times and it stops on the next call. With 0x2000 bytes: 19 calls and
> it crashes on the following one. With 0x3000 bytes: 29 calls and then
> it stops. It seems that the shared memory area fills up?
>
> I do not see any errors; the program just stops running (the other
> node runs correctly).
> With local queues we have no problems.
> Do you have any idea why this is happening?
>
> thanks in advance
> Diego
>
> /* Override default SHM configuration */
> shm_config_table BSP_shm_cfgtbl =
> { .base = (void *) 0x40000000, .length = 0x1000 };
I think you have a very specific problem, and it is unlikely that someone
on the mailing list has faced this before. I would try to unit test the
packet transfer mechanism and use a debugger to figure out what is going on.
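
For example, a minimal receive-side test could look like the sketch below.
This is just a sketch: the queue name, message size, and timeout are
placeholders, and it assumes the queue was created on the other node with
the RTEMS_GLOBAL attribute.

#include <rtems.h>
#include <rtems/bspIo.h>

/* Hypothetical name of the global queue created on the other node. */
#define QUEUE_NAME rtems_build_name('G', 'Q', ' ', '1')

void stress_receive(void)
{
  rtems_id          id;
  rtems_status_code sc;
  char              buffer[16];
  size_t            size;
  uint32_t          i;

  /* Find the queue, searching all nodes. */
  sc = rtems_message_queue_ident(QUEUE_NAME, RTEMS_SEARCH_ALL_NODES, &id);
  if (sc != RTEMS_SUCCESSFUL) {
    printk("ident failed: %d\n", (int) sc);
    return;
  }

  for (i = 1; ; ++i) {
    /* Print before the call, so the last number visible before the
       hang tells you how many receives succeeded. */
    printk("receive #%u\n", (unsigned) i);
    size = sizeof(buffer);
    sc = rtems_message_queue_receive(id, buffer, &size, RTEMS_WAIT, 100);
    if (sc != RTEMS_SUCCESSFUL && sc != RTEMS_TIMEOUT) {
      printk("receive failed: %d\n", (int) sc);
      break;
    }
  }
}

If the counter always stops after the same number of iterations, setting a
breakpoint in Shm_Get_packet() at that point and inspecting the free packet
list should show whether packets are ever returned to the pool.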
--
Sebastian Huber, embedded brains GmbH
Address : Dornierstr. 4, D-82178 Puchheim, Germany
Phone : +49 89 189 47 41-16
Fax : +49 89 189 47 41-09
E-Mail : sebastian.huber at embedded-brains.de
PGP : Public key available on request.
This message is not a business communication within the meaning of the EHUG.