MPCI not returning memory

Andres Monteverde amonteverde at invap.com.ar
Thu Jan 31 12:42:00 UTC 2019


Hi Sebastian,

Thank you for your prompt response.

> I don't think this is correct. If you call _MPCI_Send_request_packet(),
> then the ownership of this packet is transferred to the implementation
> of this function. With the shared memory driver, the packet is freed by
> the receiver, see Shm_Free_envelope().

If you think it is not correct, we can move the returning of the
packet inside _MPCI_Send_request_packet(). The reason we return the
packet outside _MPCI_Send_request_packet() is that we were focused on
the queue behaviour, and a change inside _MPCI_Send_request_packet()
is a broader solution with an unknown effect on the test.
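
For reference, the change we tried in _Message_queue_MP_Send_request_packet()
(cpukit/rtems/src/msgmp.c) is the diff quoted further down; rewritten as a
sketch it looks roughly like this. The arguments of
_MPCI_Send_request_packet() are left unchanged and elided here, so this is an
outline rather than a compilable excerpt:

      /* Sketch only: keep the status returned by _MPCI_Send_request_packet()
       * instead of returning it directly ...                                */
      rtems_status_code result;

      result = (rtems_status_code) _MPCI_Send_request_packet(
        /* ... unchanged arguments from msgmp.c ... */
      );

      /* ... so that the requesting node hands the packet back itself. */
      _MPCI_Return_packet( &the_packet->Prefix );

      return result;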

Regarding "the packet is freed by the receiver, see
 Shm_Free_envelope()", by looking at the code and running the code step 
by step with jtag, the core0 (the owner of the queue) does not return 
the packet in any case.
The core0 MPCI handler for the receive request is:

_Message_queue_MP_Process_packet()
case MESSAGE_QUEUE_MP_RECEIVE_REQUEST:

      the_packet->Prefix.return_code = rtems_message_queue_receive(
        the_packet->Prefix.id,
        the_packet->Buffer.buffer,
        &the_packet->size,
        the_packet->option_set,
        the_packet->Prefix.timeout
      );

      if ( the_packet->Prefix.return_code != RTEMS_PROXY_BLOCKING )
        _Message_queue_MP_Send_response_packet(
          MESSAGE_QUEUE_MP_RECEIVE_RESPONSE,
          the_packet->Prefix.id,
          _Thread_Executing
        );
      break;
 
The core 1 MPCI handler for the receive response is:

_Message_queue_MP_Process_packet()
case MESSAGE_QUEUE_MP_RECEIVE_RESPONSE:

      the_thread = _MPCI_Process_response( the_packet_prefix );

      if (the_packet->Prefix.return_code == RTEMS_SUCCESSFUL) {
        *(size_t *) the_thread->Wait.return_argument =
           the_packet->size;

        _CORE_message_queue_Copy_buffer(
          the_packet->Buffer.buffer,
          the_thread->Wait.return_argument_second.mutable_object,
          the_packet->size
        );
      }

      _MPCI_Return_packet( the_packet_prefix );
      break;

As far as I understand, it is core 1 that frees the packet.
And in the case of a timeout while waiting at the queue, there is no response
at all from core 0, so neither the allocated packet nor the proxy is ever
returned.

> I don't know the MPCI code well enough. It would help if you have a
> self-contained test case which runs with PSIM (the PowerPC GDB built-in
> simulator).

Attached to this mail is the code of the sample app that generates this
situation; the most important part is after line 186. It is a very simple
producer-consumer queue example.
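
For reference, the consumer side boils down to something like the sketch
below (names, message sizes and the queue id handling are illustrative only,
the real code is in the attachment):

#include <rtems.h>

/* Consumer loop on core 1: blocking receive with a one second timeout on a
 * message queue that lives on core 0.  Every RTEMS_TIMEOUT is where we see
 * one MPCI packet (and one proxy on core 0) that is never given back. */
static void consumer_loop( rtems_id remote_queue_id )
{
  uint8_t           buffer[ 16 ];
  size_t            size;
  rtems_status_code sc;

  while ( 1 ) {
    sc = rtems_message_queue_receive(
      remote_queue_id,
      buffer,
      &size,
      RTEMS_WAIT,
      RTEMS_MILLISECONDS_TO_TICKS( 1000 )  /* one second timeout */
    );

    if ( sc == RTEMS_TIMEOUT ) {
      /* After about eight of these, core 1 stops with the internal error
       * raised in _MPCI_Get_packet(). */
      continue;
    }
  }
}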

We have found a workaround for this problem: create the message queue on the
consumer side. If you do that, the timeout always applies to a local object,
which avoids the proxy wait and all the problems related to the timeout
management.
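
A minimal sketch of the workaround, assuming the usual classic API
multiprocessing configuration (the names, sizes and function layout below are
illustrative, not taken from the attached application):

#include <rtems.h>

/* Core 1 (the consumer) creates the queue locally but marks it global, so
 * the receive with timeout always operates on a local object and no proxy
 * or MPCI receive-request packet is involved. */
static rtems_id create_queue_on_consumer( void )
{
  rtems_id          queue_id;
  rtems_status_code sc;

  sc = rtems_message_queue_create(
    rtems_build_name( 'M', 'S', 'G', 'Q' ),
    4,                          /* maximum pending messages  */
    16,                         /* maximum message size      */
    RTEMS_GLOBAL | RTEMS_FIFO,  /* visible to the other node */
    &queue_id
  );
  (void) sc;

  return queue_id;
}

/* Core 0 (the producer) looks the queue up on the other node once and then
 * sends to it; only the send crosses the MPCI, the timeout stays on core 1. */
static void producer_send( const void *message, size_t size )
{
  rtems_id          remote_queue_id;
  rtems_status_code sc;

  sc = rtems_message_queue_ident(
    rtems_build_name( 'M', 'S', 'G', 'Q' ),
    RTEMS_SEARCH_ALL_NODES,
    &remote_queue_id
  );

  if ( sc == RTEMS_SUCCESSFUL ) {
    (void) rtems_message_queue_send( remote_queue_id, message, size );
  }
}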



>>> Sebastian Huber <sebastian.huber at embedded-brains.de> 01/30/19 3:25 AM >>>
On 29/01/2019 20:32, Andres Monteverde wrote:
> Hi, my name is Andres and the purpose of this mail is to ask about a 
> situation we're dealing with and a possible solution.
> We are using rtems 4.10.2 in leon3 (gr712) AMP configuration with a 
> shared memory space of 0x1000 bytes.
> Our system consists of two applications. One of them is the owner of a
> global queue and the other is the consumer with a timeout.
> The app running on core 0 sends a message every five seconds and the 
> app running on core 1 receives them with a timeout of one second.
> After eight timeouts, the application running in core 1 stops due to an
> internal error.
>
> Analysing the operations of the OS, we've seen that the internal error
> is generated by the lack of free shared memory, and it happens inside 
> the function _MPCI_Get_packet().
> And we believe that the lack of free memory is because previously 
> timed-out MPCI packets are not returned.
> The same seems to happen with the proxies allocated in case of a
> timeout.
> If we perform the changes below, the test app runs indefinitely
> without any error.
> What's your opinion about this?
>
>
> diff -r basicsw_el1618_rtems_4.10/rtems_4.10/cpukit/rtems/src/msgmp.c 
> original/basicsw_el1618_rtems_4.10/rtems_4.10/cpukit/rtems/src/msgmp.c
> 174c174
> <       rtems_status_code result = (rtems_status_code) 
> _MPCI_Send_request_packet(
> ---
> >       return (rtems_status_code) _MPCI_Send_request_packet(
> 179,184c179
> <
> <       _MPCI_Return_packet(&the_packet->Prefix);
> <
> <       return result;
> <
> <        break;
> ---
> >       break;

I don't think this is correct. If you call _MPCI_Send_request_packet(), 
then the ownership of this packet is transferred to the implementation 
of this function. With the shared memory driver, the packet is freed by 
the receiver, see Shm_Free_envelope().

> diff -r basicsw_el1618_rtems_4.10/rtems_4.10/cpukit/score/src/threadqtimeout.c
> original/basicsw_el1618_rtems_4.10/rtems_4.10/cpukit/score/src/threadqtimeout.c
> 51d50
> <         break;
> 55c54
> <         break;
> ---
> >       break;
> 57,61c56,58
> <         //todo check for critical section

This is an important TODO. The code below doesn't work under all
conditions.

> <         the_thread->Wait.return_code = the_thread->Wait.queue->timeout_status;
> <         _Thread_queue_Extract_with_proxy(the_thread);
> <         _Thread_Unnest_dispatch();
> <         break;
> ---
> >       _Thread_queue_Process_timeout( the_thread );
> >       _Thread_Unnest_dispatch();
> >       break;

I don't know the MPCI code well enough. It would help if you have a 
self-contained test case which runs with PSIM (the PowerPC GDB built-in 
simulator).

-- 
Sebastian Huber, embedded brains GmbH

Address : Dornierstr. 4, D-82178 Puchheim, Germany
Phone   : +49 89 189 47 41-16
Fax     : +49 89 189 47 41-09
E-Mail  : sebastian.huber at embedded-brains.de
PGP     : Public key available on request.

This message is not a business communication within the meaning of the EHUG.




