<meta http-equiv="Content-Type" content="text/html; charset=utf-8"><div dir="ltr"><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Mon, Apr 3, 2023 at 8:00 PM Chris Johns <<a href="mailto:chrisj@rtems.org">chrisj@rtems.org</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">On 31/3/2023 8:13 am, Kinsey Moore wrote:<br>
> Xilinx wrote their A53 HAL with the assumption that the CPU did not<br>
> support cache invalidation without a flush, so the flush and<br>
> invalidation functions were combined and all range invalidations are<br>
> promoted to flush/invalidate. The implementation written for lwIP was<br>
> written to the original intent of the function and thus was not flushing<br>
> in some cases when it needed to. This resolves that issue, preventing<br>
> DMA transmit errors in some cases.<br>
> ---<br>
> rtemslwip/zynqmp/xil_shims.c | 7 ++++++-<br>
> 1 file changed, 6 insertions(+), 1 deletion(-)<br>
> <br>
> diff --git a/rtemslwip/zynqmp/xil_shims.c b/rtemslwip/zynqmp/xil_shims.c<br>
> index 2eda0c5..1b1b3cf 100644<br>
> --- a/rtemslwip/zynqmp/xil_shims.c<br>
> +++ b/rtemslwip/zynqmp/xil_shims.c<br>
> @@ -102,7 +102,12 @@ void XScuGic_DisableIntr ( u32 DistBaseAddress, u32 Int_Id )<br>
> rtems_interrupt_vector_disable( Int_Id );<br>
> }<br>
> <br>
> +/*<br>
> + * The Xilinx code was written such that it assumed there was no invalidate-only<br>
> + * functionality on A53 cores. This function must flush and invalidate because<br>
> + * of how they mapped things.<br>
> + */<br>
> void Xil_DCacheInvalidateRange( INTPTR adr, INTPTR len )<br>
> {<br>
> - rtems_cache_invalidate_multiple_data_lines( (const void *) adr, len );<br>
> + rtems_cache_flush_multiple_data_lines( (const void *) adr, len );<br>
> }<br>
<br>
Does the Xilinx code use Xil_DCacheInvalidateRange in any DMA receive paths? If<br>
it does, is this change correct, given that the invalidate has been removed?<br></blockquote><div><div dir="ltr"><br></div><div dir="ltr">It just so happens that, the way the
code was written, a flush and invalidate works fine for the receive
path as well. The invalidation in the receive path happens before the
pointer to the memory is passed to the DMA engine, so a flush at that point doesn't hurt anything (at least for this particular driver). If more Xilinx drivers get pulled in, that may need to be reevaluated.</div><div dir="ltr"><br></div><div>Kinsey<br></div> </div></div></div>