<div dir="ltr"><div dir="ltr"><br></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Wed, Jun 10, 2020 at 5:57 PM Chris Johns <<a href="mailto:chrisj@rtems.org">chrisj@rtems.org</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">On 11/6/20 9:30 am, Jonathan Brandmeyer wrote:<br>
> We've patched the RTEMS kernel in order to support using the Zynq on-chip memory<br>
> as inner-cacheable memory. The enclosed patch should apply cleanly to master.<br>
> <br>
> Background: During normal startup, the ROM bootloader performs vendor-specific<br>
> initialization of core 1, and then sits in a wait-for-event loop until a<br>
> special value has been written to a specific address in OCM. In that state, the<br>
> MMU has not yet been initialized and core 1 is treating OCM as Device memory.<br>
> <br>
> By the time the RTEMS boot gets to _CPU_SMP_Start_processor, core 0's MMU has<br>
> already been initialized with the application-defined memory map. I'd like to<br>
> use the on-chip memory as inner-cacheable memory in my application. To ensure<br>
> that the kick-address write actually becomes visible to core 1, the affected<br>
> cache line must be flushed before sending the event that wakes the other core.<br>
<br>
Have the patches been tested with the OCM in the default state?<br></blockquote><div><br></div><div>Yes. Performing a cache flush by virtual address to a line that has Device memory attributes appears to be harmless.<br></div><div> <br></div></div>-- <br><div dir="ltr" class="gmail_signature">Jonathan Brandmeyer<br>PlanetiQ</div></div>
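<div dir="ltr"><div>For readers following along, the sequence described in the quoted message can be sketched roughly as below. This is an illustration, not the actual patch: the kick-address value (reportedly 0xFFFFFFF0 on Zynq-7000, per Xilinx UG585) and the flush routine (rtems_cache_flush_multiple_data_lines() in RTEMS) are assumptions named in the comments, and the flush and event-send steps are injected as callbacks so the ordering can be exercised off-target.</div><div><br></div>

```c
#include <stddef.h>
#include <stdint.h>

/* Sketch of the core-1 wake-up sequence described in the thread (not the
 * submitted patch itself).  On Zynq-7000 the second core spins in a
 * wait-for-event loop until a start address appears at a well-known OCM
 * location (reportedly 0xFFFFFFF0, see Xilinx UG585).  Once core 0's MMU
 * maps OCM as inner-cacheable, the store may sit in core 0's D-cache, so
 * the affected line must be flushed before SEV is issued.  In RTEMS the
 * flush would be rtems_cache_flush_multiple_data_lines(); on hardware the
 * event step is a DSB followed by SEV.  Both are passed in as callbacks
 * here so the ordering can be tested on a host build. */

typedef void (*flush_fn)(const void *addr, size_t n);
typedef void (*sev_fn)(void);

static void kick_secondary_core(volatile uint32_t *kick_addr,
                                uint32_t entry_point,
                                flush_fn flush, sev_fn send_event)
{
  *kick_addr = entry_point;                      /* store core 1's entry   */
  flush((const void *)(uintptr_t)kick_addr,      /* push line out of cache */
        sizeof(*kick_addr));
  send_event();                                  /* DSB + SEV on hardware  */
}
```

<div>The key property, and the reason for the patch, is that the flush happens between the store and the event: without it, core 1 (still treating OCM as Device memory, cache off) could observe a stale value after being woken.</div></div>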