I’m working on a project using the i.MX6 SoloX. The M4 core gathers data and feeds it to the A9 core for display. The data points arrive at the M4 asynchronously to each other and are each passed upstream as a single 32-bit value. Since some values are read several times per second, roughly 200 values/second get passed to the A9. The usual way to pass data between the cores is the Messaging Unit (MU). The problem with this is that each message raises an interrupt on the A9, with the accompanying context switching, just to deliver one 32-bit value.
I’m exploring the possibility of using a form of shared memory that both processors can access without using the MU. Two potential issues exist with this method: the A9 and the M4 trying to access the same location at the same time, and cache/data coherency.
What will happen if both cores try to access the same memory location at the same time? Is there a bus arbiter that prevents a problem, or would it generate a bus fault? In this project the M4 only ever writes to the location (never reads) and the A9 only reads. It is also not critical that the A9 get the very latest data, since the values on the screen are updated only ~15 times/sec. The data is used purely for monitoring; no control is done with it. I’m trying to see whether this scheme would work, especially without using a mutex.
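For a single-writer/single-reader setup like this, one word per data point is enough: a naturally aligned 32-bit store is single-copy atomic on both Cortex-M4 and Cortex-A9, so the reader can never see a torn value, only a slightly stale one. Below is a minimal sketch; the channel count is an assumption, and the `static` block stands in for a region of memory both cores would actually map (e.g. OCRAM):

```c
#include <stdint.h>

/* Single-writer / single-reader mailbox: one aligned 32-bit slot per
 * data point. In the real system this struct would be placed at a
 * fixed address visible to both cores; here it is a plain static
 * object so the sketch is self-contained. */
#define NUM_CHANNELS 64u   /* assumption: up to 64 monitored values */

typedef struct {
    volatile uint32_t value[NUM_CHANNELS];
} shared_block_t;

static shared_block_t shared;   /* stand-in for the shared region */

/* M4 side: the only writer. An aligned 32-bit store completes as a
 * single bus transaction, so no mutex is needed for one word. */
static void m4_publish(unsigned chan, uint32_t v)
{
    shared.value[chan] = v;
}

/* A9 side: the only reader. At worst it observes the previous value,
 * which is acceptable for a ~15 Hz display refresh. */
static uint32_t a9_sample(unsigned chan)
{
    return shared.value[chan];
}
```

Note this only holds for a single 32-bit word; if a data point ever grows to multiple words, a sequence counter or double buffer would be needed to avoid reading a half-updated record.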
For the second issue, the caches: the M4 could of course do a dcache clean (flush) of the line every time it writes, and the A9 a dcache invalidate of the line to get the latest data from memory, but this takes time and bus bandwidth. I know the A9 has a Snoop Control Unit (SCU) which keeps the A9 cache current when using DMA devices. Does this also work between the A9 and the M4, i.e. will it ensure coherency of data between the two cores?
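The manual clean/invalidate approach would look roughly like the sketch below. The two cache helpers are placeholders for whatever the BSP provides (on the M4 side a clean-by-address of the LMEM system cache line, on the A9 side a CP15 invalidate-by-MVA plus a barrier); they are stubbed out here so the sketch compiles standalone:

```c
#include <stdint.h>

/* Placeholder cache-maintenance hooks -- not real BSP functions.
 * Substitute the platform's own line-granular operations. */
static void m4_dcache_clean_line(const volatile void *addr)
{
    (void)addr;  /* real impl: LMEM cache clean by address */
}

static void a9_dcache_invalidate_line(const volatile void *addr)
{
    (void)addr;  /* real impl: CP15 DCIMVAC on the line, then DSB */
}

/* M4 writer: update the slot, then push the dirty line to memory
 * so the A9 can fetch the new value. */
static void publish_value(volatile uint32_t *slot, uint32_t v)
{
    *slot = v;
    m4_dcache_clean_line(slot);
}

/* A9 reader: drop any stale cached copy of the line before loading. */
static uint32_t read_value(volatile uint32_t *slot)
{
    a9_dcache_invalidate_line(slot);
    return *slot;
}
```

An alternative worth weighing against this per-access maintenance is to map the shared region as non-cacheable on both sides (e.g. Device or non-cacheable Normal memory in the A9 MMU tables), which removes the coherency problem entirely at the cost of uncached accesses to that region.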