
How to use the D cache in an MPC5748G multi-core MCU

Question asked by Peter Vranken on Oct 21, 2018

Dear NXP Team,

 

I'm starting development with an MPC5748G device. The cores run with
both I and D caches enabled. I would like to understand which
possibilities I have to implement inter-core communication.

 

1) The easiest way seems to be using dedicated uncached memory areas together
with some critical-section mechanism (using the semaphores) or memory barriers
(using mbar), depending on the kind of data flow. This sounds straightforward,
but where are the unexpected pitfalls?
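To make question 1) concrete, here is the kind of critical-section protected access I have in mind. This is only a sketch: the spinlock below uses C11 atomics so it runs on a host; on the real device the lock would be a SEMA4 gate and the struct would be placed in a cache-inhibited memory area. All names are illustrative.

```c
#include <stdatomic.h>

/* Shared region: on the MPC5748G this would be linked into a memory
   area configured as cache-inhibited; here it is ordinary memory so
   the sketch compiles and runs on a host. */
typedef struct {
    atomic_flag lock;   /* stands in for a hardware semaphore gate */
    int payload[4];
} shared_region_t;

static shared_region_t g_shared = { .lock = ATOMIC_FLAG_INIT };

/* Enter the critical section. A real implementation would write the
   core's ID into the semaphore gate register and poll until it sticks. */
static void cs_enter(shared_region_t *r)
{
    while (atomic_flag_test_and_set_explicit(&r->lock, memory_order_acquire))
        ;  /* spin */
}

static void cs_leave(shared_region_t *r)
{
    atomic_flag_clear_explicit(&r->lock, memory_order_release);
}

/* Producer side: update the payload atomically with respect to readers
   on the other core. */
void producer_write(int v)
{
    cs_enter(&g_shared);
    for (int i = 0; i < 4; ++i)
        g_shared.payload[i] = v + i;
    cs_leave(&g_shared);
}
```

The point of the uncached region is that neither core's D cache can hold a stale copy of `payload`, so the lock alone is enough to make the update consistent.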

 

2) My understanding of the D cache is that a writing core puts the data
into its cache and into the main memory behind it at the same time
(write-through). If this is right, it would become possible to safely
implement a uni-directional data flow through a shared memory that is
accessed with D cache on the producer side and without cache on the
consumer side. Right?

 

Is the secondary store into main memory done immediately, in the same
write cycle as the update of the cache contents? Or is the store into
main memory subject to some buffering and flush strategy, so that the
ordering of the stores in main memory could differ from the ordering of
the primary stores into the cache? This concern leads to the next
question:

 

If we use a memory-barrier based notification (e.g. first update the
payload data, then execute a barrier, finally update the notification
flag), will the guarantee that the CPU first completely writes the data
and only then writes the flag still hold for main memory (i.e. for the
secondarily written storage)?
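The notification scheme I mean looks like the sketch below. C11 release/acquire fences stand in for the mbar on the e200 cores so that the sketch runs on a host; whether the real barrier also orders the write-through stores as they arrive in main memory is exactly my question.

```c
#include <stdatomic.h>
#include <stdbool.h>
#include <stddef.h>

/* Uni-directional mailbox: the producer fills the payload and only
   then raises the flag; the barrier keeps the two steps ordered. */
typedef struct {
    int         data[8];
    atomic_bool ready;
} mailbox_t;

static mailbox_t g_mb;  /* zero-initialized: ready == false */

void mb_send(const int *src, size_t n)
{
    for (size_t i = 0; i < n; ++i)
        g_mb.data[i] = src[i];
    /* Barrier: all payload stores must become visible before the
       flag store (an mbar on the e200, modeled here with C11). */
    atomic_thread_fence(memory_order_release);
    atomic_store_explicit(&g_mb.ready, true, memory_order_relaxed);
}

/* Consumer: returns true and copies the payload once the flag is set. */
bool mb_try_receive(int *dst, size_t n)
{
    if (!atomic_load_explicit(&g_mb.ready, memory_order_relaxed))
        return false;
    /* Pairing barrier on the consumer side. */
    atomic_thread_fence(memory_order_acquire);
    for (size_t i = 0; i < n; ++i)
        dst[i] = g_mb.data[i];
    return true;
}
```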

 

3) The tight coupling of cores and memory through the crossbars (XBAR)
tempts one to consider the complete RAM space as shared memory. I wonder
whether this can be implemented with the D cache on. Is there a hardware
mechanism that notifies the cache of core A that its contents have become
invalid because core B wrote to the corresponding addresses? If not, is
there a software way of notifying (B to A) or of invalidating the other
core's cache?
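By software invalidation I mean something like the following. The HAL function and the line size are my assumptions (on the e200 I would expect a dcbi loop over the range, but please correct me); the stub body only keeps the sketch compilable on a host.

```c
#include <stddef.h>

#define CACHE_LINE_SIZE 32u  /* assumed D-cache line size; to be
                                checked against the core manual */

/* Hypothetical HAL function: on the target this would invalidate
   every D-cache line covering [addr, addr+len), e.g. with dcbi.
   Here it is a no-op stub so the sketch runs on a host. */
static void dcache_invalidate_range(const void *addr, size_t len)
{
    (void)addr;
    (void)len;
}

/* Consumer on core A: before reading data that core B wrote to
   shared RAM, discard any stale cached copy of that range. */
int read_shared_value(const volatile int *p)
{
    dcache_invalidate_range((const void *)p, sizeof *p);
    return *p;
}
```

If such a software invalidation is the intended approach, my follow-up question would be whether it is safe while the D cache is enabled for the rest of the address space.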

 

4) Does it make any difference if DMA is used to write data into RAM?
Will a cached read of the DMA destination area fail? Or is there a
hardware mechanism that invalidates the cache of the reading core so that
it really sees the DMA-written information? Or is the concept that a core
invalidates its own cache after it has received the DMA-completed
interrupt but before reading the DMA-written contents?
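The DMA pattern I have in mind would look roughly like this sketch. The invalidation call is hypothetical (a stub here, presumably dcbi over the buffer on the target), and the flag handling is simplified to a single polled Boolean.

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

#define DMA_BUF_LEN 16u
static volatile uint8_t g_dmaBuf[DMA_BUF_LEN];  /* DMA destination */
static volatile bool    g_dmaDataValid = false;

/* Hypothetical HAL call; no-op stub so the sketch runs on a host. */
static void dcacheInvalidate(volatile const void *addr, size_t len)
{
    (void)addr;
    (void)len;
}

/* DMA-complete ISR: invalidate the destination range first, so that
   later cached reads fetch the fresh RAM contents, then flag the
   data as valid for the main context. */
void dmaComplete_isr(void)
{
    dcacheInvalidate(g_dmaBuf, sizeof g_dmaBuf);
    g_dmaDataValid = true;
}

/* Main context: polls the flag and may then read the buffer safely. */
bool try_read_dma_byte(size_t idx, uint8_t *out)
{
    if (!g_dmaDataValid || idx >= DMA_BUF_LEN)
        return false;
    *out = g_dmaBuf[idx];
    return true;
}
```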

 

If so, the same concept could be implemented for core-to-core
communication, too, using a software interrupt. Right? Is this a typical
or even the recommended way to do it?

 

5) Any more hints? Is there specific documentation available on these topics?

 

Best regards

Peter
