Hi igorpadykov,
First of all, thanks for the e-mail and the tips. It was exactly the example we were looking for!
We would like to ask you another question:
We are experimenting with the partition management SCFW API on the iMX8QM-MEK and trying to understand how flexibly it can support different use cases with different resources. We are particularly interested in a use case where we run different execution environments not on different clusters (e.g., Linux on the 4x A53 and a customized bare-metal application on the 2x A72), but on the same cluster (e.g., Linux on 3x A53 and a customized bare-metal application on 1x A53).

While we were able to isolate the execution environments across different clusters (Linux on the A53 cluster and a customized bare-metal application on the A72 cluster), we did not succeed (and we tried very hard!) in isolating them across cores of the same cluster. We created a setup where we assigned one A53 core to a new partition, pt_sep, and kept the other cores in pt_boot. We also created a memory region that only pt_sep should be able to access. We are able to start the cores, but the partition that owns the single A53 core and that memory region, pt_sep, triggers a fault when reading from or writing to the memory region assigned to it.
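To make the description concrete, the sketch below shows roughly the sequence of SCFW RM/PM calls we use. It is simplified: error checking is mostly omitted, the header path depends on the SDK/porting kit, and the IPC channel, memory range, core index and boot address are placeholders rather than our exact values.

```c
/*
 * Rough sketch of our partitioning sequence (placeholders, not exact values).
 */
#include "sci/sci.h"   /* SCFW API headers; path depends on the SDK/porting kit */

int setup_pt_sep(void)
{
    sc_ipc_t   ipc;
    sc_rm_pt_t pt_sep;
    sc_rm_mr_t mr_sep;
    sc_err_t   err;

    /* Open an IPC channel to the SCU (MU/channel is board and OS specific) */
    err = sc_ipc_open(&ipc, SC_IPC_CH);
    if (err != SC_ERR_NONE)
        return -1;

    /* Allocate a new, isolated partition for the separated A53 core */
    err = sc_rm_partition_alloc(ipc, &pt_sep,
                                SC_FALSE,  /* secure     */
                                SC_TRUE,   /* isolated   */
                                SC_FALSE,  /* restricted */
                                SC_FALSE,  /* grant      */
                                SC_FALSE); /* coherent   */

    /* Move one A53 core out of pt_boot into the new partition */
    err = sc_rm_assign_resource(ipc, pt_sep, SC_R_A53_3);

    /* Carve out a memory region for the bare-metal application ... */
    err = sc_rm_memreg_alloc(ipc, &mr_sep, 0xE0000000ULL, 0xEFFFFFFFULL);

    /* ... assign it to pt_sep and give that partition full access */
    err = sc_rm_assign_memreg(ipc, pt_sep, mr_sep);
    err = sc_rm_set_memreg_permissions(ipc, mr_sep, pt_sep, SC_RM_PERM_FULL);

    /* Power up and start the separated core at the bare-metal entry point */
    err = sc_pm_set_resource_power_mode(ipc, SC_R_A53_3, SC_PM_PW_MODE_ON);
    err = sc_pm_cpu_start(ipc, SC_R_A53_3, SC_TRUE, 0xE0000000ULL);

    sc_ipc_close(ipc);
    return (err == SC_ERR_NONE) ? 0 : -1;
}
```

With this configuration the separated core does boot, but its accesses to the memory region assigned to pt_sep fault, as described above.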
Since we are not able to get any specific information from the xRDC2 itself on this platform (it is abstracted/exposed by the SCFW), and the SCFW documentation doesn't state it explicitly, we would like to know whether this kind of configuration, where CPU cores of the same cluster are isolated from each other, is possible. After many attempts, our suspicion is that the A53 cores may share the same master ID, and that this may be a limitation of the isolation hardware itself (xRDC2).
Thank you.