IMX8QM Boot Logic Partitions - SCFW API



Contributor III

Greetings,

We are currently working with the iMX8QM and are trying a few different methods of bootstrapping the environment. In one of our experiments we are trying to get the cortex-m4_0 to run as the boot partition, while running another partition for the AP clusters. To make this happen we created another partition, pt_a, to which we assigned all resources previously assigned to boot; the boot partition is left with all resources that were attributed to the m4_0 partition.

By "previously" we mean the default boards.c file. We have not been successful in booting the AP clusters.
We changed the iMX8QM imx-mkimage makefile rule to the following:

flash_linux_m4: $(MKIMG) mx8qm-ahab-container.img scfw_tcm.bin u-boot-spl.bin m4_image.bin m4_1_image.bin u-boot-atf-container.img

./$(MKIMG) -soc QM -rev B0 -dcd skip -append mx8qm-ahab-container.img -c -flags 0x00200000 -scfw scfw_tcm.bin -p1 -m4 m4_image.bin 0 0x34FE0000 -p3 -ap u-boot-spl.bin a53 0x00100000 -out flash.bin

With this we were expecting the SCU to be able to boot the AP clusters as partition 3, but this did not happen.

Furthermore, we tried to boot the AP from the cortex-m4_0 (which now owns the boot partition), but we can only wake up the AP to run from DDR. Trying to wake it up to run from address 0x100000 (as specified by default in the makefile) does not work and returns error 3: bad parameters. We think it may be related to some OCRAM (On-Chip RAM) particularity.

We considered loading the image ourselves from the m4_0 binary, which we would load into DDR instead of TCM, but we would really like to avoid this.

Is there something we are doing wrong? From our understanding of the SCU API we should be able to do what we are trying to achieve.
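For reference, here is a minimal sketch of the wake-up attempt from the M4 side using the SCFW PM service. This assumes the SCFW porting-kit headers and an already-opened IPC channel; the resource and boot address are the ones from the experiment above, and error 3 corresponds to SC_ERR_PARM in the SCFW error codes:

```c
/* Sketch: waking an AP core from the cortex-m4_0 via the SCFW PM service.
 * Assumes the SCFW porting-kit headers (sci/sci.h pulls in the PM service)
 * and an IPC channel that was already opened with sc_ipc_open().
 * The boot address matches the -ap entry in the imx-mkimage rule. */
#include "sci/sci.h"

sc_err_t start_a53_0(sc_ipc_t ipc)
{
    sc_err_t err;

    /* Power the core on before releasing it. */
    err = sc_pm_set_resource_power_mode(ipc, SC_R_A53_0, SC_PM_PW_MODE_ON);
    if (err != SC_ERR_NONE)
        return err;

    /* Release the core at the OCRAM address from the makefile.
     * With 0x00100000 this is where error 3 (SC_ERR_PARM) is returned;
     * a DDR address is reported to work. */
    return sc_pm_cpu_start(ipc, SC_R_A53_0, SC_TRUE, 0x00100000ULL);
}
```

Note that sc_pm_cpu_start() only accepts a boot address the SCFW considers valid for the caller's partition, so the SC_ERR_PARM may come from the partition's resource/memory assignment rather than from the address itself.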

3 Replies

NXP TechSupport

Hi Daniel

Partitioning examples can be found at the link below; an example was also sent via mail:

i.MX8 Boot process and creating a bootable image 

https://community.nxp.com/docs/DOC-341481 

Best regards
igor
-----------------------------------------------------------------------------------------------------------------------
Note: If this post answers your question, please click the Correct Answer button. Thank you!
-----------------------------------------------------------------------------------------------------------------------


Contributor III

Hi igorpadykov,

First of all, thanks for the email and the tips. It was exactly that example we were looking into!

We would like to ask you another question: 

We are experimenting with the partition management SCFW API on the iMX8QM-MEK and aiming at understanding the flexibility to implement different use cases with different resources. We are particularly interested in a use case where we would like to run different execution environments, not on different clusters (e.g., Linux on the 4xA53 and a customized bare-metal application on the 2xA72), but on the same cluster (e.g., Linux on 3xA53 and a customized bare-metal application on 1xA53).

While we were able to isolate the different execution environments atop different clusters (Linux on the A53 cluster and a customized bare-metal application on the A72 cluster), we didn't succeed (and we tried very hard!) in isolating the different execution environments on cores of the same cluster. We created a setup where we gave one A53 core to a partition, pt_sep, while the other cores are kept in pt_boot. Further, we created a memory region which only pt_sep should be able to access. We are able to start the cores, but the partition assigned the single A53 and the specific memory region, pt_sep, triggers a fault when reading/writing the memory region assigned to it.
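For context, this is roughly how the pt_sep setup was done, sketched against the SCFW RM API from the porting kit. The partition flags, the choice of SC_R_A53_3, and the DDR address range are illustrative values from our experiment and may not match a working configuration:

```c
/* Sketch: an isolated partition (pt_sep) owning a single A53 core plus a
 * private DDR region, configured from the SCFW board code. Assumes the
 * SCFW porting-kit headers; addresses are illustrative. */
#include "sci/sci.h"

sc_err_t setup_pt_sep(sc_ipc_t ipc)
{
    sc_rm_pt_t pt_sep;
    sc_rm_mr_t mr;
    sc_err_t err;

    /* Non-secure, isolated partition for the single A53 core
     * (args: secure, isolated, restricted, grant, coherent). */
    err = sc_rm_partition_alloc(ipc, &pt_sep, SC_FALSE, SC_TRUE,
                                SC_FALSE, SC_TRUE, SC_FALSE);
    if (err != SC_ERR_NONE)
        return err;

    /* Move one core out of pt_boot into pt_sep. */
    err = sc_rm_assign_resource(ipc, pt_sep, SC_R_A53_3);
    if (err != SC_ERR_NONE)
        return err;

    /* Carve out the DDR region only pt_sep should access (example range). */
    err = sc_rm_memreg_alloc(ipc, &mr, 0x90000000ULL, 0x90FFFFFFULL);
    if (err != SC_ERR_NONE)
        return err;

    err = sc_rm_assign_memreg(ipc, pt_sep, mr);
    if (err != SC_ERR_NONE)
        return err;

    /* Grant pt_sep full access; this is the region where the pt_sep core
     * nevertheless faults on read/write in our setup. */
    return sc_rm_set_memreg_permissions(ipc, mr, pt_sep, SC_RM_PERM_FULL);
}
```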

Since we are not able to get any specific information about the xRDC2 itself for this platform (it is abstracted/exposed by the SCFW), and the SCFW documentation doesn't state it explicitly, we would like to know whether this kind of configuration, where CPU cores of the same cluster are isolated from each other, is possible. We have made many attempts, and our feeling is that the A53 cores may share the same master ID, which may impose a limitation on the isolation hardware itself (xRDC2).

Thank you.


Contributor I

You can't really use the xRDC2 to partition cores within one cluster, because they share the L2 cache and the xRDC2 enforces access control at the bus level.
