Hi,
I was trying to create 8 partitions by editing the hv.dts file, but after booting into the hypervisor it fails with "could not load image" errors, as shown below:
[0] =======================================
[0] Freescale Hypervisor 1.3-009
[0] Hypervisor command line: config-addr=0xfe8900000 console=ttyS0,115200
[0] malloc_init: using 14100 KiB at 0x7f130150 - 0x7fef4fff
[0] malloc_init: using 1060 KiB at 0x7fef6000 - 0x7fffefff
[0] dt_read_aliases: Alias pci1 points to non-existent /pcie@ffe201000
[0] read_pma: phys-mem-area lnx1_pma is not a power of two
[6] read_pma: phys-mem-area lnx7_pma is not a power of two
[3] read_pma: phys-mem-area lnx4_pma is not a power of two
[7] read_pma: phys-mem-area lnx8_pma is not a power of two
[1] read_pma: phys-mem-area lnx2_pma is not a power of two
[2] read_pma: phys-mem-area lnx3_pma is not a power of two
[0] read_pma: phys-mem-area lnx1_pma is not a power of two
[5] read_pma: phys-mem-area lnx6_pma is not a power of two
[4] read_pma: phys-mem-area lnx5_pma is not a power of two
[0] assign_callback: device buffer-pool@7 in buffer-pool@7 not found
[0] assign_callback: device buffer-pool@8 in buffer-pool@8 not found
[0] assign_callback: device buffer-pool@9 in buffer-pool@9 not found
[0] assign_callback: device buffer-pool@16 in buffer-pool@16 not found
[0] assign_callback: device buffer-pool@17 in buffer-pool@17 not found
[0] assign_callback: device /fsl,dpaa/ethernet@16 in dpa-ethernet@16 not found
[0] assign_callback: device /fsl,dpaa/dpa-fman0-oh@1 in dpa-fman0-oh@1 not found
[1] get_rpn: mem-range has unmapped guest address at 0x0.
[1] get_rpn: mem-range has unmapped guest address at 0x0.
[1] get_rpn: mem-range has unmapped guest address at 0x0.
[1] get_rpn: mem-range has unmapped guest address at 0x0.
[0] get_rpn: mem-range has unmapped guest address at 0x0.
[0] get_rpn: mem-range has unmapped guest address at 0x0.
[0] get_rpn: mem-range has unmapped guest address at 0x0.
[0] get_rpn: mem-range has unmapped guest address at 0x0.
[0] get_rpn: mem-range has unmapped guest address at 0x0.
[0] get_rpn: mem-range has unmapped guest address at 0x0.
[0] get_rpn: mem-range has unmapped guest address at 0x0.
[0] get_rpn: mem-range has unmapped guest address at 0x0.
[0] get_rpn: mem-range has unmapped guest address at 0x0.
[0] get_rpn: mem-range has unmapped guest address at 0x0.
[0] get_rpn: mem-range has unmapped guest address at 0x0.
[0] get_rpn: mem-range has unmapped guest address at 0x0.
[0] get_rpn: mem-range has unmapped guest address at 0x0.
[0] get_rpn: mem-range has unmapped guest address at 0x0.
[0] get_rpn: mem-range has unmapped guest address at 0x0.
[0] get_rpn: mem-range has unmapped guest address at 0x0.
[0] get_rpn: mem-range has unmapped guest address at 0x0.
[0] get_rpn: mem-range has unmapped guest address at 0x0.
[0] get_rpn: mem-range has unmapped guest address at 0x0.
[0] get_rpn: mem-range has unmapped guest address at 0x0.
[0] get_rpn: mem-range has unmapped guest address at 0x0.
[0] get_rpn: mem-range has unmapped guest address at 0x0.
[0] get_rpn: mem-range has unmapped guest address at 0x0.
[0] get_rpn: mem-range has unmapped guest address at 0x0.
[0] get_rpn: mem-range has unmapped guest address at 0x0.
[0] get_rpn: mem-range has unmapped guest address at 0x0.
[0] get_rpn: mem-range has unmapped guest address at 0x0.
[0] get_rpn: mem-range has unmapped guest address at 0x0.
[0] get_rpn: mem-range has unmapped guest address at 0x0.
[0] get_rpn: mem-range has unmapped guest address at 0x0.
[0] hv_pamu_config_liodn: liodn 146 or device in use
[0] configure_liodn: config of liodn failed (rc=-259)
[0] hv_pamu_config_liodn: liodn 146 or device in use
[0] configure_liodn: config of liodn failed (rc=-259)
[0] hv_pamu_config_liodn: liodn 146 or device in use
[0] configure_liodn: config of liodn failed (rc=-259)
[0] get_rpn: mem-range has unmapped guest address at 0x0.
[0] get_rpn: mem-range has unmapped guest address at 0x0.
[0] get_rpn: mem-range has unmapped guest address at 0x0.
[0] get_rpn: mem-range has unmapped guest address at 0x0.
[0] get_rpn: mem-range has unmapped guest address at 0x0.
[0] get_rpn: mem-range has unmapped guest address at 0x0.
[0] get_rpn: mem-range has unmapped guest address at 0x0.
[0] get_rpn: mem-range has unmapped guest address at 0x0.
[0] get_rpn: mem-range has unmapped guest address at 0x0.
[0] Loading uImage from 0xfe9300000 to 0x1300000
[2] Loading uImage from 0xfe9300000 to 0x1300000
[3] Loading uImage from 0xfe9300000 to 0x1300000
[1] Loading uImage from 0xfe9300000 to 0x1300000
[2] copy_phys_to_gphys: cannot map dest 1300000, 0 bytes
[5] Loading uImage from 0xfe9300000 to 0x1300000
[2] load_uimage: cannot copy
[1] copy_phys_to_gphys: cannot map dest 1300000, 0 bytes
[3] copy_phys_to_gphys: cannot map dest 1300000, 0 bytes
[5] copy_phys_to_gphys: cannot map dest 1300000, 0 bytes
[6] Loading uImage from 0xfe9300000 to 0x1300000
[1] load_uimage: cannot copy
[3] load_uimage: cannot copy
[4] Loading uImage from 0xfe9300000 to 0x1300000
[6] copy_phys_to_gphys: cannot map dest 1300000, 0 bytes
[0] copy_phys_to_gphys: cannot map dest 1300000, 0 bytes
[3] guest p4-linux: could not load image
[5] load_uimage: cannot copy
[7] Loading uImage from 0xfe9300000 to 0x1300000
[1] guest p2-linux: could not load image
[2] guest p3-linux: could not load image
[4] copy_phys_to_gphys: cannot map dest 1300000, 0 bytes
[6] load_uimage: cannot copy
[0] load_uimage: cannot copy
[3] loading binary image from 0xfe8020000 to 0
[5] guest p6-linux: could not load image
[6] guest p7-linux: could not load image
[1] loading binary image from 0xfe8020000 to 0
[7] copy_phys_to_gphys: cannot map dest 1300000, 0 bytes
[0] guest p1-linux: could not load image
[3] copy_phys_to_gphys: cannot map dest 0, 0 bytes
[2] loading binary image from 0xfe8020000 to 0
[1] copy_phys_to_gphys: cannot map dest 0, 0 bytes
[3] guest p4-linux: could not load image
[6] loading binary image from 0xfe8020000 to 0
[7] load_uimage: cannot copy
[4] load_uimage: cannot copy
[7] guest p8-linux: could not load image
[1] guest p2-linux: could not load image
[2] copy_phys_to_gphys: cannot map dest 0, 0 bytes
[4] guest p5-linux: could not load image
[5] loading binary image from 0xfe8020000 to 0
[6] copy_phys_to_gphys: cannot map dest 0, 0 bytes
[0] loading binary image from 0xfe8020000 to 0
[2] guest p3-linux: could not load image
[5] copy_phys_to_gphys: cannot map dest 0, 0 bytes
[0] copy_phys_to_gphys: cannot map dest 0, 0 bytes
[5] guest p6-linux: could not load image
[4] loading binary image from 0xfe8020000 to 0
[6] guest p7-linux: could not load image
[4] copy_phys_to_gphys: cannot map dest 0, 0 bytes
[7] loading binary image from 0xfe8020000 to 0
[0] guest p1-linux: could not load image
[7] copy_phys_to_gphys: cannot map dest 0, 0 bytes
[4] guest p5-linux: could not load image
[7] guest p8-linux: could not load image
HV>
I was following Section 4.2.1.6, Hypervisor Deployment, of this document:
http://cache.freescale.com/files/soft_dev_tools/doc/support_info/QorIQ-SDK-1.6-IC-RevB.pdf
Here are the commands I used; are they correct?
4.2.1.6 Hypervisor Deployment
4.2.1.6.1 Introduction
At the U-Boot prompt, set the environment:
=>setenv bootargs config-addr=0xfe8900000 console=ttyS0,115200
=>setenv bootcmd 'bootm 0xfe8700000 - 0xfe8800000'
=>saveenv
4.2.1.6.2 Switch to bank 4 (assume alt bank)
1. Program kernel to Flash
=>tftp 1000000 /tftpboot/p4080ds/uImage
=>erase e8020000 +$filesize
=>cp.b 1000000 e8020000 $filesize
2. Program ramdisk filesystem to Flash
=>tftp 1000000 /tftpboot/p4080ds/fsl-image-core-p4080ds.tar.gz
=>erase e9300000 +$filesize
=>cp.b 1000000 e9300000 $filesize
3. Program Hypervisor image to Flash
=>tftp 1000000 /tftpboot/p4080ds/hv/hv.uImage
=>erase e8700000 +$filesize
=>cp.b 1000000 e8700000 $filesize
4. Program Kernel dtb to flash
=>tftp 1000000 /tftpboot/yocto/boot/p4080ds-usdpaa.dtb
=>erase e8800000 +$filesize
=>cp.b 1000000 e8800000 $filesize
5. Program HV dtb to Flash
=>tftp 1000000 /tftpboot/p4080ds/hv-cfg/R_PPSXX_0xe/hv-2p-lnx-lnx.dtb
=>erase e8900000 +$filesize
=>cp.b 1000000 e8900000 $filesize
6. Boot up the DS board
=>boot
Is there anything I did wrong? Did I transfer the wrong file?
Thanks,
Peter
That log shows a lot of errors in the hv config tree that need to be fixed.
Can you help me take a look at my hv.dts file? Thanks.
Please attach your hv.dts if you'd like me to look at it.
As the error messages say, you have PMAs that are not powers of two. This is not allowed (largely due to the relationship between PMAs and coherency subdomains). If you want a partition to have a non-power-of-two memory size you need to use multiple PMAs. A lot of the other error messages are consequences of having invalid PMAs.
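For example (only a sketch, reusing the phys-mem-area node format from the SDK examples; the node names and addresses here are illustrative, not taken from your hv.dts), a 384 MiB partition would be described with a 256 MiB PMA plus a 128 MiB PMA rather than a single 384 MiB PMA:
lnx1_pma_a: lnx1_pma_a {
        compatible = "phys-mem-area";
        addr = <0x0 0x00000000>;  // 256 MiB region, aligned to 256 MiB
        size = <0x0 0x10000000>;
};
lnx1_pma_b: lnx1_pma_b {
        compatible = "phys-mem-area";
        addr = <0x0 0x10000000>;  // 128 MiB region, aligned to 128 MiB
        size = <0x0 0x08000000>;
};
The partition then maps the two PMAs at consecutive guest physical addresses.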
Hi Scott,
I have changed the PMA size to 128 MB, but it seems the problem remains. Can you help me take a look?
Here are the error messages at boot-up:
[0] =======================================
[0] Freescale Hypervisor 1.3-009
[0] Hypervisor command line: config-addr=0xfe8900000 console=ttyS0,115200
[0] malloc_init: using 14096 KiB at 0x7f131150 - 0x7fef4fff
[0] malloc_init: using 1060 KiB at 0x7fef6000 - 0x7fffefff
[0] dt_read_aliases: Alias pci1 points to non-existent /pcie@ffe201000
[0] read_pma: phys-mem-area lnx2_pma is not naturally aligned
[2] read_pma: phys-mem-area lnx3_pma is not naturally aligned
[1] read_pma: phys-mem-area lnx2_pma is not naturally aligned
[0] assign_callback: device buffer-pool@16 in buffer-pool@16 not found
[0] assign_callback: device buffer-pool@17 in buffer-pool@17 not found
[0] assign_callback: device /fsl,dpaa/ethernet@16 in dpa-ethernet@16 not found
[1] get_rpn: mem-range has unmapped guest address at 0x0.
[1] get_rpn: mem-range has unmapped guest address at 0x0.
[1] get_rpn: mem-range has unmapped guest address at 0x0.
[1] get_rpn: mem-range has unmapped guest address at 0x0.
[0] get_rpn: mem-range has unmapped guest address at 0x8000000.
[0] get_rpn: mem-range has unmapped guest address at 0x8000000.
[0] get_rpn: mem-range has unmapped guest address at 0x8000000.
[0] get_rpn: mem-range has unmapped guest address at 0x8000000.
[0] get_rpn: mem-range has unmapped guest address at 0x8000000.
[0] get_rpn: mem-range has unmapped guest address at 0x8000000.
[0] get_rpn: mem-range has unmapped guest address at 0x8000000.
[0] get_rpn: mem-range has unmapped guest address at 0x8000000.
[0] get_rpn: mem-range has unmapped guest address at 0x8000000.
[0] get_rpn: mem-range has unmapped guest address at 0x8000000.
[0] get_rpn: mem-range has unmapped guest address at 0x8000000.
[0] get_rpn: mem-range has unmapped guest address at 0x8000000.
[0] get_rpn: mem-range has unmapped guest address at 0x8000000.
[0] get_rpn: mem-range has unmapped guest address at 0x8000000.
[0] get_rpn: mem-range has unmapped guest address at 0x8000000.
[0] get_rpn: mem-range has unmapped guest address at 0x8000000.
[0] get_rpn: mem-range has unmapped guest address at 0x8000000.
[0] get_rpn: mem-range has unmapped guest address at 0x8000000.
[0] get_rpn: mem-range has unmapped guest address at 0x8000000.
[0] get_rpn: mem-range has unmapped guest address at 0x8000000.
[0] get_rpn: mem-range has unmapped guest address at 0x8000000.
[0] get_rpn: mem-range has unmapped guest address at 0x8000000.
[0] get_rpn: mem-range has unmapped guest address at 0x8000000.
[0] get_rpn: mem-range has unmapped guest address at 0x8000000.
[0] get_rpn: mem-range has unmapped guest address at 0x8000000.
[0] get_rpn: mem-range has unmapped guest address at 0x8000000.
[0] get_rpn: mem-range has unmapped guest address at 0x8000000.
[0] get_rpn: mem-range has unmapped guest address at 0x8000000.
[0] get_rpn: mem-range has unmapped guest address at 0x8000000.
[0] get_rpn: mem-range has unmapped guest address at 0x8000000.
[0] hv_pamu_config_liodn: liodn 146 or device in use
[0] configure_liodn: config of liodn failed (rc=-259)
[0] hv_pamu_config_liodn: liodn 146 or device in use
[0] configure_liodn: config of liodn failed (rc=-259)
[0] hv_pamu_config_liodn: liodn 146 or device in use
[0] configure_liodn: config of liodn failed (rc=-259)
[0] get_rpn: mem-range has unmapped guest address at 0x8000000.
[0] get_rpn: mem-range has unmapped guest address at 0x8000000.
[0] get_rpn: mem-range has unmapped guest address at 0x8000000.
[0] get_rpn: mem-range has unmapped guest address at 0x8000000.
[0] get_rpn: mem-range has unmapped guest address at 0x8000000.
[0] get_rpn: mem-range has unmapped guest address at 0x8000000.
[0] get_rpn: mem-range has unmapped guest address at 0x8000000.
[0] get_rpn: mem-range has unmapped guest address at 0x8000000.
[0] get_rpn: mem-range has unmapped guest address at 0x8000000.
[0] Loading uImage from 0xfe9300000 to 0x1300000
[3] Loading uImage from 0xfe9300000 to 0x1300000
[2] Loading uImage from 0xfe9300000 to 0x1300000
[6] Loading uImage from 0xfe9300000 to 0x1300000
[2] copy_phys_to_gphys: cannot map dest 1300000, 0 bytes
[4] Loading uImage from 0xfe9300000 to 0x1300000
[7] Loading uImage from 0xfe9300000 to 0x1300000
[1] Loading uImage from 0xfe9300000 to 0x1300000
[2] load_uimage: cannot copy
[5] Loading uImage from 0xfe9300000 to 0x1300000
[1] copy_phys_to_gphys: cannot map dest 1300000, 0 bytes
[2] guest p3-linux: could not load image
[1] load_uimage: cannot copy
[1] guest p2-linux: could not load image
[2] Loading uImage from 0xfe8020000 to 0
[2] load_uimage: image size exceeds target window
[1] Loading uImage from 0xfe8020000 to 0
[2] guest p3-linux: could not load image
[1] load_uimage: image size exceeds target window
[1] guest p2-linux: could not load image
HV> info
Partition Name State Vcpus
-------------------------------------------
1 p1-linux starting 1
2 p2-linux stopped 1
3 p3-linux stopped 1
4 p4-linux starting 1
5 p5-linux starting 1
6 p6-linux starting 1
7 p7-linux starting 1
8 p8-linux starting 1
HV> info
Partition Name State Vcpus
-------------------------------------------
1 p1-linux starting 1
2 p2-linux stopped 1
3 p3-linux stopped 1
4 p4-linux starting 1
5 p5-linux starting 1
6 p6-linux starting 1
7 p7-linux starting 1
8 p8-linux starting 1
[6] Loading uImage from 0xfe8020000 to 0
[6] load_uimage: image size exceeds target window
[3] Loading uImage from 0xfe8020000 to 0
[6] guest p7-linux: could not load image
[3] load_uimage: image size exceeds target window
[3] guest p4-linux: could not load image
[7] Loading uImage from 0xfe8020000 to 0
[7] load_uimage: image size exceeds target window
[4] Loading uImage from 0xfe8020000 to 0
[4] load_uimage: image size exceeds target window
[7] guest p8-linux: could not load image
[4] guest p5-linux: could not load image
[5] Loading uImage from 0xfe8020000 to 0
[5] load_uimage: image size exceeds target window
[5] guest p6-linux: could not load image
[0] Loading uImage from 0xfe8020000 to 0
[0] load_uimage: image size exceeds target window
[0] guest p1-linux: could not load image
HV> info
Partition Name State Vcpus
-------------------------------------------
1 p1-linux stopped 1
2 p2-linux stopped 1
3 p3-linux stopped 1
4 p4-linux stopped 1
5 p5-linux stopped 1
6 p6-linux stopped 1
7 p7-linux stopped 1
8 p8-linux stopped 1
HV>
And attached is my updated hv.dts.
Thanks & Regards,
Peter
Hello Peter,
It seems there is a problem with the lnx2_pma definition in your hv.dts file.
Please refer to the following hypervisor code, which produces the error "phys-mem-area lnx2_pma is not naturally aligned":
/* "Naturally aligned" means the start address is a multiple of the size;
 * with a power-of-two size this reduces to checking the low-order bits. */
if (pma->start & (pma->size - 1)) {
        printlog(LOGTYPE_PARTITION, LOGLEVEL_ERROR,
                 "%s: phys-mem-area %s is not naturally aligned\n",
                 __func__, node->name);
        goto out_free;
}
Thanks,
Yiping
Hi Yiping,
Thank you for your reply. Could you suggest how I can modify the corresponding PMA to get past this error?
Also, could you help take a look at the other errors as well?
Thank you very much for your help.
Regards,
Peter
Natural alignment means the PMA needs to be aligned to its size. If your PMA is 128 MiB, it needs to be aligned to a 128 MiB boundary. The other errors may be a consequence of this one, so address this one first.
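Concretely, with the 128 MiB (0x08000000) size from your log, the start address must be a multiple of 0x08000000 (the addresses below are only examples, not taken from your hv.dts):
// size = <0x0 0x08000000>  (128 MiB)
// addr = <0x0 0x08000000>  -> OK: multiple of 0x08000000
// addr = <0x0 0x10000000>  -> OK: multiple of 0x08000000
// addr = <0x0 0x0c000000>  -> rejected: not a multiple of the 128 MiB size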
Hi Scott,
I have fixed the alignment issue, but it still cannot boot up. Would you mind taking a look again?
[0] =======================================
[0] Freescale Hypervisor 1.3-009
[0] Hypervisor command line: config-addr=0xfe8900000 console=ttyS0,115200
[0] malloc_init: using 14096 KiB at 0x7f131150 - 0x7fef4fff
[0] malloc_init: using 1060 KiB at 0x7fef6000 - 0x7fffefff
[0] dt_read_aliases: Alias pci1 points to non-existent /pcie@ffe201000
[0] assign_callback: device buffer-pool@16 in buffer-pool@16 not found
[0] assign_callback: device buffer-pool@17 in buffer-pool@17 not found
[0] assign_callback: device /fsl,dpaa/ethernet@16 in dpa-ethernet@16 not found
[1] get_rpn: mem-range has unmapped guest address at 0x8000000.
[1] get_rpn: mem-range has unmapped guest address at 0x8000000.
[1] get_rpn: mem-range has unmapped guest address at 0x8000000.
[1] get_rpn: mem-range has unmapped guest address at 0x8000000.
[0] get_rpn: mem-range has unmapped guest address at 0x8000000.
[0] get_rpn: mem-range has unmapped guest address at 0x8000000.
[0] get_rpn: mem-range has unmapped guest address at 0x8000000.
[0] get_rpn: mem-range has unmapped guest address at 0x8000000.
[0] get_rpn: mem-range has unmapped guest address at 0x8000000.
[0] get_rpn: mem-range has unmapped guest address at 0x8000000.
[0] get_rpn: mem-range has unmapped guest address at 0x8000000.
[0] get_rpn: mem-range has unmapped guest address at 0x8000000.
[0] get_rpn: mem-range has unmapped guest address at 0x8000000.
[0] get_rpn: mem-range has unmapped guest address at 0x8000000.
[0] get_rpn: mem-range has unmapped guest address at 0x8000000.
[0] get_rpn: mem-range has unmapped guest address at 0x8000000.
[0] get_rpn: mem-range has unmapped guest address at 0x8000000.
[0] get_rpn: mem-range has unmapped guest address at 0x8000000.
[0] get_rpn: mem-range has unmapped guest address at 0x8000000.
[0] get_rpn: mem-range has unmapped guest address at 0x8000000.
[0] get_rpn: mem-range has unmapped guest address at 0x8000000.
[0] get_rpn: mem-range has unmapped guest address at 0x8000000.
[0] get_rpn: mem-range has unmapped guest address at 0x8000000.
[0] get_rpn: mem-range has unmapped guest address at 0x8000000.
[0] get_rpn: mem-range has unmapped guest address at 0x8000000.
[0] get_rpn: mem-range has unmapped guest address at 0x8000000.
[0] get_rpn: mem-range has unmapped guest address at 0x8000000.
[0] get_rpn: mem-range has unmapped guest address at 0x8000000.
[0] get_rpn: mem-range has unmapped guest address at 0x8000000.
[0] get_rpn: mem-range has unmapped guest address at 0x8000000.
[0] get_rpn: mem-range has unmapped guest address at 0x8000000.
[0] get_rpn: mem-range has unmapped guest address at 0x8000000.
[0] get_rpn: mem-range has unmapped guest address at 0x8000000.
[0] get_rpn: mem-range has unmapped guest address at 0x8000000.
[0] hv_pamu_config_liodn: liodn 146 or device in use
[0] configure_liodn: config of liodn failed (rc=-259)
[0] hv_pamu_config_liodn: liodn 146 or device in use
[0] configure_liodn: config of liodn failed (rc=-259)
[0] hv_pamu_config_liodn: liodn 146 or device in use
[0] configure_liodn: config of liodn failed (rc=-259)
[0] get_rpn: mem-range has unmapped guest address at 0x8000000.
[0] get_rpn: mem-range has unmapped guest address at 0x8000000.
[0] get_rpn: mem-range has unmapped guest address at 0x8000000.
[0] get_rpn: mem-range has unmapped guest address at 0x8000000.
[0] get_rpn: mem-range has unmapped guest address at 0x8000000.
[0] get_rpn: mem-range has unmapped guest address at 0x8000000.
[0] get_rpn: mem-range has unmapped guest address at 0x8000000.
[0] get_rpn: mem-range has unmapped guest address at 0x8000000.
[0] get_rpn: mem-range has unmapped guest address at 0x8000000.
[0] Loading uImage from 0xfe9300000 to 0x1300000
[1] Loading uImage from 0xfe9300000 to 0x1300000
[3] Loading uImage from 0xfe9300000 to 0x1300000
[6] Loading uImage from 0xfe9300000 to 0x1300000
[7] Loading uImage from 0xfe9300000 to 0x1300000
[2] Loading uImage from 0xfe9300000 to 0x1300000
[4] Loading uImage from 0xfe9300000 to 0x1300000
[5] Loading uImage from 0xfe9300000 to 0x1300000
HV> info
Partition Name State Vcpus
-------------------------------------------
1 p1-linux starting 1
2 p2-linux starting 1
3 p3-linux starting 1
4 p4-linux starting 1
5 p5-linux starting 1
6 p6-linux starting 1
7 p7-linux starting 1
8 p8-linux starting 1
HV> info
Partition Name State Vcpus
-------------------------------------------
1 p1-linux starting 1
2 p2-linux starting 1
3 p3-linux starting 1
4 p4-linux starting 1
5 p5-linux starting 1
6 p6-linux starting 1
7 p7-linux starting 1
8 p8-linux starting 1
[1] Loading uImage from 0xfe8020000 to 0
[1] load_uimage: image size exceeds target window
[1] guest p2-linux: could not load image
[3] Loading uImage from 0xfe8020000 to 0
[3] load_uimage: image size exceeds target window
[3] guest p4-linux: could not load image
[7] Loading uImage from 0xfe8020000 to 0
[7] load_uimage: image size exceeds target window
[7] guest p8-linux: could not load image
[6] Loading uImage from 0xfe8020000 to 0
[6] load_uimage: image size exceeds target window
[6] guest p7-linux: could not load image
[2] Loading uImage from 0xfe8020000 to 0
[2] load_uimage: image size exceeds target window
[2] guest p3-linux: could not load image
[4] Loading uImage from 0xfe8020000 to 0
[4] load_uimage: image size exceeds target window
[4] guest p5-linux: could not load image
[5] Loading uImage from 0xfe8020000 to 0
[5] load_uimage: image size exceeds target window
[5] guest p6-linux: could not load image
[0] Loading uImage from 0xfe8020000 to 0
[0] load_uimage: image size exceeds target window
[0] guest p1-linux: could not load image
HV>
By the way, attached is the uImage I used. It was flashed using the following commands:
=>tftp 1000000 /tftpboot/p4080ds/uImage--3.12-r0-p4080ds-20150311135235.bin
=>erase e8020000 +$filesize
=>cp.b 1000000 e8020000 $filesize
Could this be because the uImage is wrong? If so, which file in the build folder should I flash?
Thanks,
Peter
Please attach your updated hv.dts.
Hello Peter,
In the dts file, you need to modify the lnx2_pma allocation as follows:
lnx2_pma: lnx2_pma {
        compatible = "phys-mem-area";
        addr = <0x0 0x08000000>;  // Linux
        size = <0x0 0x08000000>;  // 128 MB
        // PETER: allocate cpc-ways to it
        allocate-cpc-ways = <4 5 6 7>;
};
The address 0xe8020000 is in the current bank; you need to switch to the alt bank and do the flash programming there.
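On a P4080DS this bank switch is usually done from U-Boot with the PIXIS reset command before reflashing (shown here only as a reminder; please confirm the exact command against your board documentation):
=>pixis_reset altbank
After the board comes back up from the alternate bank (the vBank field in the U-Boot banner should change), repeat the tftp/erase/cp.b programming steps there.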
Would you please also provide your U-Boot log (including the U-Boot setup log, the actions you performed, and the hypervisor setup log) so we can investigate further?
Have a great day,
Yiping
Hi Yiping,
Thank you for your reply. I have made the change you suggested for the alignment issue, but it seems the problem remains.
As requested, the following is the U-Boot log:
U-Boot 2014.01QorIQ-SDK-V1.6+gfe1d4f5 (Jun 08 2014 - 23:29:25)
CPU0: P4080E, Version: 3.0, (0x82080030)
Core: e500mc, Version: 3.1, (0x80230031)
Clock Configuration:
CPU0:1499.985 MHz, CPU1:1499.985 MHz, CPU2:1499.985 MHz, CPU3:1499.985 MHz,
CPU4:1499.985 MHz, CPU5:1499.985 MHz, CPU6:1499.985 MHz, CPU7:1499.985 MHz,
CCB:799.992 MHz,
DDR:649.994 MHz (1299.987 MT/s data rate) (Asynchronous), LBC:99.999 MHz
FMAN1: 599.994 MHz
FMAN2: 599.994 MHz
QMAN: 399.996 MHz
PME: 599.994 MHz
L1: D-cache 32 KiB enabled
I-cache 32 KiB enabled
Reset Configuration Word (RCW):
00000000: 105a0000 00000000 1e1e181e 0000cccc
00000010: 3842440c 3c3c2000 de800000 e1000000
00000020: 00000000 00000000 00000000 008b6000
00000030: 00000000 00000000 00000000 00000000
Board: P4080DS, Sys ID: 0x17, Sys Ver: 0x01, FPGA Ver: 0x0c, vBank: 0
SERDES Reference Clocks: Bank1=100MHz Bank2=125MHz Bank3=125MHz
I2C: ready
SPI: ready
DRAM: Initializing....using SPD
Detected UDIMM i-DIMM
Detected UDIMM i-DIMM
2 GiB left unmapped
4 GiB (DDR3, 64-bit, CL=9, ECC on)
DDR Controller Interleaving Mode: cache line
DDR Chip-Select Interleaving Mode: CS0+CS1
Testing 0x00000000 - 0x7fffffff
Testing 0x80000000 - 0xffffffff
Remap DDR 2 GiB left unmapped
POST memory PASSED
Flash: 128 MiB
L2: 128 KiB enabled
Corenet Platform Cache: 2 MiB enabled
SRIO1: disabled
SRIO2: disabled
MMC: FSL_SDHC: 0
EEPROM: NXID v1
PCIe1: Root Complex, no link, regs @ 0xfe200000
PCIe1: Bus 00 - 00
PCIe2: disabled
PCIe3: Root Complex, x1 gen1, regs @ 0xfe202000
02:00.0 - 1095:3132 - Mass storage controller
PCIe3: Bus 01 - 02
In: serial
Out: serial
Err: serial
Net: Fman1: Uploading microcode version 106.2.14
Phy not found
PHY reset timed out
Fman2: Uploading microcode version 106.2.14
Phy not found
PHY reset timed out
FM1@DTSEC2 [PRIME], FM1@TGEC1, FM2@DTSEC3, FM2@DTSEC4, FM2@TGEC1
Hit any key to stop autoboot: 0
=>
And this is the boot up environment:
=> pri
baudrate=115200
bdev=sda1
bootargs=config-addr=0xfe8900000 console=ttyS0,115200
bootcmd=bootm 0xfe8700000 - 0xfe8800000
bootdelay=3
bootfile=uImage
consoledev=ttyS0
eth1addr=00:e0:0c:00:67:01
eth2addr=00:e0:0c:00:67:02
eth3addr=00:e0:0c:00:67:03
eth4addr=00:e0:0c:00:67:04
eth5addr=00:e0:0c:00:67:05
eth6addr=00:e0:0c:00:67:06
eth7addr=00:e0:0c:00:67:07
eth8addr=00:e0:0c:00:67:09
eth9addr=00:e0:0c:00:67:09
ethact=FM1@DTSEC2
ethaddr=00:e0:0c:00:67:00
ethprime=FM1@DTSEC2
fdtaddr=c00000
fdtfile=uImage-p4080ds.dtb
fman_ucode=eff00000
gatewayip=192.168.1.1
hvboot=setenv bootargs console=ttyS0,115200 config-addr=0xfe8900000;bootm 0xfe8700000 - 0xfe8800000
hwconfig=fsl_ddr:ctlr_intlv=cacheline,bank_intlv=cs0_cs1;fsl_fm1_xaui_phy:xfi;fsl_fm2_xaui_phy:xfi
ipaddr=192.168.1.103
loadaddr=1000000
netdev=eth0
netmask=255.255.255.0
nfsboot=setenv bootargs root=/dev/nfs rw nfsroot=$serverip:$rootpath ip=$ipaddr:$serverip:$gatewayip:$netmask:$hostname:$netdev:off console=$consoledev,$baudrate $othbootargs;tftp $loadaddr $bootfile;tftp $fdtaddr $fdtfile;bootm $loadaddr - $fdtaddr
ramboot=setenv bootargs root=/dev/ram rw console=$consoledev,$baudrate $othbootargs;tftp $ramdiskaddr $ramdiskfile;tftp $loadaddr $bootfile;tftp $fdtaddr $fdtfile;bootm $loadaddr $ramdiskaddr $fdtaddr
ramdiskaddr=2000000
ramdiskfile=fsl-image-flash-p4080ds.ext2.gz.u-boot
rootpath=/opt/nfsroot
sataboot=setenv bootargs root=/dev/sda1 rootdelay=5 rw console=$consoledev,$baudrate $othbootargs;bootm e8020000 - e8800000
serverip=192.168.1.101
stderr=serial
stdin=serial
stdout=serial
tftpflash=tftpboot $loadaddr $uboot && protect off $ubootaddr +$filesize && erase $ubootaddr +$filesize && cp.b $loadaddr $ubootaddr $filesize && protect on $ubootaddr +$filesize && cmp.b $loadaddr $ubootaddr $filesize
uboot=u-boot-P4080DS.bin
ubootaddr=0xeff80000
Environment size: 1909/8188 bytes
And attached is what I did for hypervisor deployment.
Thanks & Regards,
Peter
Hello Peter,
It reports the error "image size exceeds target window"; please check whether you flashed the uImage to the correct bank. It is probably better to begin with the default two-partition hypervisor configuration to verify the environment setup.
Thanks,
Yiping
Hi Yiping,
Thank you for your reply. I have deployed the hypervisor to the alternate bank; could this be the issue? Also, for the uImage, I chose the following file; is this correct?
/build/tmp/deploy/images/uImage
Regards,
Peter