Hi,
I am using a P2041RDB and the fsl hypervisor. I have started two partitions running Linux with the provided hv-2p-lnx-lnx.dtb, and I wanted to access the OS through ssh. I have edited the interfaces file to provide a static IP and am able to ping the OS; however, I am unable to connect to it via ssh. I have checked the firewalls and the status of sshd, and all seems to be OK.
I was wondering if there is anything I need to change in the hypervisor config tree to enable me to connect via Ethernet?
Thanks.
May I know which SDK you are using for this? Meanwhile, can you please share your log?
Hi,
I am not using any SDK; I am directly modifying the hypervisor config tree from the Freescale public git and loading it through U-Boot. An update: I am finally able to ssh to partition 0 when fman0 and dpa-ethernet@0 through @4 are allocated to it.
However, simply allocating a dpa-ethernet node to a second partition does not allow me to ssh to that partition. I am also unable to find a way to reallocate fman0, or at least parts of it, to the second partition. Perhaps you could tell me what allocations in the hypervisor config tree are needed to form an Ethernet connection?
Unfortunately I am unable to provide the log because the development machine is not connected to this computer. I understand this makes helping more difficult, but I appreciate any assistance that can be provided. Thanks!
Hello,
Probably the best solution in your scenario is to share a fman port between the two partitions. In order to do so, you'll need to define a dpa-ethernet node as an "initialization manager" in the first partition (the one that owns fman0) and another dpa-ethernet node as a "virtual/shared controller", in the second partition. A configuration example can be found in the dpaa-ethernet driver documentation shipped with the QorIQ Linux SDK.
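Roughly, the two nodes look something like the sketch below. Take this as an outline only: the p4080 compatible strings mirror the documentation example, and the elided properties are to be filled in from the binding documentation.

/* Partition 1's device tree: the "initialization manager" node */
ethernet@1 {
    compatible = "fsl,p4080-dpa-ethernet-init", "fsl,dpa-ethernet-init";
    /* ...plus buffer pools, frame queues and the fsl,fman-mac phandle... */
};

/* Partition 2's device tree: the "virtual/shared controller" node */
ethernet@1 {
    compatible = "fsl,p4080-dpa-ethernet-shared", "fsl,dpa-ethernet-shared";
    /* ...plus its own buffer pools and statically-defined frame queues... */
};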
Hope This Helps,
Bogdan
Thanks for your help. I am trying that right now. Just one question: am I supposed to be able to access the virtual/shared controller in the second partition from external connections directly, or do I have to go through the first partition? Right now it seems that I can see the dpa-ethernet node in the first partition but not the node in the second partition.
Cheers
Hi,
Yes, that should be accessible directly - that's the very intent. If the node is configured correctly, you should be able to at least see the interface in the second partition, using "ifconfig -a".
The reason the first partition is involved in this process is because some fman0 resources are not partitioned/virtualized, so their effective ownership belongs to the first partition. This is why the "initialization manager" node is needed in the first partition's device tree.
There are a couple of configuration adjustments you have to make in order to divert ingress traffic to the second partition.
The first partition retains ownership of the controller's default ingress queue, so you need to define a different set of queues for the shared controller to use in the second partition. Those are specified in the second partition's device tree.
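A sketch of what that node could look like in the second partition's tree. The fqid values here are placeholders I made up; choose ids that don't clash with those already in use, and check the exact property layout against the binding documentation:

ethernet@1 {
    compatible = "fsl,p4080-dpa-ethernet-shared", "fsl,dpa-ethernet-shared";
    fsl,bman-buffer-pools = <&bp2>;
    /* each <fqid-base count> pair defines a queue range: the error
     * queues first, then the statically-defined ingress queues */
    fsl,qman-frame-queues-rx = <0x110 1 0x111 1>;
    fsl,qman-frame-queues-tx = <0x112 1 0x113 1>;
    fsl,fman-mac = <&enet1>;
};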
Finally, in order to actually direct traffic to your statically-defined queues in the second partition, you need to use the "fmc" tool from the SDK.
This whole configuration process is explained at a bit more length in the documentation (I believe the section is called "Inter-partition MAC-less and Shared-MAC", under the "Virtual/Shared Controller" chapter).
Please let me know if/how it works.
Bogdan
Hi Bogdan,
I located the fmc folder and went through the steps in the documentation; however, it results in an "Invalid Engine Name" error. I have set fm1-gb1 as the interface in hv2p_config_p4_shared_mac.xml. I have also tried to set the ethernet node as an initialization manager as shown in the documentation, but that resulted in partition 1 being unable to detect that ethernet node. I am not really sure what I am still missing.
Apart from the above, I was wondering if there is a way to set up private Ethernet interfaces for each partition instead of using Shared-MAC.
Thanks for all your assistance. Really appreciate it.
Cheers
Hello,
> I located the fmc folder and went through the steps in the documentation; however, it results in an "Invalid Engine Name" error.
[BH] Could this be an index mismatch? (in the .xml, the fman indices start from 0)
> I have also tried to set the ethernet node as an initialization manager as shown in the documentation, but that resulted in partition 1 being unable to detect that ethernet node.
[BH] That would be the intended behaviour (assuming I understood your action). Declaring a dpa-ethernet node as an initialization manager in partition 1 merely initializes the fman port; it will not immediately make it visible as a netdevice in either partition. (What an initialization manager does is probe the non-partitionable fman resources on behalf of a different partition that doesn't own those resources.)
In order to make it visible to partition 2 - as I understood you were trying to do - you need to declare a dpa-ethernet node in partition 2's device tree, with statically-defined frame queue ids and buffer pools.
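The buffer pool that node references also needs to be declared in partition 2's tree. As a rough sketch only: the bpid and the fsl,bpool-ethernet-cfg values below are placeholders, and the exact cell layout should be taken from the bpool binding in the SDK documentation:

bp2: buffer-pool@2 {
    compatible = "fsl,p4080-bpool", "fsl,bpool";
    fsl,bpid = <2>;
    /* buffer count / buffer size / base address; zero values let
     * the driver seed the pool dynamically */
    fsl,bpool-ethernet-cfg = <0 0 0 192 0 0>;
};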
> I was wondering if there is a way to set up private Ethernet interfaces for each partition instead of using Shared-MAC.
[BH] I'm afraid that's not possible with the current SDK when the ports belong to the same fman, as in your case. It's a limitation we are aware of. For the time being, I'll try to help you through this configuration exercise; it makes for good feedback for us as well.
Thank you,
Bogdan
Hi,
Yes, I believe you were right about the index mismatch.
In the config file, I have initialized it with fm0, port type 1G, number 1, which I assume corresponds to the fm1-gb1 given in the P2041RDB example.
Within the policy file, I have changed the MAC addresses at queue bases 0x210 and 0x220 as required under the eth_dest_clsf classification. I have no idea how to add the IP address of the 2nd partition to arp_clsf as stated in the documentation. Upon running fmc with the above changes, I received:
ERR: Invocation of FM_PCD_KgSchemeSet for fm0/port/1G/1/dist/garbage_dist/direct failed
ERR: Invocation of FM_PORT_DeletePCD for fm0/port/1G/1 failed with error code 0x00130002
The hypervisor terminal also shows:
MAJOR FM-PCD_Error [CPU00, drivers/net/dpa/NetCommSw/Peripherals/FM/HC/hc.c:414 FmHcPcdKgSetScheme]: Resource Already Exists;
Scheme is already used
MINOR FM-PCD Error [CPU00, drivers/net/dpa/NetCommSw/src/wrapper/lnxwrp_ioctls_fm.c: LnxwrpFmPortIOCTL]: Invalid Selection;
IOCTL cmd (0x2000e15b}:(0xe1:0x5b)!
I have checked the MAC addresses to be changed and they are accurate. Do you perhaps have an idea of what is happening?
With regards to the ethernet node as an initialization manager: from what I understand of the documentation, I should include the following in partition 1's device tree, correct?
ethernet@1 {
    compatible = "fsl,p4080-dpa-ethernet-init",
                 "fsl,dpa-ethernet-init";
    fsl,bman-buffer-pools = <&bp1>;
    fsl,qman-frame-queues-rx = <10 1 11 1>;
    fsl,qman-frame-queues-tx = <12 1 13 1>;
    fsl,fman-mac = <&enet1>;
};
How then should I go about initializing this in partition 2, considering the frame queue ids and buffer pools are already declared in partition 1?
Once again, thanks a lot for your continued assistance. Really appreciate it.
Cheers
Hello,
Maybe this was a long time ago, but is there a solution available for your problem/question?
Best Regards
Daniel
Hi, it was a version issue. Updating the kernel and root file system to the latest version did the trick for me.
Thanks for checking
Hi,
I would propose rewinding the discussion just a bit. I've been trying to take some configuration shortcuts, but instead things turned out overly complicated for your scenario. In addition, I propose moving the thread to email, and eventually coming back here to post the conclusions for future reference. Are you okay with that?
Thank you,
Bogdan