Kernel Panic with VPP

809 Views
tdelgrande
Contributor I

Greetings,

I'm testing VPP with the DPDK plugin on an LS1043 custom board.

Increasing the number of workers in the cpu section of vpp.conf causes a kernel panic as soon as traffic traverses the device. When I test dpdk-l3fwd with different cores attached to the ports, the problem does not occur. The panic message is:

Unable to handle kernel execute from non-executable memory at virtual address ffffd791378bd948

Kernel panic - not syncing: Oops: Fatal exception in interrupt
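
For comparison, the dpdk-l3fwd tests map one core per port. A sketch of that invocation shape (the core list, port mask, and queue/core mapping below are illustrative, not my exact command):

./dpdk-l3fwd -l 1-2 -n 1 -- -p 0x3 --config="(0,0,1),(1,0,2)"

With --config="(port,queue,lcore),...", each port's RX queue is polled by exactly one lcore, so no port is ever serviced by more than one core.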

Has anyone run into this before?

Thanks in advance,

Thomas

 

3 Replies

748 Views
LFGP
NXP TechSupport

Dear @tdelgrande,

Regarding your last question:

>> Is this the limitation you mentioned?

Yes, that is what I meant.

Sorry for not being clear in my previous message.

BR

LFGP


774 Views
LFGP
NXP TechSupport

Dear @tdelgrande,

There are some limitations when the device has DPAA1 hardware: only 1, 2, or in some cases 4 workers per port are supported.
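
If VPP's automatic RX placement distributes things in a way that hits this limit, you can pin each port's RX queue to a single worker from the VPP CLI. A minimal sketch (interface names here are examples; substitute yours):

vpp# set interface rx-placement GigabitEthernet0 queue 0 worker 0
vpp# set interface rx-placement GigabitEthernet1 queue 0 worker 1

You can then verify with 'show interface rx-placement' that each port is serviced by exactly one worker.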

Please let me know the results.

BR

LFGP


765 Views
tdelgrande
Contributor I

Dear @LFGP,

Thanks for the quick reply.

My understanding is that I'm not using more than one worker per port.

The VPP configuration that causes the kernel panic is this:

cpu {
  main-core 1
  corelist-workers 2-3
}

This seems to automatically distribute some ports to worker 0 and others to worker 1:

vpp# show interface rx-placement
Thread 1 (vpp_wk_0):
  node dpdk-input:
    GigabitEthernet0 queue 0 (polling)
    GigabitEthernet2 queue 0 (polling)
    GigabitEthernet4 queue 0 (polling)
    GigabitEthernet6 queue 0 (polling)
Thread 2 (vpp_wk_1):
  node dpdk-input:
    GigabitEthernet1 queue 0 (polling)
    GigabitEthernet3 queue 0 (polling)
    GigabitEthernet5 queue 0 (polling)

If I keep 'corelist-workers 2' (a single worker for all ports), everything works fine, but with a performance penalty.
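
For completeness, the working single-worker configuration is just:

cpu {
  main-core 1
  corelist-workers 2
}

With this, all seven ports end up polling on vpp_wk_0, so throughput is limited by that one core.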

Is this the limitation you mentioned?

Thanks in advance!

 
