SW: imx-yocto-5.4.70-2.3.0
HW: i.MX8M Plus
We built an image segmentation model in ONNX format.
The average inference time on the CPU is 2.77394 s per frame. With the VSI NPU execution provider (EP), it rises to 21.6163 s per frame.
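For reference, the per-frame average is measured roughly as in the sketch below; the input/output tensor names, input size, and preprocessing are placeholders rather than the exact code in the attached main.cpp.

```cpp
// Minimal timing sketch (placeholders, not the attached main.cpp):
// reads frames from video.mp4 and averages Ort::Session::Run() latency on the CPU EP.
#include <chrono>
#include <iostream>
#include <vector>
#include <opencv2/opencv.hpp>
#include <opencv2/dnn.hpp>
#include <onnxruntime_cxx_api.h>

int main() {
    Ort::Env env(ORT_LOGGING_LEVEL_WARNING, "seg");
    Ort::SessionOptions opts;                       // default session options = CPU execution provider
    Ort::Session session(env, "model.onnx", opts);

    cv::VideoCapture cap("video.mp4");
    cv::Mat frame;
    double total_s = 0.0;
    int frames = 0;

    Ort::MemoryInfo mem = Ort::MemoryInfo::CreateCpu(OrtArenaAllocator, OrtMemTypeDefault);
    const char* in_names[]  = {"input"};            // placeholder tensor names
    const char* out_names[] = {"output"};

    while (cap.read(frame)) {
        // Placeholder preprocessing: resize/normalize into an NCHW float blob.
        cv::Mat blob = cv::dnn::blobFromImage(frame, 1.0 / 255.0, cv::Size(512, 512));
        std::vector<int64_t> shape = {1, 3, 512, 512};
        Ort::Value input = Ort::Value::CreateTensor<float>(
            mem, blob.ptr<float>(), blob.total(), shape.data(), shape.size());

        auto t0 = std::chrono::steady_clock::now();
        auto outputs = session.Run(Ort::RunOptions{nullptr}, in_names, &input, 1, out_names, 1);
        auto t1 = std::chrono::steady_clock::now();

        total_s += std::chrono::duration<double>(t1 - t0).count();
        ++frames;
    }
    if (frames > 0)
        std::cout << "average inference time: " << total_s / frames << " s/frame\n";
    return 0;
}
```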
Attached files:
- main.cpp: inference on CPU
- main_npu.cpp: inference on NPU
- video.mp4: test video
- model.onnx: image segmentation model
Dear all,
Can you tell me how this is going?
The same issue has been raised by my customer.
Should they build the code with the corresponding execution provider enabled before each CPU/GPU/NPU test?
Regards,
Jack
What are your test steps, and how did you enable the NPU? Please send me the steps so I can reproduce this on my i.MX8MP board.
Just build the code and run it; it reads the video file and performs segmentation on each frame.
To enable the NPU, we add the VSI NPU EP as described in IMX-MACHINE-LEARNING-UG.pdf. You can refer to my main_npu.cpp.
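Roughly, the only difference from the CPU version is appending the EP to the session options, as in the sketch below. The factory header and function name (vsi_npu_provider_factory.h, OrtSessionOptionsAppendExecutionProvider_VsiNpu) are assumptions based on the pattern in IMX-MACHINE-LEARNING-UG.pdf; please verify them against the onnxruntime headers shipped in the BSP.

```cpp
// Sketch (not the attached main_npu.cpp): the assumed difference from the CPU
// version is appending the VSI NPU execution provider to the session options.
// Header and factory function names below are assumptions; check them against
// the onnxruntime fork in the BSP (see IMX-MACHINE-LEARNING-UG.pdf).
#include <stdexcept>
#include <onnxruntime_cxx_api.h>
#include <vsi_npu_provider_factory.h>   // assumed header name for the VSI NPU EP factory

Ort::Session make_session(Ort::Env& env, const char* model_path, bool use_npu) {
    Ort::SessionOptions opts;
    if (use_npu) {
        // Append the VSI NPU EP (device id 0); nodes it cannot handle fall back to the CPU EP.
        if (OrtSessionOptionsAppendExecutionProvider_VsiNpu(opts, 0) != nullptr) {
            throw std::runtime_error("failed to append VSI NPU execution provider");
        }
    }
    return Ort::Session(env, model_path, opts);
}
```

Built this way, the same binary can run either test by toggling use_npu at run time, instead of rebuilding once per execution provider.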
Do you mean the NPU performance is worse than the CPU? It seems that the VSI NPU EP actually calls the i.MX8MP GPU. I have a table comparing GPU and NPU:
You can reproduce it with the attached files to see the issue.
Please send me your detailed test steps so I can reproduce this.