I have downloaded the YOLOv3 and YOLOv3-Tiny models from the ONNX Model Zoo and am running them on an i.MX8M Plus devkit from Variscite. According to the "i.MX Machine Learning User's Guide, Rev. L5.4.47_2.2.0", these models have been tested.
The issue is that running YOLOv3 or YOLOv3-Tiny on the CPU is in fact faster than running on the NPU. Can someone explain what I am doing wrong and why this is so? Thanks.
Running on NPU
onnx_test_runner -j 1 -c 1 -r 1 -e vsi_npu ./tiny-yolov3/
2020-12-09 22:30:50.157550091 [E:onnxruntime:Default, runner.cc:217 operator()] Test tiny-yolov3 finished in 0.402 seconds, took 0.402 for each input
Running on CPU
onnx_test_runner -j 1 -c 1 -r 1 -e cpu ./tiny-yolov3/
2020-12-09 22:32:08.197318388 [E:onnxruntime:Default, runner.cc:217 operator()] Test tiny-yolov3 finished in 0.367 seconds, took 0.367 for each input
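For what it's worth, here is a minimal timing sketch I plan to use to separate the first inference (which would include the NPU's warm-up/graph-compile time) from steady-state runs, since a single-repeat run lumps them together. It assumes the Python onnxruntime wheel from the BSP is installed on the board; the provider name "VsiNpuExecutionProvider", the model path, and the input names/shapes are assumptions (taken from the ONNX zoo tiny-yolov3 signature), so they should be checked against onnxruntime.get_available_providers() and sess.get_inputs().

# Sketch: separate the first (warm-up) inference from steady-state runs.
# Assumptions: Python onnxruntime wheel from the BSP is installed, the NPU
# provider is registered as "VsiNpuExecutionProvider" (verify with
# onnxruntime.get_available_providers()), and the input names/shapes match
# the ONNX zoo tiny-yolov3 model.
import time
import numpy as np
import onnxruntime as ort

sess = ort.InferenceSession(
    "tiny-yolov3/model.onnx",                      # path is an assumption
    providers=["VsiNpuExecutionProvider", "CPUExecutionProvider"],
)

feeds = {
    "input_1": np.random.rand(1, 3, 416, 416).astype(np.float32),
    "image_shape": np.array([[416.0, 416.0]], dtype=np.float32),
}

t0 = time.time()
sess.run(None, feeds)                              # first run: includes warm-up/compile
print("first run: %.3f s" % (time.time() - t0))

t0 = time.time()
for _ in range(10):                                # steady-state runs
    sess.run(None, feeds)
print("steady state: %.3f s per run" % ((time.time() - t0) / 10))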
I have noticed that running on the NPU prints a bunch of "unsupported node" messages:
2020-12-09 22:30:49.538789386 [W:onnxruntime:Default, vsi_npu_execution_provider.cc:174 GetUnsupportedNodeIndices] unsupported node:Conv
2020-12-09 22:30:49.538949517 [W:onnxruntime:Default, vsi_npu_execution_provider.cc:174 GetUnsupportedNodeIndices] unsupported node:LeakyRelu
2020-12-09 22:30:49.538989393 [W:onnxruntime:Default, vsi_npu_execution_provider.cc:174 GetUnsupportedNodeIndices] unsupported node:MaxPool
2020-12-09 22:30:49.539026520 [W:onnxruntime:Default, vsi_npu_execution_provider.cc:174 GetUnsupportedNodeIndices] unsupported node:Conv
2020-12-09 22:30:49.539057396 [W:onnxruntime:Default, vsi_npu_execution_provider.cc:174 GetUnsupportedNodeIndices] unsupported node:LeakyRelu
2020-12-09 22:30:49.539090272 [W:onnxruntime:Default, vsi_npu_execution_provider.cc:174 GetUnsupportedNodeIndices] unsupported node:MaxPool
2020-12-09 22:30:49.539124273 [W:onnxruntime:Default, vsi_npu_execution_provider.cc:174 GetUnsupportedNodeIndices] unsupported node:Conv
2020-12-09 22:30:49.539154775 [W:onnxruntime:Default, vsi_npu_execution_provider.cc:174 GetUnsupportedNodeIndices] unsupported node:LeakyRelu
2020-12-09 22:30:49.539187026 [W:onnxruntime:Default, vsi_npu_execution_provider.cc:174 GetUnsupportedNodeIndices] unsupported node:MaxPool
2020-12-09 22:30:49.539221027 [W:onnxruntime:Default, vsi_npu_execution_provider.cc:174 GetUnsupportedNodeIndices] unsupported node:Conv
2020-12-09 22:30:49.539250903 [W:onnxruntime:Default, vsi_npu_execution_provider.cc:174 GetUnsupportedNodeIndices] unsupported node:LeakyRelu
2020-12-09 22:30:49.539282404 [W:onnxruntime:Default, vsi_npu_execution_provider.cc:174 GetUnsupportedNodeIndices] unsupported node:MaxPool
2020-12-09 22:30:49.539315906 [W:onnxruntime:Default, vsi_npu_execution_provider.cc:174 GetUnsupportedNodeIndices] unsupported node:Conv
2020-12-09 22:30:49.539345907 [W:onnxruntime:Default, vsi_npu_execution_provider.cc:174 GetUnsupportedNodeIndices] unsupported node:LeakyRelu
2020-12-09 22:30:49.539377283 [W:onnxruntime:Default, vsi_npu_execution_provider.cc:174 GetUnsupportedNodeIndices] unsupported node:MaxPool
2020-12-09 22:30:49.539410534 [W:onnxruntime:Default, vsi_npu_execution_provider.cc:174 GetUnsupportedNodeIndices] unsupported node:Conv
2020-12-09 22:30:49.539440160 [W:onnxruntime:Default, vsi_npu_execution_provider.cc:174 GetUnsupportedNodeIndices] unsupported node:LeakyRelu
2020-12-09 22:30:49.539472037 [W:onnxruntime:Default, vsi_npu_execution_provider.cc:174 GetUnsupportedNodeIndices] unsupported node:MaxPool
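Since these warnings list op types that presumably fall back to the CPU, here is a small sketch (assuming the `onnx` Python package; the model path is an assumption) that counts the op types in the graph, to see how much of the model those unsupported nodes cover:

# Sketch: count the op types in the ONNX graph to see which ones the
# vsi_npu provider reports as unsupported and would fall back to the CPU.
# Assumes the `onnx` Python package; adjust the model path as needed.
from collections import Counter
import onnx

model = onnx.load("tiny-yolov3/model.onnx")  # path is an assumption
op_counts = Counter(node.op_type for node in model.graph.node)
for op_type, count in sorted(op_counts.items()):
    print(op_type, count)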
If I try to run using armnn as the execution provider, I get an error.
onnx_test_runner -j 1 -c 1 -r 1 -e armnn ./tiny-yolov3/
2020-12-09 22:42:08.375386673 [E:onnxruntime:Default, runner.cc:224 RunTests] Test tiny-yolov3 failed:Node:PermuteNCHW_51 Output:input_50 [ShapeInferenceError] Can't merge shape info. Both source and target dimension have values but they differ. Source=256 Target=128 Dimension=1
result:
Models: 1
Total test cases: 1
Succeeded: 0
Not implemented: 0
Failed: 1
Got exception while running: 1
Stats by Operator type:
Not implemented(0):
Failed:
Failed Test Cases:tiny-yolov3 of unknown version
test tiny-yolov3 failed, please fix it
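To check whether the shape mismatch comes from the model itself rather than from the ArmNN execution provider, here is a quick sketch (again assuming the `onnx` Python package; the model path is an assumption) that runs the ONNX checker and shape inference outside of onnxruntime:

# Sketch: validate the model and run ONNX shape inference outside of
# onnxruntime, to see whether the conflicting-dimension error is
# reproducible without the ArmNN execution provider.
# Assumes the `onnx` Python package; the model path is an assumption.
import onnx
from onnx import shape_inference

model = onnx.load("tiny-yolov3/model.onnx")
onnx.checker.check_model(model)                 # structural validation
inferred = shape_inference.infer_shapes(model)  # conflicting dims surface here as errors/warnings
print("checker and shape inference finished")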
@manish_bajaj I created a project in NXP support. It says the Project Pattern is "39735651" and the Project Name is "Hands Down". The project creation wizard didn't ask me to enter any customer name.
@rajaz,
Can you share more detail, such as your company name?
This community might not be the correct place for your question to be answered.
-Manish