
SSD MobileNet Inference using ArmNN on i.MX8qmmek

Question asked by Ullas Bharadwaj on Jun 24, 2020
Latest reply on Jul 10, 2020 by Manish Bajaj

Hello Community,

 

I am using the i.MX8qmmek with BSP 5.4.3_2.0.0. I have a custom C++ application that runs inference using TfLite and OpenCV, and the TfLite application was able to use GPU acceleration. Now I would like to use ArmNN as my inference engine.
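
For context, my current pipeline looks roughly like this (a simplified sketch, not my exact code; the model path is a placeholder and, as far as I understand, GPU acceleration on this BSP goes through the NNAPI delegate):

#include <memory>
#include "tensorflow/lite/interpreter.h"
#include "tensorflow/lite/kernels/register.h"
#include "tensorflow/lite/model.h"

void RunTfLite()
{
    // Load the SSD MobileNet TFLite model (path is a placeholder)
    std::unique_ptr<tflite::FlatBufferModel> model =
        tflite::FlatBufferModel::BuildFromFile("detect.tflite");

    tflite::ops::builtin::BuiltinOpResolver resolver;
    std::unique_ptr<tflite::Interpreter> interpreter;
    tflite::InterpreterBuilder(*model, resolver)(&interpreter);

    // Hardware acceleration is enabled through the NNAPI delegate on this BSP
    interpreter->UseNNAPI(true);
    interpreter->AllocateTensors();

    // Fill interpreter->typed_input_tensor<uint8_t>(0) with the preprocessed
    // 300x300x3 image from OpenCV, then run inference
    interpreter->Invoke();
}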

 

However, the Linux user guide does not provide an example of SSD MobileNet inference. When I tried the TfLiteMobileNetSsd-Armnn demo application, I got the following error:

 
ArmNN v20190800
Failed to parse operator #0 within subgraph #0 error: Operator not supported. subgraph:0 operator:0 opcode_index:3 opcode:6 / DEQUANTIZE at function ParseUnsupportedOperator [/usr/src/debug/armnn/19.08-]
Armnn Error: Buffer #176 has 0 bytes. For tensor: [1,300,300,3] expecting: 1080000 bytes and 270000 elements. at function CreateConstTensor [/usr/src/debug/armnn/19.08-r1/git/src/armnnTfLiteParser/TfLit]

 

So, is it possible to run any SSD MobileNet model using ArmNN on the GPU? Is there sample code for doing that?
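
For reference, this is roughly what I am hoping to do with the ArmNN C++ API (only a sketch based on my reading of the armnnTfLiteParser and runtime headers; the model path and tensor names are placeholders):

#include <armnn/ArmNN.hpp>
#include <armnnTfLiteParser/ITfLiteParser.hpp>
#include <vector>

void RunArmnnSsd()
{
    // Parse the SSD MobileNet TFLite model (path is a placeholder)
    auto parser = armnnTfLiteParser::ITfLiteParser::Create();
    armnn::INetworkPtr network = parser->CreateNetworkFromBinaryFile("detect.tflite");

    // Binding info for the input and one output tensor (names are placeholders)
    auto inputBinding  = parser->GetNetworkInputBindingInfo(0, "normalized_input_image_tensor");
    auto outputBinding = parser->GetNetworkOutputBindingInfo(0, "TFLite_Detection_PostProcess");

    // Create the runtime and optimize for the GPU backend,
    // falling back to the reference CPU backend where needed
    armnn::IRuntime::CreationOptions options;
    armnn::IRuntimePtr runtime = armnn::IRuntime::Create(options);
    armnn::IOptimizedNetworkPtr optNet =
        armnn::Optimize(*network, {armnn::Compute::GpuAcc, armnn::Compute::CpuRef},
                        runtime->GetDeviceSpec());

    armnn::NetworkId networkId;
    runtime->LoadNetwork(networkId, std::move(optNet));

    // Input image and one output buffer (float sizes shown; the quantized model uses uint8)
    std::vector<float> inputData(1 * 300 * 300 * 3);
    std::vector<float> boxes(1 * 10 * 4);

    armnn::InputTensors inputTensors{
        { inputBinding.first, armnn::ConstTensor(inputBinding.second, inputData.data()) } };
    armnn::OutputTensors outputTensors{
        { outputBinding.first, armnn::Tensor(outputBinding.second, boxes.data()) } };

    runtime->EnqueueWorkload(networkId, inputTensors, outputTensors);
}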

 

Best Regards

Ullas Bharadwaj
