YOLO/MobileNetV2 tflite code GPU acceleration IMX8 Board C++


384 Views
wamiqraza
Contributor I

This ticket is a continuation of the ticket: https://community.nxp.com/t5/i-MX-Processors/YOLO-tflite-code-Request-to-run-on-IMX8-Board-C/m-p/176...
--------------------------------------------------------------------------------------------
I have gone through the NXP documentation and was able to deploy a pre-trained MobileNetV2, a MobileNetV2 trained on a custom dataset (both int8 quantized), and then YOLOv8 on a custom dataset (float32 and int8 quantized, respectively).

I am using a GStreamer pipeline. As this is a production project, I will only share some details here; for privacy reasons I can't disclose the full code in public. I would like to request a meeting if the team can contact me via my email: wamiq.raza@kineton.it

The product is about to launch, and one of the barriers we are facing is in detection: the model has low FPS and is not utilizing the GPU.

Below is the terminal output when loading MobileNetV2 for inference.
Streams opened successfully!
Vx delegate: allowed_cache_mode set to 0.
Vx delegate: allowed_builtin_code set to 0.
Vx delegate: error_during_init set to 0.
Vx delegate: error_during_prepare set to 0.
Vx delegate: error_during_invoke set to 0.
ERROR: Fallback unsupported op 32 to TfLite
INFO: Created TensorFlow Lite XNNPACK delegate for CPU.
W [HandleLayoutInfer:268]Op 162: default layout inference pass.
W [HandleLayoutInfer:268]Op 162: default layout inference pass.
W [HandleLayoutInfer:268]Op 162: default layout inference pass.
W [HandleLayoutInfer:268]Op 162: default layout inference pass.
W [HandleLayoutInfer:268]Op 162: default layout inference pass.
W [HandleLayoutInfer:268]Op 162: default layout inference pass.
W [HandleLayoutInfer:268]Op 162: default layout inference pass.
W [HandleLayoutInfer:268]Op 162: default layout inference pass.
W [HandleLayoutInfer:268]Op 162: default layout inference pass.
W [HandleLayoutInfer:268]Op 162: default layout inference pass.
W [HandleLayoutInfer:268]Op 162: default layout inference pass.
W [HandleLayoutInfer:268]Op 162: default layout inference pass.
W [HandleLayoutInfer:268]Op 162: default layout inference pass.
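The "ERROR: Fallback unsupported op 32 to TfLite" line indicates that at least one operator was rejected by the VX delegate and falls back to the CPU (XNNPACK), which would explain the low FPS and idle GPU/NPU. As a diagnostic, a minimal sketch (assuming the standard TFLite C++ Interpreter API; the helper name PrintDelegationSummary is mine) of how to check, after ModifyGraphWithDelegate(), how much of the graph was actually absorbed by the delegate:

#include <iostream>
#include "tensorflow/lite/builtin_ops.h"
#include "tensorflow/lite/interpreter.h"

// Counts the nodes in the execution plan that were absorbed by a delegate.
// Nodes taken over by the VX delegate appear as a single DELEGATE kernel;
// everything else still runs on the CPU.
void PrintDelegationSummary(const tflite::Interpreter& interpreter) {
  int delegated = 0, total = 0;
  for (int node_index : interpreter.execution_plan()) {
    const auto* node_and_reg = interpreter.node_and_registration(node_index);
    if (node_and_reg == nullptr) continue;
    ++total;
    if (node_and_reg->second.builtin_code == kTfLiteBuiltinDelegate) {
      ++delegated;
    }
  }
  std::cout << delegated << " of " << total
            << " nodes in the execution plan are delegate kernels" << std::endl;
}

If the delegate-kernel count is low relative to the total, the unsupported ops keep part of the pipeline on the CPU even though the delegate was applied.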
 
Below is the terminal output when loading YOLOv8 for inference.
Streams opened successfully!
Vx delegate: allowed_cache_mode set to 0.
Vx delegate: allowed_builtin_code set to 0.
Vx delegate: error_during_init set to 0.
Vx delegate: error_during_prepare set to 0.
Vx delegate: error_during_invoke set to 0.
W [HandleLayoutInfer:268]Op 162: default layout inference pass.
W [HandleLayoutInfer:268]Op 162: default layout inference pass.
W [HandleLayoutInfer:268]Op 162: default layout inference pass.
W [HandleLayoutInfer:268]Op 162: default layout inference pass.
W [HandleLayoutInfer:268]Op 162: default layout inference pass.


Here is the part of the code that loads the model:

std::unique_ptr<tflite::Interpreter> interpreter;
// ================== Load model ==================
std::unique_ptr<tflite::FlatBufferModel> model =
    tflite::FlatBufferModel::BuildFromFile(modelPath.c_str());

std::cout << std::endl << "Model Loaded!" << std::endl << std::endl;

TFLITE_MINIMAL_CHECK(model != nullptr);
// ================== Define Interpreter ==================
tflite::ops::builtin::BuiltinOpResolver resolver;
tflite::InterpreterBuilder(*model, resolver)(&interpreter);
TFLITE_MINIMAL_CHECK(interpreter != nullptr);
// ================== Delegating GPU ==================
TfLiteDelegatePtr ptr = CreateTfLiteDelegate();
TFLITE_MINIMAL_CHECK(interpreter->ModifyGraphWithDelegate(std::move(ptr)) ==
                     kTfLiteOk);
// ================== Allocate tensor buffers ==================
TFLITE_MINIMAL_CHECK(interpreter->AllocateTensors() == kTfLiteOk);
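For context, CreateTfLiteDelegate() is our own helper that returns the VX delegate. A minimal sketch of how such a helper can be written (not our exact implementation), assuming the standard TFLite external-delegate API and the default /usr/lib/libvx_delegate.so path from the NXP BSP image:

#include <memory>
#include "tensorflow/lite/delegates/external/external_delegate.h"
#include "tensorflow/lite/interpreter.h"

using TfLiteDelegatePtr =
    std::unique_ptr<TfLiteDelegate, void (*)(TfLiteDelegate*)>;

// Loads the VX (GPU/NPU) delegate as a TFLite external delegate.
// The library path is an assumption based on the default BSP layout;
// adjust it if your rootfs installs libvx_delegate.so elsewhere.
TfLiteDelegatePtr CreateTfLiteDelegate() {
  TfLiteExternalDelegateOptions options =
      TfLiteExternalDelegateOptionsDefault("/usr/lib/libvx_delegate.so");
  TfLiteDelegate* delegate = TfLiteExternalDelegateCreate(&options);
  return TfLiteDelegatePtr(
      delegate, [](TfLiteDelegate* d) { TfLiteExternalDelegateDelete(d); });
}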

---------------------------------------------------------------------------------------
 
I would like to arrange a meeting with the deep learning model deployment team and get suggestions on the above details.

Please let me know if you need any additional details or information.

0 Kudos
Reply
1 Reply

359 Views
brian14
NXP TechSupport

Hi @wamiqraza,

I think that we can continue on your other thread. 

Have a great day! 

0 Kudos
Reply