YOLO tflite code Request to run on IMX8 Board C++


1,074 Views
wamiqraza
Contributor I

Hi all,

Can someone provide complete C++ code for deploying a tflite model on the board? I am having trouble finding the code, and I can't find anything in the YOLO repository for the tflite model. I wrote my own version, but it has a lot of errors, so I don't think it is correct.


Thanks in advance

0 Kudos
5 Replies

1,045 Views
brian14
NXP TechSupport

Hi @wamiqraza

You can use the following code as an example:
tensorflow-imx/tensorflow/lite/examples/label_image/label_image.cc at lf-6.1.36_2.1.0 · nxp-imx/tens...
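
That example follows the standard TensorFlow Lite C++ flow: load the model, build the interpreter, fill the input tensor, invoke, and read the output. As a very rough sketch of that flow (not the example itself; the model path and the pre/post-processing below are only placeholders):

#include <memory>

#include "tensorflow/lite/interpreter.h"
#include "tensorflow/lite/kernels/register.h"
#include "tensorflow/lite/model.h"

int main() {
  // Load the .tflite flatbuffer ("model.tflite" is a placeholder path).
  auto model = tflite::FlatBufferModel::BuildFromFile("model.tflite");
  if (!model) return 1;

  // Build the interpreter with the builtin op resolver and allocate tensors.
  tflite::ops::builtin::BuiltinOpResolver resolver;
  std::unique_ptr<tflite::Interpreter> interpreter;
  tflite::InterpreterBuilder(*model, resolver)(&interpreter);
  if (!interpreter || interpreter->AllocateTensors() != kTfLiteOk) return 1;

  // Fill the input tensor with preprocessed image data, then run inference.
  float* input = interpreter->typed_input_tensor<float>(0);
  // ... copy resized/normalized pixels into `input` here ...
  if (interpreter->Invoke() != kTfLiteOk) return 1;

  // Read back the output tensor and post-process (argmax, NMS, ...).
  const float* output = interpreter->typed_output_tensor<float>(0);
  (void)input;
  (void)output;
  return 0;
}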

I hope this information will be helpful.

Have a great day!

0 Kudos

1,024 Views
wamiqraza
Contributor I

@brian14 Thank you for the reference. I have also gone through the NXP documentation and was able to deploy a pre-trained MobileNetV2 and a MobileNetV2 trained on a custom dataset (both int8 quantized), and then YOLOv8 on a custom dataset (float32 and int8 quantized, respectively). I am using a GStreamer pipeline. As this is a production project, I will share some details here, but for privacy reasons I can't disclose the full code in public. I would like to request a meeting; the team can contact me via my email: wamiq.raza@kineton.it

The product is about to launch, and one of the barriers we are facing is in detection: the model runs at low FPS and is not utilizing the GPU.

 

Here is the part of the code that loads the model:

std::unique_ptr<tflite::Interpreter> interpreter;
// ================== Load model ==================
std::unique_ptr<tflite::FlatBufferModel> model =
    tflite::FlatBufferModel::BuildFromFile(modelPath.c_str());

std::cout << std::endl << "Model Loaded!" << std::endl << std::endl;

TFLITE_MINIMAL_CHECK(model != nullptr);
// ================== Define Interpreter ==================
tflite::ops::builtin::BuiltinOpResolver resolver;
tflite::InterpreterBuilder(*model, resolver)(&interpreter);
TFLITE_MINIMAL_CHECK(interpreter != nullptr);
// ================== Delegating GPU ==================
TfLiteDelegatePtr ptr = CreateTfLiteDelegate();
TFLITE_MINIMAL_CHECK(interpreter->ModifyGraphWithDelegate(std::move(ptr)) ==
                     kTfLiteOk);
// ================== Allocate tensor buffers ==================
TFLITE_MINIMAL_CHECK(interpreter->AllocateTensors() == kTfLiteOk);
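
The CreateTfLiteDelegate() helper itself is not shown above. As a simplified sketch (not our production code), a VX delegate can be created through TFLite's external-delegate API roughly like this; the delegate library path is an assumption and may differ between BSP images:

#include <memory>

#include "tensorflow/lite/c/common.h"
#include "tensorflow/lite/delegates/external/external_delegate.h"

using TfLiteDelegatePtr =
    std::unique_ptr<TfLiteDelegate, void (*)(TfLiteDelegate*)>;

// Sketch only: create the VX (NPU/GPU) delegate via the external-delegate API.
TfLiteDelegatePtr CreateTfLiteDelegate() {
  TfLiteExternalDelegateOptions options =
      TfLiteExternalDelegateOptionsDefault("/usr/lib/libvx_delegate.so");
  TfLiteDelegate* delegate = TfLiteExternalDelegateCreate(&options);
  return TfLiteDelegatePtr(delegate, [](TfLiteDelegate* d) {
    TfLiteExternalDelegateDelete(d);
  });
}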

 

Below is the terminal output when loading MobileNetV2 for inference.

Streams opened successfully!
Vx delegate: allowed_cache_mode set to 0.
Vx delegate: allowed_builtin_code set to 0.
Vx delegate: error_during_init set to 0.
Vx delegate: error_during_prepare set to 0.
Vx delegate: error_during_invoke set to 0.
ERROR: Fallback unsupported op 32 to TfLite
INFO: Created TensorFlow Lite XNNPACK delegate for CPU.
W [HandleLayoutInfer:268]Op 162: default layout inference pass.
W [HandleLayoutInfer:268]Op 162: default layout inference pass.
W [HandleLayoutInfer:268]Op 162: default layout inference pass.
W [HandleLayoutInfer:268]Op 162: default layout inference pass.
W [HandleLayoutInfer:268]Op 162: default layout inference pass.
W [HandleLayoutInfer:268]Op 162: default layout inference pass.
W [HandleLayoutInfer:268]Op 162: default layout inference pass.
W [HandleLayoutInfer:268]Op 162: default layout inference pass.
W [HandleLayoutInfer:268]Op 162: default layout inference pass.
W [HandleLayoutInfer:268]Op 162: default layout inference pass.
W [HandleLayoutInfer:268]Op 162: default layout inference pass.
W [HandleLayoutInfer:268]Op 162: default layout inference pass.
W [HandleLayoutInfer:268]Op 162: default layout inference pass.

 

Below is the terminal output when loading YOLOv8 for inference.

Streams opened successfully!
Vx delegate: allowed_cache_mode set to 0.
Vx delegate: allowed_builtin_code set to 0.
Vx delegate: error_during_init set to 0.
Vx delegate: error_during_prepare set to 0.
Vx delegate: error_during_invoke set to 0.
W [HandleLayoutInfer:268]Op 162: default layout inference pass.
W [HandleLayoutInfer:268]Op 162: default layout inference pass.
W [HandleLayoutInfer:268]Op 162: default layout inference pass.
W [HandleLayoutInfer:268]Op 162: default layout inference pass.
W [HandleLayoutInfer:268]Op 162: default layout inference pass.

---------------------------------------------------------------------------------------

 

I would like to arrange a meeting with the deep learning model deployment team and get suggestions on the above details.

Please let me know if you need additional details or information.

0 Kudos

1,007 Views
brian14
NXP TechSupport

Hi @wamiqraza

Thank you for your reply.

To analyze the problem in more detail, I would like to check your model. It seems that it is not correctly optimized for the i.MX processors.
Did you train your model using the eIQ Toolkit?

On the other hand, for a meeting and personalized support you can check this link:
Professional Support for Processors and Microcontrollers | NXP Semiconductors

Have a great day!

0 Kudos

991 Views
wamiq_raza
Contributor I

Hi @brian14,

Thank you for the reply.

No, I didn't build the model using the eIQ Toolkit. I trained and converted YOLOv8 on a custom dataset using their GitHub repository (the link is below for your reference), and quantized the model as well.

I have checked the eIQ Toolkit and can't find a built-in model for YOLOv8 or YOLOv5; it has MobileNet and YOLOv4.

https://github.com/ultralytics/ultralytics

The quantization command was:

yolo export model='best.pt' format=tflite int8=True
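
In case it helps your check: a small tool like the sketch below (the model filename is just a placeholder) can dump the input/output tensor types and quantization parameters of the exported .tflite, which shows whether the export really produced fully int8 tensors.

#include <cstdio>
#include <memory>

#include "tensorflow/lite/c/common.h"
#include "tensorflow/lite/interpreter.h"
#include "tensorflow/lite/kernels/register.h"
#include "tensorflow/lite/model.h"

int main() {
  // "best_int8.tflite" is a placeholder for the exported model file.
  auto model = tflite::FlatBufferModel::BuildFromFile("best_int8.tflite");
  if (!model) return 1;

  tflite::ops::builtin::BuiltinOpResolver resolver;
  std::unique_ptr<tflite::Interpreter> interpreter;
  tflite::InterpreterBuilder(*model, resolver)(&interpreter);
  if (!interpreter || interpreter->AllocateTensors() != kTfLiteOk) return 1;

  // Print type and quantization parameters of every input and output tensor.
  for (int i : interpreter->inputs()) {
    const TfLiteTensor* t = interpreter->tensor(i);
    std::printf("input  %s: type=%s scale=%f zero_point=%d\n",
                t->name ? t->name : "?", TfLiteTypeGetName(t->type),
                t->params.scale, (int)t->params.zero_point);
  }
  for (int i : interpreter->outputs()) {
    const TfLiteTensor* t = interpreter->tensor(i);
    std::printf("output %s: type=%s scale=%f zero_point=%d\n",
                t->name ? t->name : "?", TfLiteTypeGetName(t->type),
                t->params.scale, (int)t->params.zero_point);
  }
  return 0;
}
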
0 Kudos

950 Views
brian14
NXP TechSupport

Hi @wamiq_raza

Thank you for your reply.

I will check your model, and then give you some recommendations.

Have a great day!

0 Kudos