We are trying to run a MobileNet_v2 model trained on a custom dataset for object detection. The 'saved_model.pb' was generated using export_tflite_graph_tf2.py from the TF2 Object Detection API.
Quantization was done with the TFLite converter (script attached) using the int8 data type. When the quantized .tflite model is inferenced on the i.MX 8M Plus using libvx_delegate, it falls back to the CPU.
The details of env/frameworks used:
1. TensorFlow: 2.10.0
2. TF Lite: 2.10.0
3. Python: 3.8.10
The following combinations of input, weight, and output data types were tested:
1. uint8, int8, uint8
2. uint8, int8, float32
3. float32, int8, float32
We were not able to convert the weights to the uint8 data type, as in the stock model (which runs at high FPS).
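For reference, the conversion roughly follows the standard TFLite full-integer post-training quantization flow. This is only a sketch: the attached script uses tf.lite.TFLiteConverter.from_saved_model() on the exported SavedModel directory, while the tiny stand-in Keras model and random representative data below exist only so the snippet is self-contained and runnable. The input/output types shown correspond to combination 1 (uint8 in, int8 weights, uint8 out).

```python
import numpy as np
import tensorflow as tf

# Stand-in model so the sketch runs standalone; the real script instead does:
#   converter = tf.lite.TFLiteConverter.from_saved_model("<exported_model_dir>")
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(32, 32, 3)),
    tf.keras.layers.Conv2D(8, 3, activation="relu"),
])

def representative_dataset():
    # Assumption: in the real script this yields preprocessed samples
    # from the custom training set, not random data.
    for _ in range(10):
        yield [np.random.rand(1, 32, 32, 3).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_dataset
# Force full-integer quantization: int8 weights/activations internally.
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.uint8   # combination 1: uint8 input
converter.inference_output_type = tf.uint8  # combination 1: uint8 output
tflite_model = converter.convert()
```

Note that TFLITE_BUILTINS_INT8 keeps the weights int8; uint8 is only available at the model boundary via inference_input_type/inference_output_type, which matches what we observed when trying to get uint8 weights.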
The TF Lite runtime on the i.MX board is 2.8.0.
For test inference, the GStreamer pipeline from the i.MX Machine Learning User's Guide is used:
gst-launch-1.0 v4l2src device=/dev/video3 ! \
  video/x-raw,width=640,height=480,framerate=30/1 ! tee name=t \
  t. ! queue max-size-buffers=2 leaky=2 ! imxvideoconvert_g2d ! \
  video/x-raw,width=300,height=300,format=RGBA ! videoconvert ! video/x-raw,format=RGB ! \
  tensor_converter ! \
  tensor_filter silent=false framework=tensorflow2-lite model=./custom_mobilenet_v2.tflite \
    accelerator=true:npu custom=Delegate:External,ExtDelegateLib:libvx_delegate.so ! \
  tensor_decoder silent=false mode=bounding_boxes option1=tf-ssd option2=./custom_labels.txt \
    option3=0:1:2:3,50 option4=640:480 option5=300:300 ! mix. \
  t. ! queue max-size-buffers=2 ! \
  imxcompositor_g2d name=mix sink_0::zorder=2 sink_1::zorder=1 ! autovideosink
How can we dump the GStreamer debug log?
Hi @abhishek_ml
Refer to Section 10.1.2, "Profiling on hardware accelerators", in the ML guide: https://www.nxp.com/docs/en/user-guide/IMX-MACHINE-LEARNING-UG.pdf
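In addition to the profiling section, GStreamer's own debug logging can be enabled via environment variables before launching the pipeline. A minimal sketch, assuming the NNStreamer elements register debug categories named after themselves (tensor_filter, tensor_converter); the VIV_VX_DEBUG_LEVEL variable is an assumption based on the NXP ML guide for extra VX delegate/OpenVX driver logging, so check the guide for the exact variable on your BSP:

```shell
# Global level 2 (WARNING), verbose level 6 (LOG) for the NNStreamer elements.
# Levels range from 0 (none) to 9 (memdump).
export GST_DEBUG="2,tensor_filter:6,tensor_converter:6"
export GST_DEBUG_FILE=/tmp/gst_debug.log   # write the log to a file instead of stderr
export GST_DEBUG_NO_COLOR=1                # plain text, easier to grep

# Assumption: enables extra logging from the VX delegate / OVX driver on i.MX.
export VIV_VX_DEBUG_LEVEL=1

# Then run the same gst-launch-1.0 pipeline as before and inspect
# /tmp/gst_debug.log for which ops the delegate accepted or rejected.
```

Grepping the resulting log for the delegate name should show which operators were mapped to the NPU and which caused the CPU fallback.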
Hi @abhishek_ml
Please attach the debug log.