Hello NXP Community,
I'm working on an object detection pipeline on the i.MX8MP platform (NXP Linux BSP 5.15.52), using a quantized SSD-MobileNetV2 model with 480p video input. I've integrated the VX Delegate (libvx_delegate.so) into the TFLite runtime.
1. Current Setup:
Using OpenCV + TFLite with VX Delegate:
```python
import sys
import cv2
from tflite_runtime.interpreter import Interpreter, load_delegate

# Standard location of the VX Delegate on the i.MX Yocto image
DELEGATE_PATH = "/usr/lib/libvx_delegate.so"

cap = cv2.VideoCapture(args.video)
if not cap.isOpened():
    print(f"Failed to open video: {args.video}")
    sys.exit(1)

# Offload supported ops to the NPU via the VX Delegate
interpreter = Interpreter(
    model_path=args.model,
    experimental_delegates=[load_delegate(DELEGATE_PATH)]
)
interpreter.allocate_tensors()
```
This works, but I observe lower FPS than expected.
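To show what I mean, here is roughly how I separate capture/decode cost from pure inference time (a minimal sketch continuing the code above; preprocessing is reduced to a resize, BGR→RGB conversion and postprocessing are omitted, and the first invoke() is excluded because the VX Delegate compiles the graph for the NPU on first use, which is much slower):

```python
import time
import numpy as np

in_d = interpreter.get_input_details()[0]
h, w = in_d["shape"][1], in_d["shape"][2]

# Warm-up: the first invoke() triggers NPU graph compilation and is much slower
interpreter.set_tensor(in_d["index"], np.zeros(in_d["shape"], dtype=in_d["dtype"]))
interpreter.invoke()

t_infer, frames = 0.0, 0
while True:
    ret, frame = cap.read()
    if not ret:
        break
    # Resize only; BGR->RGB conversion omitted for brevity
    blob = cv2.resize(frame, (w, h))[np.newaxis].astype(in_d["dtype"])
    interpreter.set_tensor(in_d["index"], blob)
    t0 = time.monotonic()
    interpreter.invoke()
    t_infer += time.monotonic() - t0
    frames += 1

print(f"Pure inference: {frames / t_infer:.1f} FPS over {frames} frames")
```

This tells me whether the bottleneck is CPU-side decode/resize or the inference itself, which leads to my first question below.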
2. Questions:
Performance:
Would building a full GStreamer-based pipeline (instead of OpenCV VideoCapture) improve real-time inference FPS when using TFLite + VX Delegate on i.MX8MP?
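For context, this is the kind of pipeline I have in mind: handing cv2.VideoCapture a GStreamer string so decoding runs on the VPU and color conversion on the 2D GPU instead of in software (a sketch under assumptions: OpenCV built with GStreamer support, an H.264 input file, and the vpudec / imxvideoconvert_g2d elements from the BSP plugins):

```python
import cv2

# Hardware path: VPU decode + 2D-GPU color convert; appsink with drop=true and
# max-buffers=1 keeps the pipeline from queueing stale frames behind slow inference.
gst_pipeline = (
    "filesrc location=input.mp4 ! qtdemux ! h264parse ! vpudec ! "
    "imxvideoconvert_g2d ! video/x-raw,format=BGRA ! "
    "videoconvert ! video/x-raw,format=BGR ! "
    "appsink drop=true max-buffers=1"
)
cap = cv2.VideoCapture(gst_pipeline, cv2.CAP_GSTREAMER)
```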
Model Quality (eIQ Toolkit vs TensorFlow CLI):
I noticed that models created with the eIQ Toolkit perform worse (lower accuracy, higher latency) than models converted via the TensorFlow command-line tools.
Is this expected? Are there differences in quantization strategy, postprocessing, or layer support between the two?
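To make the comparison concrete: on the TensorFlow side I mean the standard full-integer post-training quantization flow, roughly like the sketch below (calibration_images is a hypothetical placeholder for my calibration data). Does the eIQ Toolkit apply the same scheme?

```python
import numpy as np
import tensorflow as tf

def representative_dataset_gen():
    # Placeholder: yield a few hundred preprocessed calibration images
    for image in calibration_images:  # hypothetical iterable of HxWx3 float32 arrays
        yield [image[np.newaxis, ...]]

converter = tf.lite.TFLiteConverter.from_saved_model("ssd_mobilenet_v2_saved_model")
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_dataset_gen
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.uint8  # keep uint8 input for the NPU path
# converter.allow_custom_ops = True  # needed if the graph ends in TFLite_Detection_PostProcess
tflite_model = converter.convert()

with open("ssd_mobilenet_v2_int8.tflite", "wb") as f:
    f.write(tflite_model)
```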
NXP Model Zoo:
Are there recommended pipelines or examples from the NXP Model Zoo for mobilenet_ssd_v2 that are optimized for VX Delegate on i.MX8MP BSP 5.15.52?
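For example, is something along these lines the intended approach? (a rough sketch using NNStreamer's tensor_filter; the Delegate:External / ExtDelegateLib custom options are my reading of the NNStreamer docs, and the element names and caps may need adjusting for this BSP):

```
gst-launch-1.0 filesrc location=input.mp4 ! qtdemux ! h264parse ! vpudec ! \
  imxvideoconvert_g2d ! video/x-raw,width=300,height=300,format=RGBA ! \
  videoconvert ! video/x-raw,format=RGB ! tensor_converter ! \
  tensor_filter framework=tensorflow-lite model=ssd_mobilenet_v2_int8.tflite \
    custom=Delegate:External,ExtDelegateLib:/usr/lib/libvx_delegate.so ! \
  fakesink
```

(fakesink here just measures throughput; a real pipeline would attach a tensor_decoder or appsink to consume the detections.)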
Any guidance, benchmarks, or sample pipelines (especially using GStreamer + TFLite + VX Delegate on i.MX8MP) would be highly appreciated!
Thanks in advance!