I'm running into problems with TensorFlow Lite for Microcontrollers (TFLM) while implementing object detection on the i.MX RT1060 platform. I'm using the TensorFlow Object Detection API with the SSD MobileNet V2 FPNLite 320x320 model.
The model has an input tensor of shape (1, 320, 320, 3). After converting it to TFLite format (input uint8, output float32) and generating a header file using xxd, I flashed it onto the i.MX RT1060 board. While the input dimensions appear to be correct, I'm encountering issues with reading the dimensions and data from the output tensor.
The model is expected to produce four output tensors: detection boxes, detection classes, detection scores, and the number of detections (the standard TFLite_Detection_PostProcess outputs).
However, the dimensions and data I read back from the output tensors don't match these expectations, leading to unexpected behavior.
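For reference, this is roughly how I expected to read the four outputs on the TFLM side. This is only a minimal sketch, not my exact code: it assumes the canonical boxes/classes/scores/count ordering and an already-allocated tflite::MicroInterpreter named interpreter.

```cpp
// Sketch only: assumes the canonical TFLite_Detection_PostProcess output
// order (boxes, classes, scores, count) and an already-allocated
// tflite::MicroInterpreter named "interpreter".
#include "tensorflow/lite/micro/micro_interpreter.h"

void ReadDetections(tflite::MicroInterpreter& interpreter) {
  TfLiteTensor* boxes   = interpreter.output(0);  // expected [1, N, 4], float32
  TfLiteTensor* classes = interpreter.output(1);  // expected [1, N], float32
  TfLiteTensor* scores  = interpreter.output(2);  // expected [1, N], float32
  TfLiteTensor* count   = interpreter.output(3);  // expected [1], float32

  // If the postprocess op never ran, these dimension checks fail.
  if (boxes == nullptr || boxes->dims->size != 3 || boxes->dims->data[2] != 4) {
    return;  // output tensor shape is not what the model promises
  }

  const int num = static_cast<int>(count->data.f[0]);
  for (int i = 0; i < num; ++i) {
    const float score = scores->data.f[i];
    const int   cls   = static_cast<int>(classes->data.f[i]);
    const float ymin  = boxes->data.f[4 * i + 0];
    const float xmin  = boxes->data.f[4 * i + 1];
    const float ymax  = boxes->data.f[4 * i + 2];
    const float xmax  = boxes->data.f[4 * i + 3];
    // ... hand the detection (cls, score, box) to the application ...
    (void)score; (void)cls; (void)ymin; (void)xmin; (void)ymax; (void)xmax;
  }
}
```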
I'd appreciate any insights or suggestions on how to address this discrepancy between the expected and observed output tensor behavior. Thank you for your assistance!
Perhaps it's because the TFLM all_ops_resolver does not support TFLite_Detection_PostProcess, which is a custom op rather than a built-in.
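If that is the cause, one way to check is to drop AllOpsResolver and register the ops explicitly with a MicroMutableOpResolver; recent TFLM versions expose AddDetectionPostprocess() for this custom op. A rough sketch is below (the built-in op list is only illustrative, the real op set depends on the converted model; the arena size is a placeholder):

```cpp
// Sketch only: the exact list of built-in ops depends on the converted model;
// the key line is AddDetectionPostprocess(), which registers the
// TFLite_Detection_PostProcess custom op with TFLM.
#include <cstdint>
#include "tensorflow/lite/micro/micro_interpreter.h"
#include "tensorflow/lite/micro/micro_mutable_op_resolver.h"

constexpr int kTensorArenaSize = 800 * 1024;  // placeholder size for RT1060 SDRAM
static uint8_t tensor_arena[kTensorArenaSize];

tflite::MicroInterpreter* BuildInterpreter(const tflite::Model* model) {
  // The template argument must be >= the number of Add...() calls below.
  static tflite::MicroMutableOpResolver<8> resolver;
  resolver.AddConv2D();
  resolver.AddDepthwiseConv2D();
  resolver.AddAdd();
  resolver.AddRelu6();
  resolver.AddReshape();
  resolver.AddConcatenation();
  resolver.AddLogistic();
  resolver.AddDetectionPostprocess();  // custom op, not a built-in

  static tflite::MicroInterpreter interpreter(model, resolver, tensor_arena,
                                              kTensorArenaSize);
  if (interpreter.AllocateTensors() != kTfLiteOk) {
    return nullptr;  // arena too small or an op is still missing
  }
  return &interpreter;
}
```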
Hi @COCOKAPPA ,
We provide an application note on this topic; please refer to https://www.nxp.com.cn/docs/en/application-note/AN12766.pdf for details.
Hope that helps,
Have a great day,
Kan