I have a YOLOv11 object detection model (yolo11n.pt). I converted it to ONNX, simplified the ONNX graph with onnxsim, and then converted it to a TensorFlow SavedModel (.pb) using onnx2tf. Finally, I used NXP's eIQ Toolkit to quantize the SavedModel to INT8, with the COCO128 dataset (100+ images) as the calibration set.
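For reference, the pipeline I followed looks roughly like this (a sketch of the tools' standard CLIs; exact flag names may differ by version):

```shell
# 1. Export yolo11n.pt to ONNX (Ultralytics CLI)
yolo export model=yolo11n.pt format=onnx opset=13

# 2. Simplify the ONNX graph
onnxsim yolo11n.onnx yolo11n_sim.onnx

# 3. Convert the simplified ONNX model to a TensorFlow SavedModel
onnx2tf -i yolo11n_sim.onnx -o saved_model

# 4. Quantize saved_model/ to INT8 in NXP's eIQ Toolkit,
#    using ~100 COCO128 images as the calibration dataset (done in the GUI).
```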
The conversion completes without errors, but the resulting model detects no objects, whereas the INT8 model produced by the Ultralytics export API detects correctly. Comparing the two models, their internal weights differ, and their outputs for the same input image also differ significantly.
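One likely reason the stored weights and outputs differ is that the two toolchains derived different calibration ranges, which yield different scales and zero-points and therefore different INT8 values. A minimal NumPy sketch of affine INT8 quantization (illustrative only, not either toolchain's actual implementation) shows how the same float tensor maps to different int8 codes under two calibration ranges:

```python
import numpy as np

def quantize(x, rmin, rmax):
    """Affine-quantize float values to int8 given a calibration range."""
    rmin, rmax = min(rmin, 0.0), max(rmax, 0.0)  # range must include zero
    scale = (rmax - rmin) / 255.0
    zero_point = np.clip(round(-128 - rmin / scale), -128, 127)
    q = np.clip(np.round(x / scale) + zero_point, -128, 127).astype(np.int8)
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    return (q.astype(np.float32) - zero_point) * scale

x = np.array([-0.8, 0.1, 0.5, 2.0], dtype=np.float32)

# Same tensor, two different calibration ranges -> different int8 codes
q_a, s_a, z_a = quantize(x, -1.0, 2.0)  # tight range seen by calibration A
q_b, s_b, z_b = quantize(x, -4.0, 4.0)  # wider range seen by calibration B

print(q_a, dequantize(q_a, s_a, z_a))   # fine steps, small error
print(q_b, dequantize(q_b, s_b, z_b))   # coarser steps, larger error
```

So differing int8 weight values alone are expected between the two exports; the real symptom to debug is the missing detections (e.g. wrong input normalization or output dequantization around the quantized graph).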
How can I convert the .pb model correctly so that the INT8 model detects objects as expected?
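For comparison, the path that does give correct detections is the one-step Ultralytics export (which, as I understand it, drives onnx2tf internally and handles INT8 calibration itself):

```shell
# One-step INT8 TFLite export; Ultralytics runs the calibration internally
yolo export model=yolo11n.pt format=tflite int8=True data=coco128.yaml
```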