<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic eiq tflite model for imx8mplus in Other NXP Products</title>
    <link>https://community.nxp.com/t5/Other-NXP-Products/eiq-tflite-model-for-imx8mplus/m-p/2172595#M30108</link>
<description>&lt;P&gt;I'm currently working on converting a TensorFlow object detection model to TensorFlow Lite format for inference on the i.MX 8M Plus. I used the eIQ Toolkit for the conversion, installed on Ubuntu 20.04. The base model used is ssd_mobilenet_v2_fpnlite_640x640_coco17_tpu-8.&lt;/P&gt;&lt;P&gt;Here are the steps I followed using the eIQ Model Tool:&lt;/P&gt;&lt;P&gt;Selected Model Tool in the eIQ GUI.&lt;/P&gt;&lt;P&gt;Loaded the TensorFlow saved_model.pb file of the model.&lt;/P&gt;&lt;P&gt;Converted the model to TensorFlow Lite (.tflite) using the eiq-converter-tflite.&lt;/P&gt;&lt;P&gt;Enabled quantization with the following settings:&lt;/P&gt;&lt;P&gt;Quantization type: Per Tensor&lt;/P&gt;&lt;P&gt;Input type: uint8&lt;/P&gt;&lt;P&gt;Output type: uint8&lt;/P&gt;&lt;P&gt;Quantization normalization: Unsigned&lt;/P&gt;&lt;P&gt;Calibration dataset: 10 samples from the COCO dataset&lt;/P&gt;&lt;P&gt;Provided a compatible labels.txt file.&lt;/P&gt;&lt;P&gt;After the conversion, when running inference with the TFLite model, I encountered the following error:&lt;BR /&gt;&lt;STRONG&gt;IndexError: list index out of range&lt;/STRONG&gt;&lt;/P&gt;&lt;P&gt;Post-processing inference code:&lt;/P&gt;&lt;P&gt;interpreter.invoke()&lt;BR /&gt;labels = load_labels(args.label_file)&lt;/P&gt;&lt;P&gt;scores = np.squeeze(interpreter.get_tensor(output_details[0]['index']))&lt;BR /&gt;boxes = np.squeeze(interpreter.get_tensor(output_details[1]['index'])[0])&lt;BR /&gt;num_detections = np.squeeze(interpreter.get_tensor(output_details[2]['index'])[0])&lt;BR /&gt;classes = np.squeeze(interpreter.get_tensor(output_details[3]['index'])[0])&lt;/P&gt;&lt;P&gt;for i in range(10):&lt;BR /&gt;if scores[i] &amp;gt; 0.7:&lt;BR /&gt;ymin, xmin, ymax, xmax = boxes[i]&lt;BR /&gt;xmin = int(xmin * w0)&lt;BR /&gt;ymin = int(ymin * h0)&lt;BR /&gt;xmax = int(xmax * w0)&lt;BR /&gt;ymax = int(ymax * h0)&lt;/P&gt;&lt;P&gt;class_id = classes[i]&lt;BR /&gt;cv2.rectangle(frame, (xmin, ymin), (xmax, ymax), (0, 255, 0), 2)&lt;BR /&gt;label = f"{labels[class_id]} {scores[i]:.2f}"&lt;BR /&gt;cv2.putText(frame, label, (xmin, max(10, ymin - 5)), cv2.FONT_HERSHEY_SIMPLEX, 0.5, (255, 0, 0), 2)&lt;/P&gt;&lt;P&gt;Is there anything incorrect or missing in the conversion process or the inference/post-processing code that could be causing this IndexError?&lt;/P&gt;&lt;P&gt;I would appreciate any insights or suggestions on how to debug or fix this issue.&lt;/P&gt;&lt;P&gt;Thank you!&lt;/P&gt;</description>
    <pubDate>Fri, 19 Sep 2025 07:32:25 GMT</pubDate>
    <dc:creator>kamalesh</dc:creator>
    <dc:date>2025-09-19T07:32:25Z</dc:date>
    <item>
      <title>eiq tflite model for imx8mplus</title>
      <link>https://community.nxp.com/t5/Other-NXP-Products/eiq-tflite-model-for-imx8mplus/m-p/2172595#M30108</link>
<description>&lt;P&gt;I'm currently working on converting a TensorFlow object detection model to TensorFlow Lite format for inference on the i.MX 8M Plus. I used the eIQ Toolkit for the conversion, installed on Ubuntu 20.04. The base model used is ssd_mobilenet_v2_fpnlite_640x640_coco17_tpu-8.&lt;/P&gt;&lt;P&gt;Here are the steps I followed using the eIQ Model Tool:&lt;/P&gt;&lt;P&gt;Selected Model Tool in the eIQ GUI.&lt;/P&gt;&lt;P&gt;Loaded the TensorFlow saved_model.pb file of the model.&lt;/P&gt;&lt;P&gt;Converted the model to TensorFlow Lite (.tflite) using the eiq-converter-tflite.&lt;/P&gt;&lt;P&gt;Enabled quantization with the following settings:&lt;/P&gt;&lt;P&gt;Quantization type: Per Tensor&lt;/P&gt;&lt;P&gt;Input type: uint8&lt;/P&gt;&lt;P&gt;Output type: uint8&lt;/P&gt;&lt;P&gt;Quantization normalization: Unsigned&lt;/P&gt;&lt;P&gt;Calibration dataset: 10 samples from the COCO dataset&lt;/P&gt;&lt;P&gt;Provided a compatible labels.txt file.&lt;/P&gt;&lt;P&gt;After the conversion, when running inference with the TFLite model, I encountered the following error:&lt;BR /&gt;&lt;STRONG&gt;IndexError: list index out of range&lt;/STRONG&gt;&lt;/P&gt;&lt;P&gt;Post-processing inference code:&lt;/P&gt;&lt;P&gt;interpreter.invoke()&lt;BR /&gt;labels = load_labels(args.label_file)&lt;/P&gt;&lt;P&gt;scores = np.squeeze(interpreter.get_tensor(output_details[0]['index']))&lt;BR /&gt;boxes = np.squeeze(interpreter.get_tensor(output_details[1]['index'])[0])&lt;BR /&gt;num_detections = np.squeeze(interpreter.get_tensor(output_details[2]['index'])[0])&lt;BR /&gt;classes = np.squeeze(interpreter.get_tensor(output_details[3]['index'])[0])&lt;/P&gt;&lt;P&gt;for i in range(10):&lt;BR /&gt;if scores[i] &amp;gt; 0.7:&lt;BR /&gt;ymin, xmin, ymax, xmax = boxes[i]&lt;BR /&gt;xmin = int(xmin * w0)&lt;BR /&gt;ymin = int(ymin * h0)&lt;BR /&gt;xmax = int(xmax * w0)&lt;BR /&gt;ymax = int(ymax * h0)&lt;/P&gt;&lt;P&gt;class_id = classes[i]&lt;BR /&gt;cv2.rectangle(frame, (xmin, ymin), (xmax, ymax), (0, 255, 0), 2)&lt;BR /&gt;label = f"{labels[class_id]} {scores[i]:.2f}"&lt;BR /&gt;cv2.putText(frame, label, (xmin, max(10, ymin - 5)), cv2.FONT_HERSHEY_SIMPLEX, 0.5, (255, 0, 0), 2)&lt;/P&gt;&lt;P&gt;Is there anything incorrect or missing in the conversion process or the inference/post-processing code that could be causing this IndexError?&lt;/P&gt;&lt;P&gt;I would appreciate any insights or suggestions on how to debug or fix this issue.&lt;/P&gt;&lt;P&gt;Thank you!&lt;/P&gt;</description>
      <pubDate>Fri, 19 Sep 2025 07:32:25 GMT</pubDate>
      <guid>https://community.nxp.com/t5/Other-NXP-Products/eiq-tflite-model-for-imx8mplus/m-p/2172595#M30108</guid>
      <dc:creator>kamalesh</dc:creator>
      <dc:date>2025-09-19T07:32:25Z</dc:date>
    </item>
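<!--
The message "list index out of range" in the post above is raised by a Python list, which points at labels[class_id] rather than the model: classes[i] is a float and must be cast to int before indexing, and the labels.txt file must cover every class id the model can emit. In addition, the code hard-codes the output order (scores, boxes, num_detections, classes), which TensorFlow 2 exports do not guarantee. A minimal sketch (helper name hypothetical) of mapping detection outputs by shape instead of by position, assuming the usual top-10 detection outputs as in the shapes reported later in this thread:

```python
import numpy as np

def map_detection_outputs(output_details, get_tensor):
    """Map TFLite detection outputs to (boxes, classes, scores, count)
    by shape, since the export does not guarantee a fixed order."""
    boxes = classes = scores = count = None
    for d in output_details:
        t = np.squeeze(get_tensor(d['index']))
        if t.ndim == 2 and t.shape[-1] == 4:
            boxes = t                      # [N, 4] bounding boxes
        elif t.ndim == 0:
            count = int(t)                 # scalar num_detections
        elif t.ndim == 1:
            # Heuristic for the two remaining [N] tensors: class ids are
            # (near-)integers, scores are fractional values in [0, 1].
            if np.allclose(t, np.round(t)):
                classes = t.astype(int)
            else:
                scores = t
    return boxes, classes, scores, count
```

In real use, get_tensor would be interpreter.get_tensor, and labels should then be indexed as labels[int(class_id)] after checking int(class_id) is less than len(labels).
-->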
    <item>
      <title>Re: eiq tflite model for imx8mplus</title>
      <link>https://community.nxp.com/t5/Other-NXP-Products/eiq-tflite-model-for-imx8mplus/m-p/2173218#M30133</link>
      <description>&lt;P&gt;Hi,&lt;BR /&gt;&lt;BR /&gt;This error is not about the model; your code is trying to access an index that is out of range. Please check the code, or see another reference such as this one:&amp;nbsp;&lt;A href="https://github.com/zafarRehan/object_detection_COCO" target="_blank"&gt;https://github.com/zafarRehan/object_detection_COCO&lt;/A&gt;&lt;BR /&gt;&lt;BR /&gt;Best Regards,&lt;BR /&gt;Zhiming&lt;/P&gt;</description>
      <pubDate>Mon, 22 Sep 2025 03:18:48 GMT</pubDate>
      <guid>https://community.nxp.com/t5/Other-NXP-Products/eiq-tflite-model-for-imx8mplus/m-p/2173218#M30133</guid>
      <dc:creator>Zhiming_Liu</dc:creator>
      <dc:date>2025-09-22T03:18:48Z</dc:date>
    </item>
    <item>
      <title>Re: eiq tflite model for imx8mplus</title>
      <link>https://community.nxp.com/t5/Other-NXP-Products/eiq-tflite-model-for-imx8mplus/m-p/2173255#M30136</link>
      <description>&lt;P&gt;I understand that the IndexError is related to accessing indices that are not available. What I am trying to confirm is the structure of the outputs produced by the model after conversion with the EIQ Toolkit.&lt;/P&gt;&lt;P&gt;Could you please clarify:&lt;/P&gt;&lt;P&gt;What is the expected order and shape of the output tensors for SSD models after EIQ conversion?&lt;/P&gt;&lt;P&gt;Is there a reference example of the correct post-processing steps to use with EIQ-converted TFLite models?&lt;/P&gt;&lt;P&gt;Could the small calibration dataset used during quantization have any impact on the outputs?&lt;/P&gt;&lt;P&gt;This information would help me adjust my inference code accordingly. I would really appreciate your guidance on the correct way to handle the outputs from EIQ-converted models.&lt;/P&gt;</description>
      <pubDate>Mon, 22 Sep 2025 05:48:31 GMT</pubDate>
      <guid>https://community.nxp.com/t5/Other-NXP-Products/eiq-tflite-model-for-imx8mplus/m-p/2173255#M30136</guid>
      <dc:creator>kamalesh</dc:creator>
      <dc:date>2025-09-22T05:48:31Z</dc:date>
    </item>
    <item>
      <title>Re: eiq tflite model for imx8mplus</title>
      <link>https://community.nxp.com/t5/Other-NXP-Products/eiq-tflite-model-for-imx8mplus/m-p/2173855#M30147</link>
      <description>&lt;P&gt;Hi,&lt;BR /&gt;&lt;BR /&gt;&lt;SPAN&gt;Quantization&amp;nbsp;will not affect the output structure. You can find an eIQ model example here:&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;A href="https://github.com/NXP/eiq-model-zoo/blob/main/tasks/vision/object-detection/ssdlite-mobilenetv2/example.py" target="_blank"&gt;https://github.com/NXP/eiq-model-zoo/blob/main/tasks/vision/object-detection/ssdlite-mobilenetv2/example.py&lt;/A&gt;&lt;BR /&gt;&lt;BR /&gt;Best Regards,&lt;BR /&gt;Zhiming&lt;/P&gt;</description>
      <pubDate>Tue, 23 Sep 2025 01:26:45 GMT</pubDate>
      <guid>https://community.nxp.com/t5/Other-NXP-Products/eiq-tflite-model-for-imx8mplus/m-p/2173855#M30147</guid>
      <dc:creator>Zhiming_Liu</dc:creator>
      <dc:date>2025-09-23T01:26:45Z</dc:date>
    </item>
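<!--
One point worth noting about the conversion described earlier in the thread: since the output type was set to uint8, the raw output tensors are quantized, and a threshold such as scores[i] > 0.7 only makes sense after dequantizing with the scale and zero point that the TFLite interpreter reports in each entry's 'quantization' tuple from get_output_details(). A minimal sketch, using NumPy only and example values that are assumptions, not taken from the actual model:

```python
import numpy as np

def dequantize(raw, quantization):
    """Convert a quantized TFLite output tensor back to float.
    `quantization` is the (scale, zero_point) tuple found in
    interpreter.get_output_details()[i]['quantization']."""
    scale, zero_point = quantization
    if scale == 0:  # scale 0 means the tensor was not quantized
        return raw.astype(np.float32)
    return scale * (raw.astype(np.float32) - zero_point)

# Example: uint8 scores with a hypothetical scale of 1/255 and zero point 0
raw_scores = np.array([230, 51, 12], dtype=np.uint8)
scores = dequantize(raw_scores, (1.0 / 255.0, 0))
```

With these example parameters, a raw value of 230 dequantizes to roughly 0.9 and passes a 0.7 threshold, while 51 (roughly 0.2) does not.
-->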
    <item>
      <title>Re: eiq tflite model for imx8mplus</title>
      <link>https://community.nxp.com/t5/Other-NXP-Products/eiq-tflite-model-for-imx8mplus/m-p/2177480#M30225</link>
      <description>&lt;P&gt;The reference link provided is based on a TensorFlow 1 object detection model. However, I am specifically working with a TensorFlow 2 object detection model. It would be very helpful if you could provide guidance on converting a TensorFlow 2 object detection model to TensorFlow Lite, ensuring compatibility with the inference code.&lt;/P&gt;</description>
      <pubDate>Mon, 29 Sep 2025 13:15:45 GMT</pubDate>
      <guid>https://community.nxp.com/t5/Other-NXP-Products/eiq-tflite-model-for-imx8mplus/m-p/2177480#M30225</guid>
      <dc:creator>kamalesh</dc:creator>
      <dc:date>2025-09-29T13:15:45Z</dc:date>
    </item>
    <item>
      <title>Re: eiq tflite model for imx8mplus</title>
      <link>https://community.nxp.com/t5/Other-NXP-Products/eiq-tflite-model-for-imx8mplus/m-p/2182544#M30294</link>
      <description>&lt;P&gt;Hi,&lt;BR /&gt;Please refer to these existing tutorials:&lt;/P&gt;
&lt;P&gt;&lt;A href="https://colab.research.google.com/github/marcin-ch/Object_Detection_SSD_MobilenetV3_TFLite/blob/main/Object_Detection_SSD_MobilenetV3_TFLite.ipynb" target="_blank"&gt;https://colab.research.google.com/github/marcin-ch/Object_Detection_SSD_MobilenetV3_TFLite/blob/main/Object_Detection_SSD_MobilenetV3_TFLite.ipynb&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;&lt;A href="https://github.com/google-coral/tflite/blob/master/python/examples/detection/README.md" target="_blank"&gt;https://github.com/google-coral/tflite/blob/master/python/examples/detection/README.md&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;&lt;A href="https://github.com/tensorflow/examples/blob/master/lite/examples/object_detection/raspberry_pi/README.md" target="_blank"&gt;https://github.com/tensorflow/examples/blob/master/lite/examples/object_detection/raspberry_pi/README.md&lt;/A&gt;&lt;BR /&gt;&lt;BR /&gt;Best Regards,&lt;BR /&gt;Zhiming&lt;/P&gt;</description>
      <pubDate>Thu, 09 Oct 2025 02:21:25 GMT</pubDate>
      <guid>https://community.nxp.com/t5/Other-NXP-Products/eiq-tflite-model-for-imx8mplus/m-p/2182544#M30294</guid>
      <dc:creator>Zhiming_Liu</dc:creator>
      <dc:date>2025-10-09T02:21:25Z</dc:date>
    </item>
    <item>
      <title>Re: eiq tflite model for imx8mplus</title>
      <link>https://community.nxp.com/t5/Other-NXP-Products/eiq-tflite-model-for-imx8mplus/m-p/2183664#M30325</link>
      <description>&lt;P&gt;Thank you for sharing the links. I noticed a key difference between the NXP TFLite model and my model, specifically in their output structures. The NXP TFLite model produces only bounding boxes and class outputs, whereas my model provides bounding boxes, scores, classes, and the number of detections.&lt;/P&gt;&lt;P&gt;&lt;BR /&gt;NXP TFLite model output:&lt;BR /&gt;Outputs: [&lt;BR /&gt;{'name': 'concat', 'index': 299, 'shape': array([1, 1917, 1, 4], dtype=int32)},&lt;BR /&gt;{'name': 'concat_1', 'index': 300, 'shape': array([1, 1917, 91], dtype=int32)}&lt;BR /&gt;]&lt;/P&gt;&lt;P&gt;My model output:&lt;BR /&gt;Outputs: [&lt;BR /&gt;{'name': 'StatefulPartitionedCall:1', 'index': 339, 'shape': array([1, 10], dtype=int32)},&lt;BR /&gt;{'name': 'StatefulPartitionedCall:3', 'index': 337, 'shape': array([1, 10, 4], dtype=int32)},&lt;BR /&gt;{'name': 'StatefulPartitionedCall:0', 'index': 340, 'shape': array([1], dtype=int32)},&lt;BR /&gt;{'name': 'StatefulPartitionedCall:2', 'index': 338, 'shape': array([1, 10], dtype=int32)}&lt;BR /&gt;]&lt;BR /&gt;It would be very helpful if you could provide suggestions regarding the conversion, particularly how to align my model's output structure with that of the NXP TFLite model.&lt;/P&gt;</description>
      <pubDate>Fri, 10 Oct 2025 07:45:07 GMT</pubDate>
      <guid>https://community.nxp.com/t5/Other-NXP-Products/eiq-tflite-model-for-imx8mplus/m-p/2183664#M30325</guid>
      <dc:creator>kamalesh</dc:creator>
      <dc:date>2025-10-10T07:45:07Z</dc:date>
    </item>
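<!--
The structural difference described in the post above matters for post-processing: the NXP model-zoo TFLite outputs raw per-anchor boxes and class scores ([1, 1917, 1, 4] and [1, 1917, 91]), so score thresholding and non-maximum suppression must happen in application code, whereas a model exported with the built-in detection post-processing already returns the top 10 decoded detections. A minimal NumPy sketch of greedy NMS (helper names and the 0.5 IoU threshold are illustrative assumptions, not the model zoo's exact parameters):

```python
import numpy as np

def iou(box, boxes):
    """IoU of one box against many; boxes use [ymin, xmin, ymax, xmax]."""
    ymin = np.maximum(box[0], boxes[:, 0])
    xmin = np.maximum(box[1], boxes[:, 1])
    ymax = np.minimum(box[2], boxes[:, 2])
    xmax = np.minimum(box[3], boxes[:, 3])
    inter = np.maximum(0.0, ymax - ymin) * np.maximum(0.0, xmax - xmin)
    area = lambda b: (b[..., 2] - b[..., 0]) * (b[..., 3] - b[..., 1])
    union = area(box) + area(boxes) - inter
    return inter / np.maximum(union, 1e-9)

def nms(boxes, scores, iou_threshold=0.5):
    """Greedy non-maximum suppression; returns kept indices, best first."""
    order = np.argsort(scores)[::-1]
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(int(i))
        rest = order[1:]
        # Drop remaining boxes that overlap the kept box too strongly
        order = rest[iou(boxes[i], boxes[rest]) <= iou_threshold]
    return keep
```

This runs per class on the raw [1917, 4] boxes after decoding them against the anchor grid; the model-zoo example.py linked earlier shows the full pipeline for that specific model.
-->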
    <item>
      <title>Re: eiq tflite model for imx8mplus</title>
      <link>https://community.nxp.com/t5/Other-NXP-Products/eiq-tflite-model-for-imx8mplus/m-p/2184498#M30343</link>
      <description>&lt;P&gt;Hi,&lt;BR /&gt;&lt;BR /&gt;You can use onnx2tf to replace the eIQ conversion and quantization; onnx2tf gives you complete control over the conversion and quantization steps.&lt;BR /&gt;&lt;BR /&gt;Best Regards,&lt;BR /&gt;Zhiming&lt;/P&gt;</description>
      <pubDate>Mon, 13 Oct 2025 05:05:00 GMT</pubDate>
      <guid>https://community.nxp.com/t5/Other-NXP-Products/eiq-tflite-model-for-imx8mplus/m-p/2184498#M30343</guid>
      <dc:creator>Zhiming_Liu</dc:creator>
      <dc:date>2025-10-13T05:05:00Z</dc:date>
    </item>
  </channel>
</rss>

