@Ramson
can you run your model through this tutorial: https://www.tensorflow.org/lite/tutorials/model_maker_object_detection ?
As a first step, before running your own model, I suggest opening the notebook in Google Colab and selecting Runtime -> Run all. This creates a model.tflite file (a model for detecting 5 classes of food). Checking this reference model in eIQ Model Tool gives the following:

So we can see confirmation of the info you attached above: the output tensors will contain locations, classes, scores, and the number of detections https://www.tensorflow.org/lite/examples/object_detection/overview#output_signature
This info is used in notebook code:
# Get all outputs from the model
boxes = get_output_tensor(interpreter, 0)
classes = get_output_tensor(interpreter, 1)
scores = get_output_tensor(interpreter, 2)
count = int(get_output_tensor(interpreter, 3))
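The four calls above assume the standard four-output detection signature. As a quick sanity check, a small helper can list what a given .tflite model actually exposes (summarize_outputs is a name I made up for illustration; the dict keys come from the standard tf.lite.Interpreter.get_output_details() API):

```python
def summarize_outputs(output_details):
    """Given the list returned by interpreter.get_output_details(),
    return (name, shape) pairs for each output tensor."""
    return [(d['name'], list(d['shape'])) for d in output_details]

# Usage sketch, assuming a loaded TFLite model:
#   interpreter = tf.lite.Interpreter(model_path='model.tflite')
#   interpreter.allocate_tensors()
#   print(summarize_outputs(interpreter.get_output_details()))
```

For the reference Model Maker model this should list four output tensors, matching the boxes/classes/scores/count reads above.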
Now, import your model into Google Colab by adding this piece of code (by default it should go to the /content directory):
from google.colab import files
uploaded = files.upload()
for fn in uploaded.keys():
    print('User uploaded file "{name}" with length {length} bytes'.format(
        name=fn, length=len(uploaded[fn])))
Afterwards, look at the section (Optional) Test the TFLite model on your image and change a few lines of code:
- change the model path to your model path
- change the classes to your classes
model_path = '/content/detection-performance-mcu-2021-09-01T13-55-40.335Z_mlir.tflite'
# model_path = 'model.tflite'
# Load the labels into a list
classes = ['Apple', 'Banana', 'Orange']
# classes = ['???'] * model.model_spec.config.num_classes
# label_map = model.model_spec.config.label_map
# for label_id, label_name in label_map.as_dict().items():
# classes[label_id-1] = label_name
And the final changes here:
- search the Internet for an example image you want to detect objects in (in my case some fruits) and update INPUT_IMAGE_URL
- DETECTION_THRESHOLD is up to you
#@title Run object detection and show the detection results
INPUT_IMAGE_URL = "https://messalonskeehs.files.wordpress.com/2013/02/screen-shot-2013-02-06-at-10-50-37-pm.png" #@param {type:"string"}
# INPUT_IMAGE_URL = "https://storage.googleapis.com/cloud-ml-data/img/openimage/3/2520/3916261642_0a504acd60_o.jpg" #@param {type:"string"}
DETECTION_THRESHOLD = 0.6 #@param {type:"number"}
After these changes, in the case of my model received after training and exporting from eIQ Portal (the model detects 3 categories of fruits; its output signature is [1,1,8], and I do not know why it has these values), I am getting an error:
---------------------------------------------------------------------------
IndexError Traceback (most recent call last)
<ipython-input-48-497a12c43559> in <module>()
20 TEMP_FILE,
21 interpreter,
---> 22 threshold=DETECTION_THRESHOLD
23 )
24
2 frames
<ipython-input-47-ce04f95b8354> in run_odt_and_draw_results(image_path, interpreter, threshold)
79
80 # Run object detection on the input image
---> 81 results = detect_objects(interpreter, preprocessed_image, threshold=threshold)
82
83 # Plot the detection results on the input image
<ipython-input-47-ce04f95b8354> in detect_objects(interpreter, image, threshold)
51 # Get all outputs from the model
52 boxes = get_output_tensor(interpreter, 0)
---> 53 classes = get_output_tensor(interpreter, 1)
54 scores = get_output_tensor(interpreter, 2)
55 count = int(get_output_tensor(interpreter, 3))
<ipython-input-47-ce04f95b8354> in get_output_tensor(interpreter, index)
38 def get_output_tensor(interpreter, index):
39 """Retur the output tensor at the given index."""
---> 40 output_details = interpreter.get_output_details()[index]
41 tensor = np.squeeze(interpreter.get_tensor(output_details['index']))
42 return tensor
IndexError: list index out of range
This comes after calling this:
classes = get_output_tensor(interpreter, 1)
We try to read an output tensor at an index which does not exist; as far as I can understand it, there is only one output tensor, called Identity.
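For reference, here is a hedged sketch of how the outputs could be read without hard-coding four indices (get_outputs_safely is a name I made up; it only relies on the standard tf.lite.Interpreter API, and would return just the single Identity tensor for my eIQ model instead of raising the IndexError):

```python
import numpy as np

def get_outputs_safely(interpreter):
    """Return a dict mapping output tensor name -> squeezed tensor value.
    Reads however many outputs the model exposes, whether that is the
    standard 4-tensor detection signature or a single 'Identity' tensor."""
    return {
        d['name']: np.squeeze(interpreter.get_tensor(d['index']))
        for d in interpreter.get_output_details()
    }
```

Of course, this only avoids the crash; it does not tell me how to decode the [1,1,8] Identity tensor into boxes, classes, and scores, which is the real question.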
So I am wondering now: how should an object detection model exported from eIQ Portal be handled?