I am training a custom dataset in eIQ, but I have not been able to interpret the model's output signature. I started with the object detection (balanced) setting and have approximately 100 labeled images in my dataset.
After training and eval, I generate the output model and view it. It appears to have a single output tensor ("Identity") of shape 1x1x65. The model was not quantized. Running the default 'label_image.py' fails with a "list index out of range" error.
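I can't see inside the stock script, but my guess is the error comes from it indexing more output tensors than this export provides; for example, SSD-style detection scripts expect four outputs (boxes, classes, scores, count), while a YOLO-style export has just one. A hypothetical sketch of that failure mode (not the actual label_image.py code):

```python
# A model exposing a single output tensor, like my export:
outputs = ["single_tensor_1x1x65"]

try:
    scores = outputs[2]  # an SSD-style script would ask for a third output
except IndexError as err:
    print("IndexError:", err)  # list index out of range
```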
So I modified the script with this snippet:
import numpy as np

output_data = get_output_tensor(interpreter, 0)
xywh = output_data[..., :4]       # box coordinates
conf = output_data[..., 4:5]      # objectness/confidence
cls_data = output_data[..., 5:]   # per-class scores
cls = cls_data.reshape(1, 122040)
print("Classes: {}".format(cls))
print("Scores: {}".format(conf))
print("Boxes: {}".format(xywh))
# concatenate along the last (channel) axis; axis=1 fails because the
# slices differ in their final dimension
output = np.squeeze(np.concatenate((xywh, conf, cls_data), axis=-1))
However, the outputs don't really make sense: I am getting large negative scores (-20, -30, -1.4), which look more like raw logits than probabilities.
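If those values are indeed logits, YOLO-style heads typically need a sigmoid applied to the objectness and class channels before thresholding. A minimal sketch with made-up logit values (the channel layout [x, y, w, h, obj, classes...] is an assumption about this export):

```python
import numpy as np

def sigmoid(x):
    # Map raw logits to (0, 1) probabilities
    return 1.0 / (1.0 + np.exp(-x))

# Fake prediction row: [x, y, w, h, obj, class0, class1]
row = np.array([0.2, 0.3, 0.5, 0.5, -1.4, -20.0, -30.0])
obj = sigmoid(row[4])   # objectness logit -> probability (~0.198)
cls = sigmoid(row[5:])  # class logits -> near-zero probabilities
print(obj)
print(cls)
```

A score of -1.4 becomes roughly 0.2 after sigmoid, and -20/-30 become effectively zero, which would explain why the raw numbers look nonsensical as scores.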
So I am not sure how to export the model from eIQ as a .tflite that will run against label_image.py. Am I missing a quantization step? And should I be converting to int8 inputs and outputs if I want to run the trained model on the NPU?
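For what it's worth, if the export does end up fully int8, the input has to be quantized with the tensor's scale/zero-point and the output dequantized back to float. A sketch of that round trip; the scale and zero-point below are made-up values (the real ones come from `interpreter.get_input_details()[0]["quantization"]`):

```python
import numpy as np

# Hypothetical quantization parameters (uint8 example; an int8 tensor
# would clip to -128..127 and typically use a different zero point)
scale, zero_point = 0.0078125, 128

def quantize(x, scale, zp):
    # float -> quantized integer
    q = np.round(x / scale) + zp
    return np.clip(q, 0, 255).astype(np.uint8)

def dequantize(q, scale, zp):
    # quantized integer -> float
    return (q.astype(np.float32) - zp) * scale

x = np.array([0.0, 0.5, -0.5], dtype=np.float32)
q = quantize(x, scale, zero_point)
print(q)
print(dequantize(q, scale, zero_point))
```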
For reference, I am running this on a NavQ+ (i.MX 8M Plus), and I was able to successfully run the TFLite Grace Hopper example on the NPU (via external delegate):
@hovergames3