Hello,
I am using iMX93 with SDK Linux_6.1.55_2.2.0.
I am encountering an issue while running a TensorFlow Lite object detection model on a custom board with the iMX93 module. When I attempt to start inference with this model, the iMX93 hangs.
Additionally, I have observed that the device consistently hangs whenever I access the NPU, even when using the inference_runner and interpreter_runner utilities provided by NXP.
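To help narrow this down, here is a small check I would run first. It only verifies that the Ethos-U kernel driver has created its device node; the `/dev/ethosu0` path is an assumption based on the default i.MX93 BSP layout. If the node is missing, the delegate has no driver to talk to, which can show up as a hang rather than a clean error:

```python
import os

# Assumed device node for the Ethos-U65 driver on the i.MX93 BSP.
# If this path differs on your image, adjust it accordingly.
node = "/dev/ethosu0"

if os.path.exists(node):
    print(f"{node} present - ethosu kernel driver appears loaded")
else:
    # Missing node suggests the driver did not probe; check lsmod and dmesg.
    print(f"{node} missing - check 'lsmod | grep ethosu' and dmesg output")
```

If the node is absent, the dmesg output around the ethosu probe would be the next thing to look at.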
Do you have any ideas or suggestions on what could be causing this problem?
Example usage:
interpreter = tflite.Interpreter(
    model_path="model.tflite",
    experimental_delegates=[tflite.load_delegate("/usr/lib/libethosu_delegate.so")])
interpreter.allocate_tensors()
tflite.Interpreter() returns successfully, but the call to allocate_tensors() hangs the CPU.
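One way to isolate whether the hang is specific to the Ethos-U path is to build the interpreter with and without the delegate and compare. This is only a sketch assuming the standard tflite_runtime API; the import is guarded so the helper can be defined even on a host without tflite_runtime installed:

```python
# Sketch: toggle the NPU delegate to see whether CPU-only inference works.
try:
    import tflite_runtime.interpreter as tflite
except ImportError:
    tflite = None  # tflite_runtime not installed on this host

def make_interpreter(model_path, use_npu=True):
    """Build an interpreter; with use_npu=False no delegate is loaded,
    so the model runs entirely on the Cortex-A cores."""
    delegates = []
    if use_npu:
        # Assumed delegate path from the i.MX93 BSP rootfs.
        delegates = [tflite.load_delegate("/usr/lib/libethosu_delegate.so")]
    return tflite.Interpreter(model_path=model_path,
                              experimental_delegates=delegates)
```

If `make_interpreter("model.tflite", use_npu=False)` followed by allocate_tensors() completes while the NPU variant hangs, that would point at the delegate or the ethosu driver rather than the model itself.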
Thank you in advance!