Hello team,
I have a custom TFLite model for object detection, and I want to run inference with it on an i.MX8M Plus board.
The Python script I have written performs inference with the default delegate ("XNNPACK delegate for CPU").
I would like to run the inference on the NPU instead. I tried switching to libvx_delegate when creating the tflite.Interpreter in my script, as shown:
delegate = tflite.load_delegate('/usr/lib/libvx_delegate.so')
ModelInterpreter = tflite.Interpreter(model_path=ModelPath, experimental_delegates=[delegate])
However, when I print the delegate I used, it only shows the generic Python object representation:
Delegate used : <tflite_runtime.interpreter.Delegate object at 0xffff70472390>
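For context, here is a minimal sketch of how I structured the interpreter setup. It assumes tflite_runtime is installed on the board and that the BSP ships the VX delegate at /usr/lib/libvx_delegate.so (the path used above); the make_interpreter helper name is my own, not from any library:

```python
import os

# Assumption (board-specific): path where the NXP BSP installs the VX delegate.
DELEGATE_PATH = "/usr/lib/libvx_delegate.so"

def make_interpreter(model_path, delegate_path=DELEGATE_PATH):
    """Create a TFLite interpreter that loads the VX delegate when the
    library is present, falling back to the default CPU path otherwise."""
    # Imported inside the function so the sketch can still be read/imported
    # on a host machine that does not have tflite_runtime installed.
    import tflite_runtime.interpreter as tflite

    delegates = []
    if os.path.exists(delegate_path):
        delegates.append(tflite.load_delegate(delegate_path))
    interpreter = tflite.Interpreter(model_path=model_path,
                                     experimental_delegates=delegates)
    interpreter.allocate_tensors()
    return interpreter
```

Note that printing the delegate object only shows its Python repr, so that output by itself does not tell me whether the graph was actually delegated to the NPU or not.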
What should I change or add in my Python script so that inference actually runs on the NPU?
Thanks in advance!