I was using a TFLite model to run object detection and, as per the documentation, I ran it with libvx_delegate.so. However, the warm-up time is very high, and for videos this makes per-frame inference too slow.
So is there any other way to use the NPU on this device?
Hi @Vijay_hegde!
Thank you for contacting NXP Support!
The only way to use our NPU is using libvx_delegate.so.
You can refer to our i.MX Machine Learning User's Guide to learn more about it.
https://www.nxp.com/docs/en/user-guide/IMX-MACHINE-LEARNING-UG.pdf
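For reference, attaching the VX delegate from Python typically looks like the sketch below. The delegate path used here is the usual location on i.MX Yocto images and the function name is just illustrative; adjust both for your setup.

```python
def make_npu_interpreter(model_path, delegate_path="/usr/lib/libvx_delegate.so"):
    """Build a TFLite interpreter that offloads supported ops to the NPU
    through the VX delegate. delegate_path is an assumed install location;
    check where libvx_delegate.so lives on your image."""
    import tflite_runtime.interpreter as tflite  # on-target TFLite runtime
    delegates = [tflite.load_delegate(delegate_path)]
    interpreter = tflite.Interpreter(
        model_path=model_path,
        experimental_delegates=delegates)
    interpreter.allocate_tensors()
    return interpreter
```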
Best Regards!
Chavira
Hi @Vijay_hegde!
How are you loading the external delegate in your .py?
You can check one of our examples to apply the proper changes to your program.
Best Regards!
Chavira
I have referred to it, and I am loading the external delegate with experimental_delegates=[tflite.load_delegate(...)].
I can see the model running on the delegate, but for video it is still very slow.
Hi @Vijay_hegde!
To load the external delegate, we use the following code:
# load external delegate
if args.ext_delegate is not None:
    print('Loading external delegate from {} with args: {}'.format(
        args.ext_delegate, ext_delegate_options))
    ext_delegate = [
        tflite.load_delegate(args.ext_delegate, ext_delegate_options)
    ]

interpreter = tflite.Interpreter(
    model_path=args.model_file,
    experimental_delegates=ext_delegate,
    num_threads=args.num_threads)
interpreter.allocate_tensors()
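Regarding the slow video case: the long warm-up comes from the delegate compiling the graph for the NPU on the first invoke(). Pay that cost once at startup and then reuse the same interpreter for every frame instead of recreating it. A rough sketch (the interpreter is assumed to be created as above; set_tensor/get_tensor calls are elided):

```python
import time

def run_video(interpreter, frames):
    """Run inference on a stream of frames, reusing one interpreter.
    The first invoke() triggers graph compilation (the warm-up cost),
    so it is done once, outside the timed per-frame loop."""
    interpreter.invoke()  # warm-up: compile once before processing frames
    per_frame = []
    for frame in frames:
        # interpreter.set_tensor(input_index, frame) would go here
        t0 = time.monotonic()
        interpreter.invoke()
        per_frame.append(time.monotonic() - t0)
    return per_frame
```

After the warm-up invoke, the per-frame times should reflect only the steady-state NPU inference latency.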
You can check the list of supported operators in the link below:
https://github.com/nxp-imx/tflite-vx-delegate-imx/blob/lf-6.6.23_2.0.0/op_status.md
Best Regards!
Chavira