Run my own model on NPU

Hadrien
Contributor I

Hello to all,

I am new to the i.MX 8, and I am trying to run my own acoustic source recognition model on the NPU.

I can run the example network provided with the Yocto image on both the CPU and the NPU. However, my own network seems to run only on the CPU. When I run it with the -e /usr/lib/libvx_delegate.so option, the message "INFO: Created TensorFlow Lite XNNPACK delegate for CPU." appears. The inference works, but I see no reduction in computation time compared to running on the CPU, so I deduce that the network is still executing on the CPU.
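
One way to rule out a silent fallback is to load the delegate explicitly from Python, where a load failure raises an error instead of quietly handing the graph to XNNPACK. A minimal sketch, assuming the standard tflite_runtime package; the model file name is a placeholder:

import tflite_runtime.interpreter as tflite

# load_delegate() raises a ValueError if the library cannot be loaded,
# so a silent CPU fallback becomes an explicit error.
vx_delegate = tflite.load_delegate("/usr/lib/libvx_delegate.so")

interpreter = tflite.Interpreter(
    model_path="my_acoustic_model.tflite",  # placeholder model name
    experimental_delegates=[vx_delegate],
)
interpreter.allocate_tensors()

If load_delegate() raises, the delegate library itself is the problem; if it loads cleanly, any fallback is happening at the level of individual ops, since ops the delegate does not support stay on the CPU.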

Do you know how I can verify this hypothesis, and how to force execution on the NPU?
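
To check the hypothesis, one option is to time steady-state inference with and without the delegate and compare. A sketch along these lines, again assuming tflite_runtime; mean_latency_ms and the model file name are placeholders. The first invoke() with the VX delegate is excluded from the measurement, since on the NPU it includes the one-time graph compilation and can take much longer than subsequent runs:

import time
import numpy as np
import tflite_runtime.interpreter as tflite

def mean_latency_ms(delegates, model="my_acoustic_model.tflite", runs=20):
    # Build an interpreter with the given delegates (empty list = CPU only).
    interp = tflite.Interpreter(model_path=model,
                                experimental_delegates=delegates)
    interp.allocate_tensors()
    inp = interp.get_input_details()[0]
    interp.set_tensor(inp["index"], np.zeros(inp["shape"], dtype=inp["dtype"]))
    interp.invoke()  # warm-up run: with the NPU this includes graph compilation
    t0 = time.monotonic()
    for _ in range(runs):
        interp.invoke()
    return (time.monotonic() - t0) / runs * 1000.0

cpu_ms = mean_latency_ms([])
npu_ms = mean_latency_ms([tflite.load_delegate("/usr/lib/libvx_delegate.so")])
print(f"CPU: {cpu_ms:.1f} ms/inference, VX delegate: {npu_ms:.1f} ms/inference")

A large gap between the two numbers would indicate the NPU is really being used; nearly identical numbers would support the CPU-fallback hypothesis. As far as I understand, one common cause on this platform is the model's data type: the NPU targets quantized (int8/uint8) models, so a float32 model may run mostly on the CPU even when the delegate loads.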

Thanks in advance,
