tensorflow lite using NPU acceleration in imx8mp with vanilla kernel and poky distribution


Contributor III

I would like to use the NPU of the i.MX 8M Plus. I am not using NXP's reference distro fsl-imx-wayland - we don't have a graphical system, just an embedded device with nothing but an Ethernet output.

To reduce the footprint I use an adapted core-image-minimal Poky distro. I was able to add the tensorflow-lite package, but what exactly is needed to get it accelerated by the built-in NPU? Is it the armnn-imx repo? Anything else needed to resolve dependencies?

From another post it seems that gpu-viv is needed in the kernel (and it is part of the device tree). Is this right - even if computation is done on the NPU rather than the GPU? At least I can't find a dedicated NPU kernel driver. Can you advise?


NXP TechSupport

Hello arno_0,


For machine learning and NPU support:

eIQ is provided in a Yocto layer called meta-imx/meta-ml.

The eIQ software based on NXP BSP L5.4.70-2.3.1 also offers support for the following AI frameworks, for which we will add instructions soon:

  • PyTorch 1.6.0
  • Arm Compute Library 20.02.01
  • Arm NN 20.01

All the AI runtimes provided by eIQ (except OpenCV, as documented in the i.MX Machine Learning User's Guide) support OpenVX (GPU/NPU) as a backend.

You can find more detailed information on the features of eIQ for each specific version in the i.MX Machine Learning User's Guide, available in NXP's Embedded Linux Documentation. See the version-specific information in the links in the table above.

You can also adapt the instructions to build on newer versions of BSP / meta-ml.

Git clone the meta-imx repository into your ~/yocto-ml-build/ directory:

$ git clone -b zeus-5.4.70-2.3.1 git://source.codeaurora.org/external/imx/meta-imx ~/yocto-ml-build/meta-imx

Copying the recipes to your environment

First, create a layer named meta-ml, add it to your environment and remove the example recipe:

$ bitbake-layers create-layer ../layers/meta-ml
$ bitbake-layers add-layer ../layers/meta-ml
$ rm -rf ../layers/meta-ml/recipes-example

Copy the recipes from meta-imx to your layer.

$ cp -r ../../meta-imx/meta-ml/recipes-* ../layers/meta-ml/
$ cp -r ../../meta-imx/meta-ml/classes/ ../layers/meta-ml/
$ cp -r ../../meta-imx/meta-bsp/recipes-support/opencv ../layers/meta-ml/recipes-libraries/
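With the recipes copied in, the eIQ tensorflow-lite package can be pulled into the image via conf/local.conf. A minimal sketch (the `_append` syntax matches zeus-era BitBake as used here; newer releases use `IMAGE_INSTALL:append` instead):

```
# conf/local.conf (excerpt)
# Pull the eIQ-enabled tensorflow-lite recipe from meta-ml into the image
IMAGE_INSTALL_append = " tensorflow-lite"
```

After that, a plain `bitbake core-image-minimal` (or whatever your image target is) should build tensorflow-lite from the copied meta-ml recipe rather than from an upstream layer.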



Contributor III

Thanks for this suggestion. For reference purposes (I do have an EVK board) I have already cloned and built the complete NXP reference distro (5.10). So - I am not sure about it, but isn't that the same as just including

/home/user/y/imx-nxp-5.10.35/sources/meta-freescale \
/home/user/y/imx-nxp-5.10.35/sources/meta-imx/meta-ml \

into my project's bblayers.conf?

The first question for me is: what exactly has to be built if I just want to use an NPU-accelerated tensorflow-lite? My guess would be to add tensorflow-lite (of course) and, according to the linked doc, OpenVX with NN extensions? But what do I bitbake for that? I think "bitbake tensorflow-lite" creates a non-accelerated lib, doesn't it? I don't need OpenCV or Python ...
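Not part of the original thread, but one common way to check on the target whether the built library actually offloads to the NPU is TensorFlow Lite's benchmark_model tool with the delegate shipped by meta-ml. A sketch (model file and delegate path are examples; `libvx_delegate.so` applies to newer BSPs, while the L5.4-era eIQ releases used the NNAPI delegate instead):

```
# On the i.MX 8M Plus target: run a model once on the CPU and once via the
# OpenVX/NPU delegate, then compare the reported inference times.
./benchmark_model --graph=mobilenet_v1_1.0_224_quant.tflite
./benchmark_model --graph=mobilenet_v1_1.0_224_quant.tflite \
    --external_delegate_path=/usr/lib/libvx_delegate.so
```

If the second run is dramatically faster (and the galcore driver logs show activity), the NPU is being used; if both times are similar, the delegate did not load.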


