tensorflow lite using NPU acceleration in imx8mp with vanilla kernel and poky distribution


1,894 Views
arno_0
Contributor III

I would like to use the NPU of the i.MX8MP. I am not using the reference distro fsl-imx-wayland by NXP - we don't have a graphical system, just an embedded device with no output other than Ethernet.

To reduce the footprint I use an adapted core-image-minimal Poky distro. I was able to add the tensorflow-lite package, but what exactly is needed to get it accelerated by the built-in NPU? Is it the armnn-imx repo? Anything else needed to resolve dependencies?

From the other post it seems that gpu-viv is needed in the kernel (and it is part of the device tree). Is this right, even if computation is done on the NPU rather than the GPU? At least I can't find a dedicated NPU kernel driver. Can you advise?
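For reference, this is how I currently sanity-check the driver on the target. It assumes (as I understand the NXP BSP) that the NPU is served by the same Vivante "galcore" driver as the GPU, so there is one shared device node; the path is my assumption:

```shell
# Sanity check (assumption: on the i.MX8MP BSP the NPU shares the Vivante
# "galcore" driver and device node with the GPU - no separate NPU node).
check_node() {
    if [ -e "$1" ]; then
        echo "present"
    else
        echo "absent"
    fi
}

# On the target this should report "present" once the driver is loaded.
check_node /dev/galcore
```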

3 Replies

847 Views
arno_0
Contributor III

Sorry for the long silence. I got it running by adding the layers. The thing is: the tensorflow-lite recipe has quite a lot of dependencies and will install a lot.

Is all of this really needed? I don't use Python at all, for example. Is it needed for the build?

DEPENDS = "flatbuffers python3-numpy-native python3-pip-native python3-pybind11-native python3-wheel-native unzip-native \
python3 tensorflow-protobuf jpeg zlib ${BPN}-host-tools-native"


RDEPENDS:${PN} = " \
python3 \
python3-numpy \
${RDEPENDS_OPENVX} \
"

How can I strip it down? In the end I want to use it via the C++ interface only, but accelerated. So I need

tensorflow-lite-vx-delegate, which needs tim-vx, which needs imx-gpu-viv and nn-imx (really???) ...
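What I am experimenting with to cut out the Python parts is a bbappend along these lines (a sketch only, based on the dependency lists above - the file name is illustrative, and whether the build still succeeds without pybind11 depends on the recipe's CMake configuration):

```
# tensorflow-lite_%.bbappend (experimental sketch)
# Drop the Python bindings and their build toolchain from the dependency
# lists quoted above; the C++ runtime should not need them.
DEPENDS:remove = "python3 python3-numpy-native python3-pip-native \
                  python3-pybind11-native python3-wheel-native"
RDEPENDS:${PN}:remove = "python3 python3-numpy"
```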

Another thing: my system is on kirkstone (because of LTS), but I want to use the 6.1 kernel and the latest TensorFlow 2.10. I almost got my Yocto build to compile, but it ends in some meson (!?!?) incompatibility. I don't need that at all. That is another reason why I want to strip it down - not just to avoid installing things, but because compiling also takes a lot of time and pulls in more dependencies. Thank you.


1,891 Views
Bio_TICFSL
NXP TechSupport
NXP TechSupport

Hello arno_0,

 

For machine learning and NPU support:

eIQ is provided on a Yocto layer called meta-imx/meta-ml.

The eIQ software based on NXP BSP L5.4.70-2.3.1 also offers support for the following AI frameworks, for which instructions will be added soon:

  • PyTorch 1.6.0
  • Arm Compute Library 20.02.01
  • Arm NN 20.01

All the AI runtimes provided by eIQ (except OpenCV, as documented in the i.MX Machine Learning User's Guide) support OpenVX (GPU/NPU) on their backend.

You can find more detailed information on the features of eIQ for each specific version in the i.MX Machine Learning User's Guide, available in NXP's Embedded Linux documentation. See the version-specific information in the links in the table above.

You can also adapt the instructions to build on newer versions of BSP / meta-ml.

Git clone the meta-imx repository to your ~/yocto-ml-build/ directory:

$ git clone -b zeus-5.4.70-2.3.1 git://source.codeaurora.org/external/imx/meta-imx ~/yocto-ml-build/meta-imx

Copying the Recipes to your environment

First, create a layer named meta-ml, add it to your environment and remove the example recipe:

$ bitbake-layers create-layer ../layers/meta-ml
$ bitbake-layers add-layer ../layers/meta-ml
$ rm -rf ../layers/meta-ml/recipes-example

Copy the recipes from meta-imx to your layer.

$ cp -r ../../meta-imx/meta-ml/recipes-* ../layers/meta-ml/
$ cp -r ../../meta-imx/meta-ml/classes/ ../layers/meta-ml/
$ cp -r ../../meta-imx/meta-bsp/recipes-support/opencv ../layers/meta-ml/recipes-libraries/
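To actually get the packages into your image, a local.conf fragment like the following can be used (a sketch - the package names come from meta-ml and differ between releases; the tensorflow-lite-vx-delegate recipe only exists on newer branches, and on zeus the older IMAGE_INSTALL_append spelling applies):

```
# local.conf (sketch) - install the TFLite runtime and, on newer branches,
# the OpenVX delegate that targets the NPU. Verify the exact package names
# against your meta-ml revision.
IMAGE_INSTALL:append = " tensorflow-lite tensorflow-lite-vx-delegate"
```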

Regards


1,887 Views
arno_0
Contributor III

Thanks for this suggestion. For reference purposes (I do have an EVK board) I have already cloned and built the complete NXP reference distro (5.10). So, I am not sure about this, but isn't that the same as just including

/home/user/y/imx-nxp-5.10.35/sources/meta-freescale \
/home/user/y/imx-nxp-5.10.35/sources/meta-imx/meta-ml \

into my project's bblayers.conf?

The first question for me is: what exactly has to be built if I just want to use an NPU-accelerated tensorflow-lite? My guess would be to add tensorflow-lite (of course) and, according to the linked doc, OpenVX with NN extensions? But what do I bitbake for that? I think "bitbake tensorflow-lite" creates a non-accelerated lib, doesn't it? I don't need opencv or python ...
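For the C++ side, what I have in mind is TF Lite's external-delegate API, which loads the VX delegate shared library at runtime and hands the supported parts of the graph to the NPU. A sketch of the application code - the library path /usr/lib/libvx_delegate.so is my assumption and depends on where the delegate package installs it:

```cpp
// Sketch: load a .tflite model and apply the OpenVX (NPU) delegate via
// TF Lite's external-delegate API. The .so path is an assumption - check
// where tensorflow-lite-vx-delegate actually installs the library.
#include <cstdio>
#include <memory>
#include "tensorflow/lite/interpreter.h"
#include "tensorflow/lite/kernels/register.h"
#include "tensorflow/lite/model.h"
#include "tensorflow/lite/delegates/external/external_delegate.h"

int main() {
  auto model = tflite::FlatBufferModel::BuildFromFile("model.tflite");
  if (!model) { std::fprintf(stderr, "model load failed\n"); return 1; }

  tflite::ops::builtin::BuiltinOpResolver resolver;
  std::unique_ptr<tflite::Interpreter> interpreter;
  tflite::InterpreterBuilder(*model, resolver)(&interpreter);

  // Load the VX delegate; ops it supports are then compiled for the NPU,
  // the rest falls back to the CPU kernels.
  TfLiteExternalDelegateOptions options =
      TfLiteExternalDelegateOptionsDefault("/usr/lib/libvx_delegate.so");
  TfLiteDelegate* delegate = TfLiteExternalDelegateCreate(&options);
  if (interpreter->ModifyGraphWithDelegate(delegate) != kTfLiteOk) {
    std::fprintf(stderr, "delegate rejected, running on CPU\n");
  }

  interpreter->AllocateTensors();
  interpreter->Invoke();
  // ... read output tensors here ...

  interpreter.reset();  // destroy the interpreter before the delegate
  TfLiteExternalDelegateDelete(delegate);
  return 0;
}
```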
