Can I install TensorFlow for Python 3 on the i.MX8MM EVK board?

718 Views
Contributor III

Hello community.
I need to install TensorFlow for Python 3 to run on the i.MX8MM EVK board.
After following the NXP eIQ™ Machine Learning user guide at https://www.nxp.com/docs/en/nxp/user-guides/UM11226.pdf, I installed TensorFlow, ran the benchmark, and built the example from sources successfully.

But "python3 import tensorflow as tf" had the error:
root@imx8mmevk:~/bazel# python3
Python 3.5.5 (default, Nov 6 2019, 02:53:57)
[GCC 7.3.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import tensorflow
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ImportError: No module named 'tensorflow'

I tried "pip3 install tensorflow", but it failed:
root@imx8mmevk:~/bazel# pip3 install tensorflow
ERROR: Could not find a version that satisfies the requirement tensorflow (from versions: none)
ERROR: No matching distribution found for tensorflow

I also tried to install TensorFlow through Bazel as described in the link below, but it failed while building Bazel:
https://github.com/samjabrahams/tensorflow-on-raspberry-pi/blob/master/GUIDE.md

ERROR: Could not build Bazel

Is there any way to get TensorFlow working with Python 3 on this board?
Thanks so much.

4 Replies

75 Views
NXP Employee

Hi,

eIQ only supports the C++ API for TFLite. Support for building a custom C++ app with TensorFlow will be available in the next Yocto BSP release. There is currently no eIQ support for the Python API for TensorFlow or TFLite.

Is Python TensorFlow a must on your side? For running inference, TFLite is better suited for embedded. You can cross-compile a custom C++ app with eIQ TFLite support using the Yocto toolchain.

If you need the TensorFlow Python API just to create or adjust a model, this can be done on a host PC with the pure TensorFlow Python API; no need for eIQ for this step. It is recommended to use the same version used in eIQ (1.12) to avoid compatibility issues when running the inference.
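
For illustration, a minimal host-side training sketch, assuming TensorFlow 1.12 installed on the host PC (the toy model, dataset, and file names are just my own example, not something prescribed by eIQ):

import tensorflow as tf  # pip3 install tensorflow==1.12.0 on the host PC, not the board

# Toy example: train a tiny MNIST classifier with the Keras API bundled in TF 1.12.
(x_train, y_train), _ = tf.keras.datasets.mnist.load_data()
x_train = x_train / 255.0

model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(128, activation=tf.nn.relu),
    tf.keras.layers.Dense(10, activation=tf.nn.softmax),
])
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy'])
model.fit(x_train, y_train, epochs=1)
model.save('model.h5')  # full model in HDF5 format (needs h5py); converted to .tflite later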

Regards,

Raluca

75 Views
Contributor III

Thanks for your help.
I have no knowledge of deep learning training or inference. My task in the project is to prepare a Python TensorFlow development environment for the i.MX8MM board.
Excuse me, but could you explain more about:
"If you need the TensorFlow Python API just to create or adjust a model, this can be done on a host PC with the pure TensorFlow Python API; no need for eIQ for this step. It is recommended to use the same version used in eIQ (1.12) to avoid compatibility issues when running the inference."
Does that mean I can train a model on a host PC (like Ubuntu) with Python TensorFlow 1.12 and then run inference with the trained model on the i.MX8MM EVK board? In that case, must the inference app be written in C++ using the TensorFlow library built from eIQ?
Regards


75 Views
NXP Employee

Hi,

Exactly, training can be done on the host PC (like Ubuntu) with Python TensorFlow 1.12.

Depending on what you want to achieve with your application, there is also the option to use a pre-trained model (see TensorFlow Models and TensorFlow Lite models), or you can start with a pre-trained model and use transfer learning to specialize it for your use case; check this community post: https://community.nxp.com/docs/DOC-343848

eIQ handles the inference. For embedded systems, it is recommended to convert the TensorFlow model to TFLite. There are several ways to do this:
- Quantization aware training: https://github.com/tensorflow/tensorflow/tree/r1.12/tensorflow/contrib/quantize

- Convert to tflite and quantize the model post-training (a sketch follows this list); check this post: https://community.nxp.com/community/eiq/blog/2019/07/15/eiq-sample-apps-tflite-quantization

- Convert to tflite post training without quantization (set converter.post_training_quantize = False in the previous post)
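
As a rough sketch of the last two options, assuming TensorFlow 1.12 on the host PC and the hypothetical 'model.h5' file from the training sketch above:

import tensorflow as tf  # TensorFlow 1.12 on the host PC

# In TF 1.12 the converter lives under tf.contrib.lite.
converter = tf.contrib.lite.TFLiteConverter.from_keras_model_file('model.h5')
converter.post_training_quantize = True  # set to False to convert without quantization
tflite_model = converter.convert()

with open('model.tflite', 'wb') as f:
    f.write(tflite_model)  # copy model.tflite to the i.MX8MM board for inference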

To run TFLite inference with eIQ on the board, there are currently two options:
- TFLite runtime: see ref manual 7.2.2, "Building example from sources".
- Arm NN runtime: see ref manual 8.2, "Using Arm NN in a custom C/C++ application".

This isn't a hard rule, but in most cases you will get the best performance with quantization-aware training and running inference with Arm NN.
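
One extra tip, not strictly required: before deploying, you can sanity-check the converted model on the host PC with the Python TFLite interpreter that ships inside TF 1.12 (host-side only; on the board, eIQ exposes only the C++ API as noted above). A minimal sketch, reusing the hypothetical 'model.tflite' file name:

import numpy as np
import tensorflow as tf  # TensorFlow 1.12 on the host PC

interpreter = tf.contrib.lite.Interpreter(model_path='model.tflite')
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

# Run a dummy input through the model just to confirm it loads and invokes cleanly.
interpreter.set_tensor(inp['index'], np.zeros(inp['shape'], dtype=inp['dtype']))
interpreter.invoke()
print(interpreter.get_tensor(out['index']))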

Regards,

Raluca

75 Views
NXP Apps Support

nxa06357, can you please help here?