Hi guys,
I am trying to convert our YOLOv5 model to TFLite. My understanding is that, to do this properly, I first need to build the TensorFlow fork located here: https://github.com/nxp-imx/tensorflow-imx/tree/lf-6.6.23_2.0.0
The problem is that this fork does not build, due to a large number of issues. My question is: if I only want to convert my model to TFLite, do I still need to use the custom version of TensorFlow, or can I use the official release? Does the converter from the fork contain special code tailored to the i.MX?
Additional question: why is there no way to ask for help on the Git repo itself? There is no Issues tab, as there usually is on other repositories.
Thanks for any help.
Hi
To convert the model, you can try the eIQ Toolkit.
The tensorflow-imx fork is used to build the TensorFlow libraries themselves.
Best Regards
Zhiming
Hello @Zhiming_Liu ,
I've installed the eIQ Toolkit on my Ubuntu 22.04 machine.
This is what I get when executing eiq-portal:
wodzu@wodzu-Legion-Pro-7-16IRX8H:/opt/nxp/eIQ_Toolkit_v1.12.1$ ./eiq-portal
eIQ Portal version 2.12.2
------------------------------------------------
/opt/nxp/eIQ_Toolkit_v1.12.1/resources/app.asar
------------------------------------------------
Launch -> /opt/nxp/eIQ_Toolkit_v1.12.1
is-elevated: false
13:23:55.259 › Launching Application
libva error: /usr/lib/x86_64-linux-gnu/dri/i965_drv_video.so init failed
Display size is 2560x1600
[2024-09-01T11:23:55.386Z] ExtensionHostController running as pid 48446
[2024-09-01T11:23:55.646Z] Socket.io logging server connected to socket dkltRu4blr5ohar3AAAB
(buffered) [2024-09-01T11:23:55.540Z] ExtensionHost running as pid 48597
libva error: /usr/lib/x86_64-linux-gnu/dri/i965_drv_video.so init failed
13:23:55.741 › [CONVERTER] Traceback (most recent call last):
File "/opt/nxp/eIQ_Toolkit_v1.12.1/python/lib/python3.10/runpy.py", line 187, in _run_module_as_main
mod_name, mod_spec, code = _get_module_details(mod_name, _Error)
File "/opt/nxp/eIQ_Toolkit_v1.12.1/python/lib/python3.10/runpy.py", line 110, in _get_module_details
__import__(pkg_name)
File "/opt/nxp/eIQ_Toolkit_v1.12.1/python/lib/python3.10/site-packages/eiq/modelserver/__init__.py", line 4, in <module>
import tensorflow as tf
File "/opt/nxp/eIQ_Toolkit_v1.12.1/python/lib/python3.10/site-packages/tensorflow/__init__.py", line 48, in <module>
from tensorflow._api.v2 import __internal__
File "/opt/nxp/eIQ_Toolkit_v1.12.1/python/lib/python3.10/site-packages/tensorflow/_api/v2/__internal__/__init__.py", line 8, in <module>
from tensorflow._api.v2.__internal__ import autograph
File "/opt/nxp/eIQ_Toolkit_v1.12.1/python/lib/python3.10/site-packages/tensorflow/_api/v2/__internal__/autograph/__init__.py", line 8, in <module>
from tensorflow.python.autograph.core.ag_ctx import control_status_ctx # line: 34
File "/opt/nxp/eIQ_Toolkit_v1.12.1/python/lib/python3.10/site-packages/tensorflow/python/autograph/core/ag_ctx.py", line 21, in <module>
13:23:55.742 › [CONVERTER] from tensorflow.python.autograph.utils import ag_logging
File "/opt/nxp/eIQ_Toolkit_v1.12.1/python/lib/python3.10/site-packages/tensorflow/python/autograph/utils/__init__.py", line 17, in <module>
from tensorflow.python.autograph.utils.context_managers import control_dependency_on_returns
File "/opt/nxp/eIQ_Toolkit_v1.12.1/python/lib/python3.10/site-packages/tensorflow/python/autograph/utils/context_managers.py", line 19, in <module>
from tensorflow.python.framework import ops
File "/opt/nxp/eIQ_Toolkit_v1.12.1/python/lib/python3.10/site-packages/tensorflow/python/framework/ops.py", line 40, in <module>
from tensorflow.python import pywrap_tensorflow
File "/opt/nxp/eIQ_Toolkit_v1.12.1/python/lib/python3.10/site-packages/tensorflow/python/pywrap_tensorflow.py", line 17, in <module>
import ctypes
File "/opt/nxp/eIQ_Toolkit_v1.12.1/python/lib/python3.10/ctypes/__init__.py", line 8, in <module>
from _ctypes import Union, Structure, Array
ImportError: libffi.so.7: cannot open shared object file: No such file or directory
13:23:55.766 › eiq-converter terminated
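For what it's worth, the final ImportError above means the toolkit's bundled Python 3.10 was linked against libffi.so.7, while Ubuntu 22.04 ships libffi8. A common workaround (not an official NXP fix; the exact package filename in the URL is an assumption and should be checked against the Ubuntu archive) is to install the older libffi7 package from the focal repositories:

```shell
# Ubuntu 22.04 only provides libffi8; the eIQ-bundled Python needs libffi.so.7.
# Download and install libffi7 from the Ubuntu 20.04 (focal) pool.
# NOTE: verify the current .deb filename at https://packages.ubuntu.com first.
wget http://archive.ubuntu.com/ubuntu/pool/main/libf/libffi/libffi7_3.3-4_amd64.deb
sudo dpkg -i libffi7_3.3-4_amd64.deb

# Verify that the dynamic loader can now resolve the library:
ldconfig -p | grep libffi.so.7
```

After installing, re-run eiq-portal; if the converter still fails to start, a support ticket is the safer route.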
I appreciate your answer, although you did not really answer my question. So let me rephrase it:
1. What do I need to do to get help building the TensorFlow fork used by the i.MX 8M Plus processor? Do I need to open a support ticket?
2. What do I need to do to get help building the Yocto image?
Neither of them currently builds, due to a number of issues, which is unacceptable. I should be developing my AI model, but instead I am spending days trying to compile these libraries.
Just tell me who can help me. Thank you.
Hi
All necessary libraries are included in the demo images and in the Yocto imx-image-full image. You have to use the imx-image-full image, so you don't need to compile them yourself. To convert a model, NXP provides a model tool in the eIQ Toolkit that helps customers convert models (.pb/.onnx) to .tflite; please install eIQ and refer to the eIQ User Guide.
To run the model on the board, please refer to Section 2, TensorFlow Lite, in IMX-MACHINE-LEARNING-UG.pdf:
https://www.nxp.com.cn/docs/en/user-guide/IMX-MACHINE-LEARNING-UG.pdf
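For reference, Section 2 of that guide runs the prebuilt TensorFlow Lite example binaries shipped in the BSP image. A minimal sketch, assuming the default install paths from the guide (the model filename is a placeholder, and the exact tensorflow-lite directory name depends on the BSP version):

```shell
# On an i.MX 8M Plus BSP image, the TFLite example binaries are typically
# installed under /usr/bin/tensorflow-lite-<version>/examples.
cd /usr/bin/tensorflow-lite-*/examples

# Benchmark the converted model on the CPU first:
./benchmark_model --graph=yolov5s.tflite

# Then offload supported ops to the NPU through the VX delegate:
./benchmark_model --graph=yolov5s.tflite \
    --external_delegate_path=/usr/lib/libvx_delegate.so
```

Comparing the two runs shows how much of the model actually executes on the NPU.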
Best Regards
Zhiming
All necessary libraries are included in the demo images and in the Yocto imx-image-full image. You have to use the imx-image-full image, so you don't need to compile them yourself.
Could you please point me to the repository containing the Yocto imx-image-full images?
Hi
Prebuilt images: https://www.nxp.com/design/design-center/software/embedded-software/i-mx-software/embedded-linux-for...
If you want to compile the images yourself, please refer to https://www.nxp.com/docs/en/user-guide/IMX_YOCTO_PROJECT_USERS_GUIDE.pdf
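For convenience, the build flow from that guide roughly looks like the sketch below. The manifest branch and XML name are assumptions matching the lf-6.6.23_2.0.0 release; verify them against the guide for your BSP version before running:

```shell
# Sketch of the imx-image-full build flow from the i.MX Yocto Project
# User's Guide. Branch and manifest names below are assumptions for the
# lf-6.6.23_2.0.0 release -- check the guide for your exact BSP version.
mkdir imx-yocto-bsp && cd imx-yocto-bsp
repo init -u https://github.com/nxp-imx/imx-manifest \
    -b imx-linux-scarthgap -m imx-6.6.23-2.0.0.xml
repo sync

# Set up the build environment for the i.MX 8M Plus EVK:
DISTRO=fsl-imx-xwayland MACHINE=imx8mpevk source imx-setup-release.sh -b build

# Build the full image (includes the ML libraries):
bitbake imx-image-full
```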
Best Regards
Zhiming
BTW: I also tried to install this through Yocto, which does not seem to be the best way (250 GB of free space required), but Yocto is also not building because some repository addresses have changed...