Can we run TensorFlow or any other neural-network framework on the i.MX6Q?

Solved

7,519 Views
peteramond
Contributor V

Hi All,

My custom hardware is based on the i.MX6Q processor with 2 GB of memory, and is most similar to the Nitrogen6_MAX development board.

Can we run a TensorFlow application on the i.MX6Q processor? If I build a neural network with Caffe or another framework, can we run that application on this hardware?

Are there any examples of this, or previous experience with the Nitrogen6_MAX development board?

Regards,

Peter. 

Labels (4)
1 Solution
5,061 Views
markus_levy
NXP Employee

Hi.

First of all, TensorFlow is not a neural-network application; it is a framework for neural networks. That said, you can definitely run a trained TensorFlow or Caffe model using the DNN module inside the OpenCV library. Out of the box, it will run on the i.MX6Q and any other i.MX whose CPU supports the Neon extensions. We will soon release an application note that explains how to better optimize the network before using it with OpenCV.

Best regards,

Markus

View solution in original post

10 Replies
5,061 Views
markus_levy
NXP Employee

A common misperception is that an NN model cannot run on edge compute devices. In fact, NXP has tools that allow users to take trained TensorFlow or Caffe models and deploy an inference engine on i.MX6 through i.MX8. Of course, one must always balance performance (inference time) against cost, power, etc. I've even seen TF models optimized to run on Kinetis and i.MX RT (about 5 frames per second, depending on the network).

5,061 Views
balasubramaniyam
Contributor II

Hi Markus,

Which NXP tools allow the user to take trained TF or Caffe models and deploy an inference engine on the i.MX6 (Quad or QuadPlus)?

Thanks & Regards,

Bala

0 Kudos
5,061 Views
markus_levy
NXP Employee

Currently we recommend using OpenCV; it accelerates TF on Arm Neon.

0 Kudos
5,061 Views
balasubramaniyam
Contributor II

Hi Markus,

I have one more doubt: does this work on the i.MX6 (Quad or QuadPlus), or does it only work on the i.MX8?

I ask because the AN12224.pdf document refers only to the i.MX8.

Thanks & Regards,

Bala

0 Kudos
5,061 Views
markus_levy
NXP Employee

It works on any i.MX with Neon acceleration in the CPU. The SE team can port this to any specific BSP. So far it’s ported to 8M, 8QM, and 8MM.
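
Since everything above hinges on the CPU exposing Neon, a quick way to confirm this on a target board is to check the kernel's advertised CPU features. A small sketch (the path and flag names are standard Linux/Arm conventions, not NXP-specific):

```python
# Check whether the running CPU advertises the Neon SIMD extensions,
# which OpenCV's DNN module relies on for acceleration on i.MX parts.
def cpu_has_neon(cpuinfo_path="/proc/cpuinfo"):
    try:
        with open(cpuinfo_path) as f:
            text = f.read().lower()
    except OSError:
        return False
    # 32-bit Arm kernels report "neon" in the Features line;
    # 64-bit (AArch64) kernels report "asimd" (Advanced SIMD) instead.
    return "neon" in text or "asimd" in text

print(cpu_has_neon())
```

On an i.MX6Q running Linux this should print `True`; if it prints `False` on an Arm board, OpenCV will fall back to scalar code and inference will be much slower.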

5,060 Views
voson
Contributor II

The AN12224.pdf document does not list the i.MX8MM as supported.

Could you tell me why?

0 Kudos
5,061 Views
chris_lee
Contributor II

Hi Markus,

Would you please provide information about NXP tools that enable TF on i.MX6?

Thanks!

0 Kudos
5,061 Views
chris_lee
Contributor II

Just to be clear, my interest is in your previous reply:

"NXP has tools that allow user to take trained TF or caffe models and deploy inference engine on i.MX6"

and I'm looking forward to learning how to do it.

Actually, I'm stuck porting darknet-on-opencl to the i.MX6 platform. Here's my post asking for help: i.mx6q darknet opencl error: out of resource

My goal is exactly the same as in your post: "deploy inference engine on i.MX6".

I hope the application note will be released soon!

I would also appreciate being notified as soon as the app note is released.

Thanks.

0 Kudos
5,061 Views
mtx512
Contributor V

In my experience, you can run a pre-trained Caffe or TensorFlow model on the i.MX6Q; however, the biggest hurdle will be performance, as these libraries normally require an external GPU to achieve decent speed. We explored the option of using an external accelerator, i.e. the Movidius Neural Compute Stick; see my post.