
Can we run TensorFlow or any other neural network framework on the i.MX6Q?

Jump to solution
7,359 Views
peteramond
Contributor V

Hi All,

My custom hardware is based on the i.MX6Q processor with 2 GB of memory and is most similar to the Nitrogen6_MAX development board.

Can we run a TensorFlow application on the i.MX6Q processor? If I build a neural network with Caffe or any other framework, can we run that application on this hardware?

Is there any example of this, or previous experience with the Nitrogen6_MAX development board?

Regards,

Peter. 

1 Solution
4,901 Views
markus_levy
NXP Employee
NXP Employee

Hi.

First of all, TensorFlow is not a neural network application; it is a framework for building neural networks. That said, you can definitely run a trained model based on TensorFlow or Caffe by using the DNN module inside the OpenCV library. Out of the box, it will run on the i.MX6Q and any other i.MX containing a CPU that supports the NEON extensions. We will soon release an application note that explains how to better optimize the network before using OpenCV.
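To make this concrete, here is a minimal sketch of loading and running a pre-trained model with OpenCV's DNN module on the i.MX6Q CPU (assuming OpenCV is built with the dnn module; the model and image file names, input size, and mean values are placeholders for your own network):

import cv2

# Load a pre-trained Caffe model (file names are placeholders).
net = cv2.dnn.readNetFromCaffe("deploy.prototxt", "model.caffemodel")
# For a frozen TensorFlow graph, use instead:
# net = cv2.dnn.readNetFromTensorflow("frozen_graph.pb")

# Preprocess: resize to the network's input size and subtract the training mean.
img = cv2.imread("input.jpg")
blob = cv2.dnn.blobFromImage(img, scalefactor=1.0, size=(227, 227),
                             mean=(104, 117, 123))

net.setInput(blob)
out = net.forward()  # inference runs on the Cortex-A9 cores with NEON, no GPU needed
class_id = int(out.flatten().argmax())
print("Top class:", class_id, "score:", float(out.flatten()[class_id]))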

Best regards,

Markus

View solution in original post

10 Replies
4,901 Views
markus_levy
NXP Employee
NXP Employee

A common misperception is that a neural network model cannot run on edge compute devices. In fact, NXP has tools that allow users to take trained TensorFlow or Caffe models and deploy an inference engine on i.MX6 through i.MX8. Of course, one must always balance performance (inference time) against cost, power, etc. I've even seen TF models optimized to run on Kinetis and i.MX RT (about 5 frames per second, depending on the network).
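For a rough idea of where a given model lands on that performance trade-off, OpenCV's DNN module can report the time spent in the last forward pass (a sketch only; the model, image, input size, and mean values are placeholders):

import cv2

net = cv2.dnn.readNetFromCaffe("deploy.prototxt", "model.caffemodel")
blob = cv2.dnn.blobFromImage(cv2.imread("input.jpg"), size=(227, 227),
                             mean=(104, 117, 123))
net.setInput(blob)
net.forward()
# getPerfProfile() reports the time spent in the last forward() pass, in ticks.
ticks, _ = net.getPerfProfile()
print("Inference time: %.1f ms" % (ticks * 1000.0 / cv2.getTickFrequency()))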

4,901 Views
balasubramaniyam
Contributor II

Hi Markus,

Which NXP tools allow the user to take trained TensorFlow or Caffe models and deploy an inference engine on the i.MX6 (Quad or QuadPlus)?

Thanks & Regards,

Bala

4,901 Views
markus_levy
NXP Employee
NXP Employee

Currently we recommend using OpenCV; its DNN module runs TensorFlow models with Arm NEON acceleration.

4,901 Views
balasubramaniyam
Contributor II

Hi Markus,

I have one more question: does this work on the i.MX6 (Quad or QuadPlus), or does it only work on the i.MX8?

I ask because the application note AN12224.pdf refers only to the i.MX8.

Thanks & Regards,

Bala

4,901 Views
markus_levy
NXP Employee
NXP Employee

It works on any i.MX with NEON acceleration in the CPU. The SE team can port this to any specific BSP. So far it has been ported to the i.MX8M, i.MX8QM, and i.MX8MM.

4,900 Views
voson
Contributor II

AN12224.pdf does not list the i.MX8MM as supported.

Could you tell me why?

4,901 Views
chris_lee
Contributor II

Hi Markus,

Would you please provide information about the NXP tools that enable TensorFlow on the i.MX6?

Thanks!

4,902 Views
markus_levy
NXP Employee
NXP Employee

Hi.

First of all, TensorFlow is not a neural network application; it is a framework for building neural networks. That said, you can definitely run a trained model based on TensorFlow or Caffe by using the DNN module inside the OpenCV library. Out of the box, it will run on the i.MX6Q and any other i.MX containing a CPU that supports the NEON extensions. We will soon release an application note that explains how to better optimize the network before using OpenCV.

Best regards,

Markus

4,901 Views
chris_lee
Contributor II

Just to make it clear, my interest is in your previous reply:

"NXP has tools that allow user to take trained TF or caffe models and deploy inference engine on i.MX6"

And I'm looking forward to learning how to do it.

Actually, I'm stuck on porting darknet-on-opencl to the i.MX6 platform. Here's my post asking for help: i.mx6q darknet opencl error: out of resource

My goal is exactly the same as in your post "deploy inference engine on i.MX6".

Hope the application note will be released soon!

It would be appreciated if I could be notified as soon as the application note is released.

Thanks.

4,901 Views
mtx512
Contributor V

From my experience, you can run a pre-trained Caffe or TensorFlow model on the i.MX6Q; however, the biggest hurdle will be performance, as these libraries normally rely on an external GPU to achieve decent speed. We explored the option of using an external accelerator, i.e. the Movidius Neural Compute Stick; see my post.