Hi All,
My custom hardware is based on the i.MX6Q processor with 2 GB of memory, and it is most similar to the Nitrogen6_MAX development board.
Can we run a TensorFlow application on the i.MX6Q processor? If I build a neural network with Caffe or any other method, can that application run on this hardware?
Do you have any examples of this, or previous experience with Nitrogen6_MAX development boards?
Regards,
Peter.
Hi.
First of all, TensorFlow is not a neural-network application; it is a framework for building neural networks. That said, you can definitely run a trained TensorFlow or Caffe model by using the DNN module inside the OpenCV library. Out of the box it will run on the i.MX6Q and any other i.MX whose CPU supports the NEON extensions. We will soon release an application note that explains how to optimize the network before running it with OpenCV.
Best regards,
Markus
A common misperception is that a NN model cannot run on edge compute devices. In fact, NXP has tools that let users take trained TensorFlow or Caffe models and deploy an inference engine on i.MX6 through i.MX8. Of course, one must always balance performance (inference time) against cost, power, etc. I've even seen TF models optimized to run on Kinetis and i.MX RT (about 5 frames per second, depending on the network).
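The reason inference fits on an edge CPU at all is that once training is done, a forward pass is just fixed-weight arithmetic. A toy fully-connected layer in NumPy makes the point (purely illustrative; the weights here are random, not an NXP tool or a real model):

```python
# A trained layer at inference time is just matrix math with frozen weights.
import numpy as np

def dense_relu(x, w, b):
    """One fully-connected layer with ReLU activation: y = max(0, xW + b)."""
    return np.maximum(0.0, x @ w + b)

rng = np.random.default_rng(0)
w = rng.standard_normal((8, 4))   # stand-in for trained weights: 8 in, 4 out
b = np.zeros(4)
x = rng.standard_normal((1, 8))   # one input sample
y = dense_relu(x, w, b)
print(y.shape)  # (1, 4)
```

A real network is just many such layers chained together, which is why a NEON-capable CPU can run it; the cost/power trade-off comes from how many of these multiply-accumulates per frame the model needs.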
Hi Markus,
Which NXP tools allow the user to take trained TensorFlow or Caffe models and deploy an inference engine on the i.MX6 (Quad or QuadPlus)?
Thanks & Regards,
Bala
Hi Markus,
I have one more doubt: does it work on the i.MX6 (Quad or QuadPlus), or does it only work on the i.MX8?
I ask because the "AN12224.pdf" document refers only to the i.MX8.
Thanks & Regards,
Bala
It works on any i.MX with NEON acceleration in the CPU. The SE team can port this to any specific BSP. So far it has been ported to the 8M, 8QM, and 8MM.
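If you want to confirm that a given part advertises NEON, on Linux the "Features" line of /proc/cpuinfo lists it. A small helper (the sample string below is illustrative; on the target you would pass the real file contents):

```python
# Check whether an ARM /proc/cpuinfo dump advertises the NEON extension.
def has_neon(cpuinfo_text):
    """Return True if any 'Features' line lists the neon flag."""
    for line in cpuinfo_text.splitlines():
        if line.lower().startswith("features") and "neon" in line.lower():
            return True
    return False

# On the device: has_neon(open("/proc/cpuinfo").read())
sample = "Features\t: half thumb fastmult vfp edsp neon vfpv3"
print(has_neon(sample))  # True
```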
The AN12224.pdf document does not list the i.MX8MM as supported.
Could you tell me why?
Hi Markus,
Would you please provide information about the NXP tools that enable TensorFlow on the i.MX6?
Thanks!
Just to make it clear, my interest is in your previous reply:
"NXP has tools that allow user to take trained TF or caffe models and deploy inference engine on i.MX6"
And I'm looking forward to learning how to do it.
Actually, I'm stuck on porting darknet-on-opencl to the i.MX6 platform. Here's my post asking for help: i.mx6q darknet opencl error: out of resource
My goal is exactly the same as in your post "deploy inference engine on i.MX6".
I hope the application note will be released soon!
I would appreciate being notified as soon as the app note is released.
Thanks.
From my experience, you can run a pre-trained Caffe or TensorFlow model on the i.MX6Q; however, the biggest hurdle will be performance, as these libraries normally require an external GPU card to get decent speed. We explored the option of using an external accelerator, i.e. the Movidius Neural Compute Stick; see my post.