eiQ - Distributed CNN inference

nullbyte91
Contributor I

Hi Team,

Is it possible to distribute CNN inference across both the CPU and GPU using ARMNN/TFLite?
We have a custom network that can run on both the CPU and the GPU. We are looking for an option to distribute inference across both devices to get a higher FPS.

Regards,
Jegathesan S

3 Replies

Yuri
NXP Employee

@nullbyte91 
Hello,

From the app team:

In short, there is no option in eIQ that supports running a CNN on both the CPU and the GPU at the same time.

However, it is possible to achieve this at the application level. As you can see in the eIQ demo, CPU/GPU inference can be switched in this application:

https://source.codeaurora.org/external/imxsupport/pyeiq/tree/eiq/apps/switch_video/switch_video.py?h...

Let's assume that GPU inference takes 0.1 s per frame and CPU inference takes 0.5 s per frame. We can give frames 6 and 12 to the CPU inference thread and frames 1-5 and 7-11 to the GPU inference thread. By the end of one second, 12 frames in total will have finished inference. Compared to using the GPU only, this raises the FPS from 10 to 12.
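To make the idea concrete, below is a minimal Python sketch of such an application-level split, in the spirit of the pyeiq demo linked above. It assumes TFLite with two interpreter instances, one accelerated through an external delegate; the model path and the delegate library path are placeholders, not taken from this thread. Instead of a fixed 6/12 frame assignment, both workers pull from a shared queue, so the faster device naturally takes more frames:

```python
# Hypothetical sketch: two inference workers sharing one frame queue.
# "model.tflite" and "/usr/lib/libvx_delegate.so" are placeholder paths.
import queue
import threading

import tflite_runtime.interpreter as tflite

frames = queue.Queue()    # (frame_id, preprocessed frame) pairs to infer
results = queue.Queue()   # (frame_id, output tensor) pairs

def make_interpreter(use_gpu):
    delegates = None
    if use_gpu:
        # An external delegate offloads inference to the GPU/NPU on i.MX;
        # the actual library name varies by BSP release.
        delegates = [tflite.load_delegate("/usr/lib/libvx_delegate.so")]
    return tflite.Interpreter(model_path="model.tflite",
                              experimental_delegates=delegates)

def worker(use_gpu):
    interpreter = make_interpreter(use_gpu)
    interpreter.allocate_tensors()
    inp = interpreter.get_input_details()[0]["index"]
    out = interpreter.get_output_details()[0]["index"]
    while True:
        frame_id, frame = frames.get()
        if frame is None:              # (None, None) sentinel stops the worker
            break
        interpreter.set_tensor(inp, frame)
        interpreter.invoke()
        results.put((frame_id, interpreter.get_tensor(out)))

# One thread per device; each pulls the next frame as soon as it is free,
# so the GPU ends up with most frames and the CPU handles the rest.
threading.Thread(target=worker, args=(True,), daemon=True).start()
threading.Thread(target=worker, args=(False,), daemon=True).start()
```

Note that outputs arrive out of order, so the consumer has to buffer and reorder them by frame_id; that reordering is part of the synchronization cost mentioned below.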

In reality, however, it is not easy to keep the CPU and GPU perfectly in sync, and the GPU is usually much faster than the CPU at CNN inference. It may therefore not be worth adding this software complexity just to include the CPU.

 

Regards,
Yuri.

nullbyte91
Contributor I

Hi @Yuri

I'm using the i.MX 8QuadMax.

Yuri
NXP Employee

@nullbyte91 
Hello,

Which i.MX device is used in this case?

Regards,
Yuri.
