Choosing where to run inference from application code - GPU/NPU

Solved

StefanoN
Contributor I

Hello.

Normally, inference runs on the GPU or the NPU depending on the USE_GPU_INFERENCE=0/1 environment variable, but I would like to make this choice from application code. The Machine Learning guide says that the variable is "directly read by the HW acceleration driver".

[Screenshot of the relevant passage from the Machine Learning User's Guide: StefanoN_0-1743595701752.png]

Is it feasible to modify the GPU/NPU unified driver to expose this functionality to higher abstraction layers? Where can I find its source code?

If not, what else could I do? For example, would setting the variable from within the process itself work, as in the sketch below?
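
A minimal sketch of that per-process idea, assuming the driver only samples the variable when the first acceleration context is created (I have not verified this timing); `load_and_run_model` is a hypothetical placeholder for the real eIQ/TFLite setup:

```cpp
#include <cstdio>
#include <cstdlib>  // POSIX setenv()/getenv()

// Hypothetical placeholder for the real TFLite + VX-delegate setup;
// a real application would build the interpreter and invoke it here.
static void load_and_run_model(const char* model_path) {
    std::printf("running %s with USE_GPU_INFERENCE=%s\n",
                model_path, std::getenv("USE_GPU_INFERENCE"));
}

int main() {
    // Decide GPU vs. NPU at runtime, but *before* the first inference
    // context exists, in case the driver reads the variable only once.
    // Per the guide: USE_GPU_INFERENCE=1 -> GPU, 0 -> NPU.
    const bool use_gpu = false;
    setenv("USE_GPU_INFERENCE", use_gpu ? "1" : "0", /*overwrite=*/1);

    load_and_run_model("model.tflite");
    return 0;
}
```

The obvious limitation is that this would only let me choose once per process; switching between GPU and NPU per model would presumably require separate processes.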

Thanks

4 Replies
Chavira
NXP TechSupport

Hi @StefanoN!

Thank you for contacting NXP Support!

Unfortunately, there is no way to do this other than what is described in our Machine Learning User's Guide.

Best Regards!

Chavira

StefanoN
Contributor I

I understand. Nonetheless, can you help me locate the code for the unified driver?
StefanoN
Contributor I

My understanding is that the so-called NPU/GPU unified driver comes from the Yocto recipe in `meta-freescale/recipes-graphics/imx-gpu-viv`, but its SRC_URI is given as `${FSL_MIRROR}/${BPN}-${PV}-${IMX_SRCREV_ABBREV}.bin`, which at build time resolves to `https://www.nxp.com/lgfiles/NMG/MAD/YOCTO//imx-gpu-viv-6.4.11.p2.6-aarch64-bc7b6a2.bin`, i.e., a prebuilt binary rather than sources.

Searching the forum, I found another user asking for the GPU source code (https://community.nxp.com/t5/i-MX-Processors/GPU-source-code-with-opencl/m-p/1224323). It was not public in 2021; is that still the case now?

Thanks

Chavira
NXP TechSupport
(Accepted Solution)

Hi @StefanoN!

The source code is not public yet, even to us: the GPU is not an NXP IP, and our vendor provides us only the compiled binary.