Hello.
Normally, inference runs on the GPU or NPU depending on the `USE_GPU_INFERENCE=0/1` environment variable, but I would like to make this choice in code. The Machine Learning User Guide says that this variable is "directly read by the HW acceleration driver".
Is it feasible to modify the GPU/NPU unified driver to expose this choice to higher abstraction layers? Where can I find it?
If not, what else could I do?
Thanks
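(For anyone landing here with the same question: since the guide says the variable is read directly by the acceleration driver, one workaround worth trying, assuming the driver reads `USE_GPU_INFERENCE` only when it is first initialized, is to set the variable from within your own process before loading the inference stack. This is a hedged sketch, not a documented API; the delegate path in the comment is a typical i.MX location and may differ on your image.)

```python
import os

# Assumption: the HW acceleration driver reads USE_GPU_INFERENCE when the
# inference libraries are first loaded/initialized, so setting it here has
# the same effect as exporting it in the shell. Per the thread above,
# 0/1 selects between the two accelerators.
os.environ["USE_GPU_INFERENCE"] = "0"

# Only AFTER this point load the inference stack, e.g. (paths/modules are
# illustrative and board-dependent):
# import tflite_runtime.interpreter as tflite
# delegate = tflite.load_delegate("/usr/lib/libvx_delegate.so")
# interpreter = tflite.Interpreter(model_path="model.tflite",
#                                  experimental_delegates=[delegate])

print(os.environ["USE_GPU_INFERENCE"])
```

Note this cannot work if the variable was already consumed earlier in the process (e.g. by an import at the top of your script), so the assignment must run before any accelerator library is touched.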
I understand. Nonetheless can you help me locate the code for the unified driver?
Hi @StefanoN!
Thank you for contacting NXP Support!
Unfortunately, there is no way to do it other than what is described in our Machine Learning User Guide.
Best Regards!
Chavira
My understanding is that the so-called NPU/GPU driver lives in the Yocto directory `meta-freescale/recipes-graphics/imx-gpu-viv`, but its `SRC_URI` is given as `${FSL_MIRROR}/${BPN}-${PV}-${IMX_SRCREV_ABBREV}.bin`, which at build time resolves to `https://www.nxp.com/lgfiles/NMG/MAD/YOCTO//imx-gpu-viv-6.4.11.p2.6-aarch64-bc7b6a2.bin`.
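(To illustrate how that `SRC_URI` expands into the observed URL: BitBake substitutes each `${VAR}` from the recipe's variables, much like Python's `string.Template`. The sketch below reverse-engineers the variable values from the resolved URL in this post, so the exact values are assumptions, not taken from the recipe itself. The doubled slash comes from `FSL_MIRROR` already ending in `/`.)

```python
from string import Template

# Assumed values, inferred by matching the resolved URL quoted above
# against the ${FSL_MIRROR}/${BPN}-${PV}-${IMX_SRCREV_ABBREV}.bin pattern.
recipe_vars = {
    "FSL_MIRROR": "https://www.nxp.com/lgfiles/NMG/MAD/YOCTO/",
    "BPN": "imx-gpu-viv",
    "PV": "6.4.11.p2.6-aarch64",
    "IMX_SRCREV_ABBREV": "bc7b6a2",
}

src_uri = Template("${FSL_MIRROR}/${BPN}-${PV}-${IMX_SRCREV_ABBREV}.bin")
print(src_uri.substitute(recipe_vars))
# -> https://www.nxp.com/lgfiles/NMG/MAD/YOCTO//imx-gpu-viv-6.4.11.p2.6-aarch64-bc7b6a2.bin
```

On a real build you can confirm the expansion with `bitbake -e imx-gpu-viv | grep '^SRC_URI='`; either way, the fetched artifact is a prebuilt `.bin`, not source.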
Searching the forum, I found another user asking for the source code (https://community.nxp.com/t5/i-MX-Processors/GPU-source-code-with-opencl/m-p/1224323).
It was not public in 2021, is it still not public now?
Thanks
Hi @StefanoN!
The source code is still not public, even to us, because the GPU is not an NXP IP; the vendor provides us only with the compiled binary.