Hello.
Normally, inference runs on either the GPU or the NPU depending on the `USE_GPU_INFERENCE=0/1` environment variable, but I would like to make this choice in code. The Machine Learning guide says that this variable is "directly read by the HW acceleration driver".
Is it feasible to modify the GPU/NPU unified driver to expose this choice to higher abstraction layers? Where can I find its source?
If not, what else could I do?
Thanks
My understanding is that the so-called NPU/GPU driver lives in the Yocto layer under `meta-freescale/recipes-graphics/imx-gpu-viv`, but its SRC_URI is given as `${FSL_MIRROR}/${BPN}-${PV}-${IMX_SRCREV_ABBREV}.bin`, which at build time resolves to the prebuilt binary `https://www.nxp.com/lgfiles/NMG/MAD/YOCTO//imx-gpu-viv-6.4.11.p2.6-aarch64-bc7b6a2.bin`.
Searching the forum, I found a user asking for the same source code (https://community.nxp.com/t5/i-MX-Processors/GPU-source-code-with-opencl/m-p/1224323).
It was not public in 2021; is it still not public now?
Thanks