Hi everyone,
I am rather new to developing for the GPU (especially in Python), so I might make some inaccurate statements.
We are using the i.MX 8M Plus board, running a Yocto build with OpenCL enabled.
We wanted to run the MOG2 algorithm (coded in Python >= 3.10) on the GPU. However, this does not seem to be supported the way it is on CUDA, where I can simply use the cv::cuda::BackgroundSubtractorMOG2 class and it runs on the GPU.
I then tried converting the images to cv::UMat so that OpenCV's OpenCL acceleration (the transparent API) would be used; however, this makes everything slower.
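For reference, here is roughly the UMat variant I tried (a minimal sketch; the video source and the MOG2 parameters are placeholders, not our production values):

```python
import cv2

# Illustrative default parameters, not our production values
subtractor = cv2.createBackgroundSubtractorMOG2(
    history=500, varThreshold=16, detectShadows=True
)

cap = cv2.VideoCapture("input.mp4")  # placeholder video source
while True:
    ok, frame = cap.read()
    if not ok:
        break
    # Wrapping the frame in a UMat should route supported operations
    # through OpenCV's transparent API (T-API) to OpenCL
    u_frame = cv2.UMat(frame)
    fg_mask = subtractor.apply(u_frame)  # returns a UMat
    mask = fg_mask.get()  # copies the result back to host memory
cap.release()
```

Each iteration uploads the frame to the device and downloads the mask again, so I suspect the transfer overhead accounts for at least part of the slowdown, but I have not been able to verify that.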
Is there another way to run the cv::BackgroundSubtractorMOG2 class on the GPU using the delegates of the NXP eIQ toolkit? Or would I have to write an OpenCL kernel to run this code?
Hello,
For a better understanding of this issue, please read the i.MX Machine Learning User's Guide. You mention CUDA, but the i.MX family supports OpenCL; the document contains various examples of how to use OpenCV with OpenCL.
https://www.nxp.com/docs/en/user-guide/IMX-MACHINE-LEARNING-UG.pdf
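Before benchmarking against the guide's examples, it is worth checking from Python that your OpenCV build was compiled with OpenCL support and actually sees the GPU device. A minimal sketch (not taken from the guide):

```python
import cv2

# Confirm the OpenCV build has OpenCL support and a device is visible
if cv2.ocl.haveOpenCL():
    cv2.ocl.setUseOpenCL(True)
    print("OpenCL device:", cv2.ocl.Device_getDefault().name())
    print("T-API enabled:", cv2.ocl.useOpenCL())
else:
    print("This OpenCV build has no OpenCL support")
```

If this reports no OpenCL support, the UMat path falls back to the CPU, which would explain why it is not faster.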
Regards