Hello everyone,
I’m working with a monochrome camera on an i.MX8M Plus platform running a Yocto Mickledore BSP (kernel 6.1.36). The camera outputs raw grayscale frames in the GREY (8-bit grayscale) pixel format.
Using this command:
v4l2-ctl -d /dev/video2 --set-fmt-video=width=1280,height=720,pixelformat=GREY --stream-mmap
I can capture video at around 108 fps, which is excellent.
However, to display the video I need to convert from GREY8 to NV12: most display sinks and hardware accelerators expect formats like NV12 for efficient rendering and colorspace handling. Without this conversion the raw grayscale frames cannot be displayed correctly or hardware-accelerated, which is why a software conversion step is currently required.
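For reference, the conversion itself is trivial in principle: NV12 keeps the full-resolution luma plane and adds a half-resolution interleaved UV plane, which for a grayscale source is just filled with the neutral chroma value 0x80. A minimal sketch (function name and layout assumptions are mine, not from my actual pipeline):

```c
#include <stdint.h>
#include <string.h>

/*
 * GREY8 -> NV12: copy the luma plane unchanged and fill the
 * interleaved UV plane with neutral chroma (0x80).
 * dst must hold width * height * 3 / 2 bytes; width and height
 * are assumed to be even (required for NV12's 2x2 chroma subsampling).
 */
static void grey8_to_nv12(const uint8_t *src, uint8_t *dst,
                          int width, int height)
{
    size_t luma = (size_t)width * height;

    memcpy(dst, src, luma);              /* Y plane: copied as-is    */
    memset(dst + luma, 0x80, luma / 2);  /* UV plane: neutral chroma */
}
```

Since no per-pixel arithmetic is involved, the cost is essentially one memcpy plus one memset per frame, which is why the current CPU overhead is surprising to me.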
Currently this conversion is done in user space with GStreamer’s videoconvert element, but it is CPU-bound and drastically reduces streaming performance: the effective frame rate drops from 30 fps to about 10 fps.
My main question is:
Is it possible or recommended to implement the GREY8 → NV12 pixel format conversion inside the kernel driver or V4L2 pipeline on the i.MX8M Plus platform, ideally leveraging hardware acceleration?
I want to move this conversion out of the userspace path to reduce CPU load and achieve frame rates close to the raw capture rate.
Any insights or pointers would be appreciated!
Thanks!