I am using a Google Coral (running Mendel OS) with a USB camera to capture images and run segmentation inference.
Although the camera supports MJPEG at 1280x720 @ 30 FPS, I only get 15 FPS and very high CPU usage.
What I have tried
First, I tried using the OpenCV VideoCapture API to control my USB camera:

    cap.open("/dev/video1", cv::CAP_V4L);

OpenCV automatically converts every frame to BGR. I think this conversion runs on the CPU and is what causes the high CPU usage.
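A minimal sketch of a variant of this approach, which keeps OpenCV but turns the conversion off via CAP_PROP_CONVERT_RGB (OpenCV 4.x property names assumed; /dev/video1 is my camera):

    #include <opencv2/videoio.hpp>

    cv::VideoCapture cap;
    cap.open("/dev/video1", cv::CAP_V4L2);
    cap.set(cv::CAP_PROP_FOURCC, cv::VideoWriter::fourcc('M', 'J', 'P', 'G'));
    cap.set(cv::CAP_PROP_FRAME_WIDTH, 1280);
    cap.set(cv::CAP_PROP_FRAME_HEIGHT, 720);
    cap.set(cv::CAP_PROP_FPS, 30);
    // Skip the CPU-side BGR conversion; read() then hands back the
    // still-compressed MJPEG bytes instead of a decoded image.
    cap.set(cv::CAP_PROP_CONVERT_RGB, false);

    cv::Mat raw;
    cap.read(raw);  // raw holds undecoded MJPEG data

This avoids the BGR conversion, but of course the frame still has to be decoded by something afterwards.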
I also tried an OpenCV GStreamer pipeline:

    cap = cv::VideoCapture("v4l2src device=/dev/video0 ! image/jpeg,width=1280,height=720,framerate=30/1 ! jpegdec ! video/x-raw ! videoconvert ! appsink", cv::CAP_GSTREAMER);

The GStreamer pipeline uses even more CPU, presumably because jpegdec decodes in software.
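If Mendel's kernel exposed the VPU as a V4L2 memory-to-memory JPEG decoder, gst-plugins-good should register a v4l2jpegdec element that offloads the decode. I have not been able to confirm that this element exists on the Coral, so the following is only a sketch of the pipeline I was hoping for:

    // Hypothetical: v4l2jpegdec exists only if the kernel exposes a
    // /dev/video* JPEG-decode device (check with: gst-inspect-1.0 v4l2jpegdec).
    cap = cv::VideoCapture(
        "v4l2src device=/dev/video0 "
        "! image/jpeg,width=1280,height=720,framerate=30/1 "
        "! v4l2jpegdec ! videoconvert ! appsink",
        cv::CAP_GSTREAMER);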
So I tried using V4L2 directly to capture the raw (MJPEG) frames. At this stage I use OpenCV to decode each frame to RGB, only to verify that the captured frames are correct; a sketch of this capture-and-verify path is below.
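A condensed version of that code (assumptions: /dev/video1, a single mmap buffer, error handling stripped). The dequeued MJPEG buffer is passed to cv::imdecode purely as a software check that the frame is valid:

    #include <fcntl.h>
    #include <sys/ioctl.h>
    #include <sys/mman.h>
    #include <unistd.h>
    #include <vector>
    #include <linux/videodev2.h>
    #include <opencv2/imgcodecs.hpp>

    int main() {
        int fd = open("/dev/video1", O_RDWR);

        // Ask the camera for 1280x720 MJPEG.
        v4l2_format fmt{};
        fmt.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
        fmt.fmt.pix.width = 1280;
        fmt.fmt.pix.height = 720;
        fmt.fmt.pix.pixelformat = V4L2_PIX_FMT_MJPEG;
        fmt.fmt.pix.field = V4L2_FIELD_NONE;
        ioctl(fd, VIDIOC_S_FMT, &fmt);

        // One mmap'ed capture buffer.
        v4l2_requestbuffers req{};
        req.count = 1;
        req.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
        req.memory = V4L2_MEMORY_MMAP;
        ioctl(fd, VIDIOC_REQBUFS, &req);

        v4l2_buffer buf{};
        buf.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
        buf.memory = V4L2_MEMORY_MMAP;
        buf.index = 0;
        ioctl(fd, VIDIOC_QUERYBUF, &buf);
        void* mem = mmap(nullptr, buf.length, PROT_READ | PROT_WRITE,
                         MAP_SHARED, fd, buf.m.offset);

        ioctl(fd, VIDIOC_QBUF, &buf);
        int type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
        ioctl(fd, VIDIOC_STREAMON, &type);

        ioctl(fd, VIDIOC_DQBUF, &buf);  // blocks until one frame arrives
        // buf.bytesused bytes at mem are one compressed MJPEG frame;
        // decode it in software only to verify it is a valid image.
        std::vector<unsigned char> jpeg((unsigned char*)mem,
                                        (unsigned char*)mem + buf.bytesused);
        cv::Mat bgr = cv::imdecode(jpeg, cv::IMREAD_COLOR);

        ioctl(fd, VIDIOC_STREAMOFF, &type);
        munmap(mem, buf.length);
        close(fd);
        return bgr.empty();  // non-zero exit if the frame failed to decode
    }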
What I actually want is to convert the MJPEG stream to YUYV, so that I can extract the Y channel for inference.
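If a decoder can give me packed YUYV, pulling the Y channel out is straightforward, since every even byte is a luma sample; a sketch (the function name is mine):

    #include <opencv2/core.hpp>

    // data points to width*height*2 bytes of packed YUYV (Y0 U Y1 V ...).
    cv::Mat y_from_yuyv(unsigned char* data, int width, int height) {
        // View the buffer as a 2-channel image: channel 0 = Y,
        // channel 1 = interleaved U/V. No copy happens here.
        cv::Mat yuyv(height, width, CV_8UC2, data);
        cv::Mat y;
        cv::extractChannel(yuyv, y, 0);  // copy out the luma plane
        return y;                        // CV_8UC1, ready for inference
    }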
By the way, I know some cameras support the YUYV format directly, but the achievable FPS for YUYV is usually lower than for MJPEG (uncompressed 1280x720 YUYV at 30 FPS is about 1280 * 720 * 2 bytes * 30 ≈ 55 MB/s, which is more than a USB 2.0 link sustains in practice).
I searched the official NXP website. It provides several tests and example programs for Yocto projects, but I cannot use them on the Google Coral's Mendel OS.
I also tried to build libimxvpuapi. Unfortunately, I failed: the header "usr/include/imx/mxcfb.h", which needs to be available in the SYSROOT, does not exist on my SOM.
How do I use the VPU on the Google Coral to decode the frames?