I am integrating a third-party ON Semiconductor camera sensor with the i.MX8M Mini (imx-yocto-L5.4.24_2.1.0).
I have a use case where I need to record 1080p30fps H.264-encoded video and, while the video is recording, capture a 5MP JPEG image from the same camera.
I used the command below to record 1080p30fps video in the background:
gst-launch-1.0 -v v4l2src device=/dev/video0 num-buffers=1800 ! 'video/x-raw, width=1920, height=1080, framerate=(fraction)30/1, format=(string)UYVY' ! vpuenc_h264 ! h264parse ! mp4mux ! filesink location=1920x1080.mp4 &
While the video was recording, I used the command below to capture a 5MP JPEG image:
gst-launch-1.0 -v v4l2src device="/dev/video0" num-buffers=1 ! 'video/x-raw, width=(int)2592, height=(int)1944, format=(string)UYVY' ! jpegenc ! filesink location=image.jpg
But I got an error saying the video0 node was already in use.
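One can check which process is holding the node (fuser is part of the psmisc package; the fallback message below is just for machines where the node is free or absent):

```shell
# Print the PID(s) that have /dev/video0 open; fuser exits nonzero if the
# node is unused or missing, in which case the fallback message is printed.
fuser /dev/video0 2>/dev/null || echo "/dev/video0 not in use (or not present)"
```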
The camera supports a maximum resolution of 5MP. Each of the above commands works successfully on its own for recording video or capturing a JPEG image, but I need to do both simultaneously.
Is it possible to achieve this use case? If so, how?
Thanks in advance,
Did you try video0 and video1? They can be used at the same time. Please check which /dev/videoX nodes the cameras appear on with ls /dev/video* before building the GStreamer command line.
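For reference, enumerating the nodes could look like this (v4l2-ctl comes from the v4l-utils package; the fallback message is only for machines without any camera attached):

```shell
# List all V4L2 device nodes, then map each node to its driver/device name.
ls /dev/video* 2>/dev/null || echo "no /dev/video* nodes found"
command -v v4l2-ctl >/dev/null 2>&1 && v4l2-ctl --list-devices || true
```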
I have two video nodes under /dev/: video0 and video1. video0 is the input stream to the i.MX and video1 is the output stream from the i.MX.
I am using a single 5MP camera, so I have only one camera input node, i.e. /dev/video0. From this single camera I need to capture two streams: one for the 5MP image capture and one for the 1080p video recording.
Hi @Bio_TICFSL,
Yes. So would it be possible to create two streams using some SW encoder or other tools?
I want to achieve the following use case:
Record 1080p30fps H.264-encoded video and, while the video is recording, capture a 5MP JPEG image from the same camera (from the /dev/video0 node).
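One approach that may satisfy this, sketched here under stated assumptions rather than as a verified solution, is a single GStreamer pipeline that opens /dev/video0 once at 5MP and splits the stream with a tee element: one branch downscales and encodes H.264 video, the other keeps only the latest frame as a JPEG. The imxvideoconvert_g2d and vpuenc_h264 elements come from the NXP BSP plugins, and whether the sensor and CSI link can sustain 2592x1944 at 30fps is an assumption that must be verified on the actual hardware:

```shell
# Hedged sketch, not tested on hardware: capture once from /dev/video0 at 5MP
# and split with tee. Branch 1 downscales to 1080p and encodes H.264;
# branch 2 keeps only the newest frame as a JPEG (max-files=1).
# imxvideoconvert_g2d and vpuenc_h264 are NXP BSP plugins; the 5MP@30fps
# capture rate is an assumption to verify against the sensor datasheet.
CMD="gst-launch-1.0 -e v4l2src device=/dev/video0 \
 ! video/x-raw,width=2592,height=1944,format=UYVY,framerate=30/1 ! tee name=t \
 t. ! queue ! imxvideoconvert_g2d ! video/x-raw,width=1920,height=1080 \
 ! vpuenc_h264 ! h264parse ! mp4mux ! filesink location=1920x1080.mp4 \
 t. ! queue leaky=downstream max-size-buffers=1 ! jpegenc \
 ! multifilesink location=image-%05d.jpg max-files=1"
echo "$CMD"   # review the pipeline first, then run it with: eval "$CMD"
```

The leaky queue on the JPEG branch drops stale frames so the snapshot branch never stalls the video branch, and multifilesink with max-files=1 keeps only the most recent JPEG on disk.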