Hi.
I'm starting to work on software that captures H.264 video from a camera, writes it to shared memory, and then reads it back from that memory to display it in the GUI. I'm new to GStreamer; the version in use is 1.8.3.
The C++ code that writes the content into the shared memory uses a pipeline with the following characteristics (reassembled as a single line below the list):
Video source = imxv4l2src or imxv4l2videosrc (I'm not sure which exactly)
attributes: do-timestamp=true, fps-n=15
Video encoder = imxvpuenc_h264
attributes: idr-interval=3, bitrate=1536
Video sink = shmsink
attributes: socket-path=video_shared_tmp, wait-for-connection=true, shm-size=1000000, sync=false
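If it helps, putting those characteristics together as one gst-launch-1.0 line would look roughly like this (only my reconstruction, assuming the imxv4l2videosrc variant and that fps-n is a plain element property):
gst-launch-1.0 imxv4l2videosrc do-timestamp=true fps-n=15 ! imxvpuenc_h264 idr-interval=3 bitrate=1536 ! shmsink socket-path=video_shared_tmp wait-for-connection=true shm-size=1000000 sync=false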
The C++ code that reads the shared memory to play the video back for the user uses a pipeline with the following characteristics:
shmsrc socket-path=video_shared_tmp is-live=true do-timestamp=true ! h264parse ! imxvpudec ! imxipuvideotransform
It works very well. However, when I try to retrieve a single JPEG picture (a snapshot) from the same shared memory, I get an error: "No valid frames decoded before end of stream".
The pipeline I'm using to grab the picture is like this:
shmsrc socket-path=video_shared_tmp is-live=true do-timestamp=true num-buffers=1 ! queue ! h264parse ! imxvpudec ! videorate ! video/x-raw,framerate=1/1 ! jpegenc ! filesink location=/temp/file.jpeg
What should I change in my pipeline?
Any hint will be very helpful!
Best regards.
Rodrigo Pimenta Carvalho
Refer to the VPU API. Its JPEG tools are:
• MJPEG Baseline Process Encoder and Decoder
• Baseline ISO/IEC 10918-1 JPEG compliance
• Support 1 or 3 color components
• 3 component in a scan (interleaved only)
• 8 bit samples for each component
• Support 4:2:0, 4:2:2, 2:2:4, 4:4:4 and 4:0:0 color format (max. six 8x8 blocks in one MCU)
• Minimum encoding size is 16x16 pixels.
So first check whether your JPEG profile is Baseline or not, then try using jpegdec instead of vpudec.
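For example, whether the software decoder copes with a given JPEG can be checked quickly with something like the following sketch (the file name is just a placeholder, and jpegparse/fakesink are only there to complete the pipeline):
# illustrative only: point location at a real JPEG file
gst-launch-1.0 filesrc location=some_picture.jpg ! jpegparse ! jpegdec ! fakesink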