I’ve run into an issue I’m hoping someone can shed light on. It may be better suited to the GStreamer development mailing list, but perhaps someone here can help.
I’m using two pipelines – one for decoding H.264 video with imxvpudec, and the other for rendering video with imxeglvivsink. The decoder pipeline terminates in an appsink and the rendering pipeline starts with an appsrc. I’m using gst_app_sink_pull_sample() to pull samples from the appsink and gst_app_src_push_sample() to push them to the appsrc.
All works fine if the video being decoded is 30 fps. But if the video is 15 fps then it is rendered in slow motion.
If I use a single GStreamer pipeline with the same stream of H.264 video then all is well at 30 and 15 fps.
I have debugged extensively (and learned a lot more about GStreamer in the process) and have confirmed that when the video is 15 fps, GST_BUFFER_DURATION() and GST_BUFFER_PTS() report the correct values.
I have also retrieved the caps from the sample and from the imxeglvivsink sink pad, and both show the correct frame rate.
I have also verified (via time-stamped log messages) that the decoded frames are delivered to the render pipeline at 15 fps.
So clearly there is something happening under the hood in a single GStreamer pipeline that is missing in this split-pipeline scenario.
Help would be greatly appreciated. I may have to fall back to the single pipeline if I can’t solve this, but it just doesn’t fit as nicely into our existing architecture.