Hi there,
When I read the Yocto/GStreamer tutorial
Yocto/gstreamer/streaming – Gateworks
I saw that the example pipelines for UDP/RTP (raw) always start the decoder (receiving) side first and only then start the encoder (sending) side.
Is there a specific reason for this? Can anybody explain it?
My practical tests with an i.MX6DL board streaming from a camera show the same behavior. If I start the decoder on the client side first and then start streaming from the i.MX6, I can see the video perfectly. In the opposite order it does not work: both pipelines keep running, but no decoded video is displayed on the Ubuntu client PC.
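For reference, here is a sketch of the kind of pipeline pair I am testing (GStreamer 1.x syntax; the camera device, the address 192.168.1.100 and port 5000 are placeholders, and the exact source element may differ on the i.MX6):

Sender (on the i.MX6):

  gst-launch-1.0 v4l2src device=/dev/video0 ! \
    video/x-raw,format=I420,width=640,height=480,framerate=30/1 ! \
    rtpvrawpay ! udpsink host=192.168.1.100 port=5000

Receiver (on the Ubuntu client PC):

  gst-launch-1.0 udpsrc port=5000 \
    caps="application/x-rtp, media=(string)video, clock-rate=(int)90000, encoding-name=(string)RAW, sampling=(string)YCbCr-4:2:0, depth=(string)8, width=(string)640, height=(string)480, colorimetry=(string)BT601-5" ! \
    rtpvrawdepay ! videoconvert ! autovideosink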
Dear Artur,
Thanks for your reply. I asked because I have seen GStreamer pipelines where the guide starts the pipeline on the server first, then copies the caps properties over to the client side, and that also works.
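For example (my own sketch of that workflow; address and port are placeholders): running the sender with -v prints the exact RTP caps on the udpsink pad, and that string can then be pasted into the client's udpsrc:

  # On the server: -v prints a line ending in
  # "caps = application/x-rtp, media=(string)video, ..."
  gst-launch-1.0 -v v4l2src ! rtpvrawpay ! udpsink host=192.168.1.100 port=5000

  # On the client: paste the printed caps string into udpsrc
  gst-launch-1.0 udpsrc port=5000 caps="<caps copied from the server output>" ! \
    rtpvrawdepay ! videoconvert ! autovideosink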
Of course, the receiving part should be started first so that it is already waiting for the incoming stream and can handle and decode the stream headers correctly. Otherwise the receiving part will not be able to synchronize with the stream.
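As an illustration (a sketch only; the elements, address and port are assumptions, not taken from your setup): with an encoded stream such as H.264 you can make the headers repeat in-band, e.g. config-interval=1 on rtph264pay re-sends the SPS/PPS every second, so a receiver started after the sender can still pick up the headers and synchronize:

  # Sender: repeat SPS/PPS every second so late receivers can sync
  gst-launch-1.0 v4l2src ! videoconvert ! x264enc tune=zerolatency ! \
    rtph264pay config-interval=1 ! udpsink host=192.168.1.100 port=5000

  # Receiver: can now be started before or after the sender
  gst-launch-1.0 udpsrc port=5000 \
    caps="application/x-rtp, media=(string)video, clock-rate=(int)90000, encoding-name=(string)H264, payload=(int)96" ! \
    rtph264depay ! avdec_h264 ! videoconvert ! autovideosink

Raw RTP video has no in-band headers at all, so there the format must be supplied to the receiver out of band via the caps on udpsrc.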
Have a great day,
Artur