Hi there,
While reading the Yocto/GStreamer streaming tutorial
Yocto/gstreamer/streaming – Gateworks
I noticed that the example pipelines for UDP/RTP (raw) always start the decoder (receiver) side first, and only then start the encoder (sender) side.
Is there a specific reason for this ordering? Can anybody explain it?
I see the same behavior in my practical tests with an iMX6DL board streaming from a camera. If I start the decoder on the client side first and then start streaming from the iMX6, the video displays perfectly. In the opposite order it does not work: both pipelines keep running, but no decoded video is displayed on the Ubuntu client PC.
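
For reference, this is roughly the kind of pipeline pair I am testing, a minimal sketch following the tutorial's UDP/RTP raw pattern. The host address, port, device node, and resolution are placeholders, not my exact command lines, and I am using the generic v4l2src here rather than the i.MX-specific source element:

# Receiver (Ubuntu client PC) -- the side the tutorial starts first.
# The caps on udpsrc must describe the raw RTP stream (RFC 4175).
gst-launch-1.0 udpsrc port=5000 \
  caps="application/x-rtp, media=(string)video, clock-rate=(int)90000, \
  encoding-name=(string)RAW, sampling=(string)YCbCr-4:2:0, depth=(string)8, \
  width=(string)640, height=(string)480" ! \
  rtpvrawdepay ! videoconvert ! autovideosink

# Sender (iMX6DL board) -- started second per the tutorial.
gst-launch-1.0 v4l2src device=/dev/video0 ! \
  video/x-raw,format=I420,width=640,height=480,framerate=30/1 ! \
  rtpvrawpay ! udpsink host=192.168.1.100 port=5000

With pipelines like these, the receiver works for me only when it is launched before the sender, so I would like to know whether that start-up order is actually required or whether my pipelines are missing something.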