Hi Leonardo,
It's the opposite: I'm trying to use a GStreamer pipeline to capture frames from the camera and create a buffer to transfer to my OpenCV application. This is because I'm using an OV5640 camera, which doesn't work with the standard V4L2 library.
In the case you showed, I could use appsink to get a buffer (I found code that does this) and then transfer the data to OpenCV, but appsink doesn't work for me (and I've seen some topics about similar appsink problems).
So I think a simple way to get frames is through GStreamer (camera -> GStreamer -> OpenCV), but I'm still learning how to use GStreamer.
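For reference, this is roughly the appsink approach I mean. It's only a minimal sketch, written against the GStreamer 0.10 API (the one mfw_v4lsrc belongs to); the 640x480 RGB caps are an assumption and would have to match what the camera actually outputs:

#include <gst/gst.h>
#include <gst/app/gstappsink.h>
#include <opencv2/opencv.hpp>

int main(int argc, char *argv[])
{
    gst_init(&argc, &argv);

    // Force raw 24-bit RGB so the buffer maps directly onto a cv::Mat.
    // mfw_v4lsrc and the 640x480 size are assumptions for my board.
    GError *err = NULL;
    GstElement *pipeline = gst_parse_launch(
        "mfw_v4lsrc ! ffmpegcolorspace ! "
        "video/x-raw-rgb,bpp=24,depth=24,width=640,height=480 ! "
        "appsink name=sink sync=false max-buffers=2 drop=true", &err);
    if (!pipeline) {
        g_printerr("Pipeline error: %s\n", err->message);
        return -1;
    }

    GstElement *sink = gst_bin_get_by_name(GST_BIN(pipeline), "sink");
    gst_element_set_state(pipeline, GST_STATE_PLAYING);

    for (int i = 0; i < 100; i++) {
        // Blocks until a frame is available (GStreamer 0.10 API).
        GstBuffer *buf = gst_app_sink_pull_buffer(GST_APP_SINK(sink));
        if (!buf)
            break;

        // Wrap the raw data in a cv::Mat header (no copy), then clone
        // so the data stays valid after the buffer is unreffed.
        // Note: the channel order here is RGB while OpenCV expects BGR,
        // so a cv::cvtColor may be needed before further processing.
        cv::Mat frame(480, 640, CV_8UC3, GST_BUFFER_DATA(buf));
        cv::Mat copy = frame.clone();
        gst_buffer_unref(buf);
        // ... process 'copy' with OpenCV here ...
    }

    gst_element_set_state(pipeline, GST_STATE_NULL);
    gst_object_unref(sink);
    gst_object_unref(pipeline);
    return 0;
}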
I found this application: https://github.com/andreluizeng/gesture-recognition/tree/master/src
and it's similar to what I need to do, but instead of getting the data from a video file, I will get it from the camera. Please take a look at main.cpp and netplayer_gst.cpp; these two files contain all the GStreamer code. I'm working on this code now.
I intend to create a pipeline with mfw_v4lsrc and fakesink (I don't need to show the frames on a display, except perhaps for a test case; the frames go to OpenCV, because this is for robot displacement control). I think I need to tap the data flowing between these two plugins to get a buffer (see the sketch below).
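From what I've read, the usual trick isn't to create a new pad, but either to attach a buffer probe to the existing pad or to enable fakesink's "handoff" signal, which hands you every buffer that reaches the sink. A minimal sketch of the handoff variant, again assuming GStreamer 0.10 and mfw_v4lsrc:

#include <gst/gst.h>

// Called by fakesink for every buffer it receives (0.10 signature).
static void on_handoff(GstElement *sink, GstBuffer *buffer,
                       GstPad *pad, gpointer user_data)
{
    // GST_BUFFER_DATA / GST_BUFFER_SIZE give access to the raw frame;
    // this is where the data would be handed over to OpenCV.
    g_print("Got frame: %u bytes\n", GST_BUFFER_SIZE(buffer));
}

int main(int argc, char *argv[])
{
    gst_init(&argc, &argv);

    GError *err = NULL;
    GstElement *pipeline = gst_parse_launch(
        "mfw_v4lsrc ! fakesink name=sink signal-handoffs=true", &err);
    if (!pipeline) {
        g_printerr("Pipeline error: %s\n", err->message);
        return -1;
    }

    GstElement *sink = gst_bin_get_by_name(GST_BIN(pipeline), "sink");
    g_signal_connect(sink, "handoff", G_CALLBACK(on_handoff), NULL);

    gst_element_set_state(pipeline, GST_STATE_PLAYING);

    GMainLoop *loop = g_main_loop_new(NULL, FALSE);
    g_main_loop_run(loop);
    return 0;
}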
As soon as I get some results, I will post the code here.
Thanks for the reply,
Bruno