
Adding frame buffer processing to GStreamer pipeline - hints?

Question asked by benhenricksen on Jul 21, 2015
Latest reply on Aug 3, 2015 by benhenricksen

I want to acquire images from a camera over the parallel bus, convert them to YUV and write them to a frame buffer, add text to the frame buffer (might be just blitting pre-formatted pixel blocks), then pass this to the VPU for H264 encoding. Like here:

Text and video streaming over network

We can successfully get from camera to encoder using GStreamer; do you have any clues or hints as to how we might add this frame buffer processing stage? Is this a GStreamer element we need to write, or an extra piece of V4L2 code? Not sure where to start.


(Linux 3.10.17 on eConSystems iMX6Q SOM)
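Is a pad probe maybe the right tool, so we don't have to write a full element? Something like this sketch is what I'm imagining (untested, GStreamer 1.x assumed; videotestsrc and fakesink are just stand-ins for our camera source and the VPU H264 encoder):

#include <gst/gst.h>

/* Buffer probe: called for every frame passing the 'tap' element,
   giving the CPU a chance to scribble on the raw YUV data before
   the encoder sees it. */
static GstPadProbeReturn
frame_probe (GstPad *pad, GstPadProbeInfo *info, gpointer user_data)
{
  GstBuffer *buffer = GST_PAD_PROBE_INFO_BUFFER (info);
  GstMapInfo map;

  buffer = gst_buffer_make_writable (buffer);  /* we are going to modify it */
  if (buffer == NULL)
    return GST_PAD_PROBE_OK;

  if (gst_buffer_map (buffer, &map, GST_MAP_WRITE)) {
    /* map.data points at the raw I420 frame: blit the pre-formatted
       text pixel blocks into the planes here. */
    gst_buffer_unmap (buffer, &map);
  }

  GST_PAD_PROBE_INFO_DATA (info) = buffer;     /* hand the writable copy back */
  return GST_PAD_PROBE_OK;
}

int
main (int argc, char *argv[])
{
  gst_init (&argc, &argv);

  GstElement *pipeline = gst_parse_launch (
      "videotestsrc ! video/x-raw,format=I420,width=640,height=480 "
      "! identity name=tap ! fakesink", NULL);

  /* Attach the probe to the src pad of the 'tap' element. */
  GstElement *tap = gst_bin_get_by_name (GST_BIN (pipeline), "tap");
  GstPad *pad = gst_element_get_static_pad (tap, "src");
  gst_pad_add_probe (pad, GST_PAD_PROBE_TYPE_BUFFER, frame_probe, NULL, NULL);
  gst_object_unref (pad);
  gst_object_unref (tap);

  gst_element_set_state (pipeline, GST_STATE_PLAYING);
  g_main_loop_run (g_main_loop_new (NULL, FALSE));
  return 0;
}

My worry is that mapping the buffer like this pulls every frame through the CPU, which defeats the IPU-to-VPU path described below.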


Just to make it clearer:

As Freescale are keen to point out, in the video-in to video-encode application (set up with GStreamer) the data is received and sorted by the IPU and then handed off to the VPU for encoding without any intervention by the CPU.

So how do we break this chain and add in an image processing element? Is it a GStreamer element we need? I've coded DirectShow filters before but haven't used GStreamer. Are there any code examples or documents I should refer to?
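Or, if a proper element is the way to go, is the skeleton roughly like a DirectShow in-place transform filter? Here is a trimmed sketch of a GstVideoFilter subclass (GStreamer 1.x assumed; the name mytextblit is made up, and pad templates, element metadata and plugin registration are omitted):

#include <gst/video/gstvideofilter.h>

typedef struct { GstVideoFilter parent; } MyTextBlit;
typedef struct { GstVideoFilterClass parent; } MyTextBlitClass;

G_DEFINE_TYPE (MyTextBlit, my_text_blit, GST_TYPE_VIDEO_FILTER);

/* In-place transform: GstVideoFrame gives mapped, plane-aware access
   to each buffer flowing from the camera towards the encoder. */
static GstFlowReturn
my_text_blit_transform_frame_ip (GstVideoFilter *filter, GstVideoFrame *frame)
{
  guint8 *y_plane = GST_VIDEO_FRAME_PLANE_DATA (frame, 0);
  gint y_stride = GST_VIDEO_FRAME_PLANE_STRIDE (frame, 0);

  /* Blit the pre-formatted text blocks into the Y (and U/V) planes here. */
  (void) y_plane;
  (void) y_stride;
  return GST_FLOW_OK;
}

static void
my_text_blit_class_init (MyTextBlitClass *klass)
{
  GST_VIDEO_FILTER_CLASS (klass)->transform_frame_ip =
      my_text_blit_transform_frame_ip;
  /* A real element also adds pad templates and metadata here. */
}

static void
my_text_blit_init (MyTextBlit *self)
{
}

Would something along those lines drop straight in between the camera source and the VPU encoder, or is there a better hook (V4L2, IPU) for this on the i.MX6?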
