I want to acquire images from a camera over the parallel bus, convert to YUV and write to a frame buffer, add text to the frame buffer (might be just blitting pre-formatted pixel blocks), then pass this to the VPU for H264 encoding. Like here:
Text and video streaming over network
We can achieve camera to encoder using GStreamer successfully. Do you have any clues or hints as to how we might add this frame-buffer processing stage? Is this a GStreamer element we need to write, or an addition to V4L2? Not sure where to start.
(Linux 3.10.17 on eConSystems iMX6Q SOM)
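For reference, the working capture-to-encode chain is along these lines (Freescale GStreamer 0.10 element names; caps and properties are trimmed and untested here, so treat it as a sketch of the chain rather than a ready-to-run command):

```shell
# Sketch only: mfw_v4lsrc and vpuenc are the Freescale 0.10 plugin names;
# adjust device, caps and payloading to your setup.
gst-launch mfw_v4lsrc device=/dev/video0 \
    ! 'video/x-raw-yuv,format=(fourcc)I420,width=1280,height=720,framerate=30/1' \
    ! vpuenc \
    ! rtph264pay ! udpsink host=192.168.0.10 port=5000
```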
Just to make it clearer:
As Freescale are keen to point out, in the video-in to video-encode application (set up with GStreamer) the data is received and sorted by the IPU and then handed off to the VPU for encoding without any intervention by the CPU.
So how do we break this chain and add in an image processing element? Is it a GStreamer element we need? I've coded DirectShow filters before but not used GStreamer. Any code examples or documents I should refer to?
For adding text into a video stream, you can install the pango and cairo plugins.
“How to add text watermark on video stream”
https://community.freescale.com/thread/303114
“how to support subtitle on the imx6Q board”
https://community.freescale.com/docs/DOC-106348
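With those plugins installed, the stock textoverlay element (from gst-plugins-base, which renders text via pango/cairo into the raw YUV frames before they reach the encoder) can be dropped between capture and encode. A sketch, with the Freescale element names assumed and untested here:

```shell
# textoverlay draws the text into each raw frame; the VPU then encodes
# the composited frames as usual.
gst-launch mfw_v4lsrc \
    ! textoverlay text="Camera 1" valignment=top halignment=left \
    ! vpuenc \
    ! rtph264pay ! udpsink host=192.168.0.10 port=5000
```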
Have a great day,
Yuri
Thanks for the suggestions. I'll give it a try. But I'm concerned that it will be too slow at HD resolutions (720p30 minimum).
See Re: Text and video streaming over network
Hence I was asking how one might access the Y data, as suggested. But I'll give Pango a try first.