I am able to encode live PAL video with the following:
gst-launch-0.10 -e mfw_v4lsrc sensor-width=720 sensor-height=288 capture-width=720 capture-height=576 preview=true preview-width=1024 preview-height=768 fps-n=25 bg=true ! mfw_vpuencoder ! avimux name=mux ! filesink location=/mnt/usb1/PALfps25f.avi
I can call an ioctl so that the graphics and the video are 50/50 alpha blended on the LCD screen. However, I would like to encode what is actually shown on the screen rather than the raw PAL video. Is there a place in RAM where the two framebuffers (graphics on /dev/fb0 and video on /dev/fb2) are combined and stored, or is the blended image sent straight to the LCD display?
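
For reference, the blending call I use looks roughly like the following. This is a minimal sketch assuming the i.MX MXC framebuffer driver's MXCFB_SET_GBL_ALPHA ioctl and struct mxcfb_gbl_alpha from linux/mxcfb.h; header and struct names may differ between BSP versions.

/* Set a 50/50 global alpha blend between the graphics and video planes
 * via the MXC framebuffer driver (sketch, assuming MXCFB_SET_GBL_ALPHA). */
#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/mxcfb.h>

int main(void)
{
    int fd = open("/dev/fb0", O_RDWR);   /* graphics framebuffer */
    if (fd < 0) {
        perror("open /dev/fb0");
        return 1;
    }

    struct mxcfb_gbl_alpha alpha;
    alpha.enable = 1;     /* use global alpha rather than per-pixel alpha */
    alpha.alpha  = 128;   /* 0..255; 128 gives an approximately 50/50 mix */

    if (ioctl(fd, MXCFB_SET_GBL_ALPHA, &alpha) < 0) {
        perror("MXCFB_SET_GBL_ALPHA");
        close(fd);
        return 1;
    }

    close(fd);
    return 0;
}

As far as I can tell this only tells the display controller how to mix the two planes on the way to the LCD, which is why I am asking whether the blended result ever lands in a memory buffer I could feed to the encoder.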