I am trying to use the VPU to decode an H.264 stream, but vpu_DecGetInitialInfo() returns RETCODE_FAILURE. I suspect the SPS/PPS information is not being propagated to the VPU correctly.
Answers to the following questions should help me out:
1) What is psSaveBuffer for, and how big should I make it? What should I put into it, or read out of it? (I know it must be physical memory.)
2) I extract the SPS and PPS from the extradata available in my H.264 stream (I have already worked out how to do this for both the Annex-B and the avcC format).
Why do I seem to be the only one on the (Google-indexed) web using vpu_DecGiveCommand with DEC_SET_SPS_RBSP and DEC_SET_PPS_RBSP as parameters? Is passing the data via the DecParamSet structure the right way to do it?
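For reference, this is roughly how I pull the parameter sets out of the avcC extradata and convert them to Annex-B NAL units. A minimal sketch assuming the standard avcC record layout from ISO/IEC 14496-15 (the helper name is mine, not part of any API):

```c
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Parse an avcC extradata blob (ISO/IEC 14496-15) and write every SPS and
 * PPS it contains into `out` as Annex-B NAL units, each prefixed with the
 * 00 00 00 01 start code. Returns the number of bytes written, or -1 on
 * malformed input or insufficient output space. */
static int avcc_to_annexb(const uint8_t *avcc, size_t avcc_len,
                          uint8_t *out, size_t out_cap)
{
    static const uint8_t start_code[4] = { 0x00, 0x00, 0x00, 0x01 };
    size_t pos = 5, written = 0;
    int set, nsets;

    if (avcc_len < 7 || avcc[0] != 1)   /* configurationVersion must be 1 */
        return -1;

    /* Two passes: SPS entries first (count in low 5 bits of byte 5),
     * then PPS entries (count in the byte that follows the SPS list). */
    for (set = 0; set < 2; set++) {
        if (pos >= avcc_len)
            return -1;
        nsets = avcc[pos++] & (set == 0 ? 0x1F : 0xFF);
        while (nsets-- > 0) {
            size_t nal_len;
            if (pos + 2 > avcc_len)
                return -1;
            nal_len = ((size_t)avcc[pos] << 8) | avcc[pos + 1];
            pos += 2;
            if (pos + nal_len > avcc_len || written + 4 + nal_len > out_cap)
                return -1;
            memcpy(out + written, start_code, 4);
            memcpy(out + written + 4, avcc + pos, nal_len);
            written += 4 + nal_len;
            pos += nal_len;
        }
    }
    return (int)written;
}
```

Note that the loop copies every PPS entry, not just the first one, which is why question 5 below matters to me.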
3) If the previous method is not the right one, how should I hand the SPS/PPS information to the VPU? Should I prepend this extradata to the stream?
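If prepending is the way, I assume the per-frame payload also has to be rewritten from avcC-style length-prefixed NALs to start codes before it goes into the bitstream buffer. A sketch of that step, assuming 4-byte length prefixes (lengthSizeMinusOne == 3 in my extradata); the helper name is mine:

```c
#include <stddef.h>
#include <stdint.h>

/* Rewrite one avcC-format access unit in place: replace each 4-byte
 * big-endian NAL length prefix with the 00 00 00 01 Annex-B start code.
 * Only valid when the stream uses 4-byte prefixes (lengthSizeMinusOne == 3);
 * shorter prefixes would require re-copying into a larger buffer.
 * Returns 0 on success, -1 if the buffer does not parse cleanly. */
static int lengths_to_start_codes(uint8_t *buf, size_t len)
{
    size_t pos = 0;
    while (pos + 4 <= len) {
        size_t nal_len = ((size_t)buf[pos]     << 24) |
                         ((size_t)buf[pos + 1] << 16) |
                         ((size_t)buf[pos + 2] << 8)  |
                          (size_t)buf[pos + 3];
        if (nal_len == 0 || pos + 4 + nal_len > len)
            return -1;
        buf[pos] = buf[pos + 1] = buf[pos + 2] = 0x00;
        buf[pos + 3] = 0x01;
        pos += 4 + nal_len;
    }
    return pos == len ? 0 : -1;
}
```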
4) Where can I find more detail behind a RETCODE_FAILURE, which by itself tells me nothing? (I have already read VPU_API_RM_L3.0.35_1.1.0.pdf.)
5) What happens when there is more than one PPS? Doesn't the VPU support more than one? The gst-fsl-plugins-3.5.7-1.0.0 example uses only the first one, if any.
Note: I managed to decode a stream from a camera that carried only SPS information and no PPS in the stream. But decoding fails with an H.264 file in avcC format that contains PPS information, and it fails again with a stream from another camera that contains PPS information.