I am trying to decode according to the VPU reference manual.
VPU_DecDecodeBuf()'s return value is Success; however, *pOutBufRetCode is set to 0x101, which means the following bit flags are set: VPU_DEC_INPUT_USED and VPU_DEC_NO_ENOUGH_INBUF.
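For reference, here is how I am interpreting and testing the flags. This is a minimal sketch: the flag values below are assumptions inferred from the 0x101 I observed (vpu_wrapper.h is the authoritative source), not copied from the header.

```c
/* Assumed flag values, inferred from the observed *pOutBufRetCode of 0x101;
 * the authoritative definitions are in vpu_wrapper.h. */
#define VPU_DEC_INPUT_USED       0x001  /* decoder consumed the input chunk */
#define VPU_DEC_NO_ENOUGH_INBUF  0x100  /* decoder wants more bitstream data */

/* *pOutBufRetCode is a bitmask, so each flag is tested with bitwise AND
 * rather than comparing the whole value against a single constant. */
static int vpu_input_consumed(int bufRetCode)
{
    return (bufRetCode & VPU_DEC_INPUT_USED) != 0;
}

static int vpu_needs_more_input(int bufRetCode)
{
    return (bufRetCode & VPU_DEC_NO_ENOUGH_INBUF) != 0;
}
```

With these values, 0x101 decomposes into exactly the two flags named above: the input was accepted, but the decoder does not yet have enough bitstream data to produce a frame.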
How is this "INBUF" (input buffer?) supposed to be set in the first place? I assumed it could be set via VPU_DecRegisterFrameBuffer(), but that API does not appear in the manual's "Decoding Calling Sequence".
Any hints (eg sample source) would be appreciated!
>My question remains, why is VPU_DEC_NO_ENOUGH_INBUF set when I try to decode the output of my encode function?
So I got this fixed.
From what I understand of the actual behavior, for the VPU to decode H.264 properly the input encoded stream must contain SPS/PPS NAL units, which my encode function initially failed to include. Adding the SPS/PPS NAL units to the encoded file resolved my case.
To NXP: debugging this problem would have been easier if the return code were more meaningful than VPU_DEC_NO_ENOUGH_INBUF (to this day, I still have no idea what it means).
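For anyone who wants to verify their bitstream before feeding the VPU, this is the kind of check I used: scan the Annex-B stream for SPS/PPS NAL units. This is generic H.264 parsing (the NAL type is the low 5 bits of the byte after a 00 00 01 start code; 7 = SPS, 8 = PPS), not an NXP API, and the function name is my own:

```c
#include <stddef.h>
#include <stdbool.h>

/* Scan an H.264 Annex-B buffer for SPS (NAL type 7) and PPS (NAL type 8)
 * units. A stream missing either is one plausible cause of
 * VPU_DEC_NO_ENOUGH_INBUF. */
static void scan_sps_pps(const unsigned char *buf, size_t len,
                         bool *has_sps, bool *has_pps)
{
    *has_sps = *has_pps = false;
    for (size_t i = 0; i + 3 < len; i++) {
        /* A 4-byte start code (00 00 00 01) also matches this 3-byte
         * pattern one position later, so one test covers both forms. */
        if (buf[i] != 0 || buf[i + 1] != 0 || buf[i + 2] != 1)
            continue;
        size_t nal = i + 3;
        switch (buf[nal] & 0x1F) {  /* low 5 bits = nal_unit_type */
        case 7: *has_sps = true; break;
        case 8: *has_pps = true; break;
        }
        i = nal;  /* skip past this start code */
    }
}
```

If either flag comes back false, the parameter sets need to be added to the stream before the first slice, which is exactly what fixed my case.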
Hi Rita,
I'm using the yocto-real-time-edge BSP. Does this have anything to do with my original question?
A bit of a background, I want to implement a simple app that can do both encode/decode.
For the encoding part, my input is a single YUV file. Based on the encoding APIs' return values, I think I was able to encode successfully (though the actual encoded image still needs to be verified visually).
Now, I want to decode the above "encoded image", so basically I am trying to decode it back to the input YUV file of my encode function.
I followed the "Decoding Calling Sequence" in the manual, but it seems I'm missing some steps before calling VPU_DecDecodeBuf(), as its *pOutBufRetCode parameter comes back as 0x101: VPU_DEC_INPUT_USED | VPU_DEC_NO_ENOUGH_INBUF.
I want to know more about VPU_DEC_NO_ENOUGH_INBUF, especially how to resolve it.
Thanks!
Hi @b_m ,
I have created case 00644508 with our VPU engineer; she will update you through the case.
If you have any questions, feel free to contact us.
Wish you a nice day.
Best Regards
Rita
Hi @Rita_Wang
Thanks for replying!
Is "00644508" a support ticket number of sorts? If so, is there a way I can view it to follow updates? Anyway, thanks for creating it!
I have an update regarding this post. I tried feeding in an H.264 file encoded by ffmpeg (instead of by my app's encode function), and VPU_DEC_NO_ENOUGH_INBUF was no longer set. Eventually, I even got VPU_DEC_OUTPUT_DIS set.
My question remains: why is VPU_DEC_NO_ENOUGH_INBUF set when I try to decode the output of my encode function? The encoded output starts with 0x00000001, so from my understanding it is an H.264 stream. Could the reason be that something is wrong with my encode function (a high chance, as I'm a beginner), or that some parameters passed to the decode VPU API don't match the output of my encode function? I want to understand the possible causes, as I need both the encode and decode functions to work.
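One check I plan to run on both files (the ffmpeg one and mine) is to look at the type of the first NAL unit. This is generic Annex-B parsing, not a VPU API, and the helper name is hypothetical:

```c
#include <stddef.h>

/* Return the nal_unit_type (low 5 bits of the byte after a 00 00 01 start
 * code) of the first Annex-B NAL unit in buf, or -1 if no start code is
 * found. Hypothetical helper for comparing two bitstreams. */
static int first_nal_type(const unsigned char *buf, size_t len)
{
    for (size_t i = 0; i + 3 < len; i++)
        if (buf[i] == 0 && buf[i + 1] == 0 && buf[i + 2] == 1)
            return buf[i + 3] & 0x1F;
    return -1;
}
```

If the ffmpeg file reports type 7 (an SPS) while my file reports a slice type such as 5 (IDR), that difference alone could explain why only one of them decodes.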
Do not worry, our engineer will update you by email; you will receive a message whenever there is an update. For now, we are confirming it for you.
Which BSP version are you using?