Hi NXP AI Champs
We have some test results from a bladder-size calculation project.
The goal of the project is to estimate the size of the bladder using AI.
The team uses ONNX and runs inference on both the CPU and the NPU with the same trained, quantized ONNX model.
However, even with the same quantized ONNX model, the bladder size inferred on the CPU is often about 10%–30% smaller than the size inferred on the NPU, and sometimes the opposite occurs.
Measured over 105 input images, most results differ by about 10%, and about 30 of them differ by more than 30%. The deviation does not follow any consistent ratio.
If the same quantized ONNX model is used, shouldn't the CPU and NPU inference results be nearly identical? I would like to ask why this happens, and whether there is any way to resolve it.
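To make the discrepancy measurable per image, the size estimates from the two runs can be compared directly. Below is a minimal sketch (not NXP tooling; the helper name and the sample values are hypothetical), assuming the per-image bladder-size estimates from the CPU run and the NPU run are available as NumPy arrays:

```python
import numpy as np

def compare_outputs(cpu_out, npu_out, tol=0.10):
    """Compare per-image size estimates from CPU vs NPU inference.

    cpu_out / npu_out: arrays of predicted bladder sizes (e.g. pixel
    counts from segmentation masks), one entry per input image.
    tol: relative-difference threshold for flagging an image.
    """
    cpu = np.asarray(cpu_out, dtype=np.float64)
    npu = np.asarray(npu_out, dtype=np.float64)
    # Relative difference of the NPU result against the CPU baseline;
    # the small epsilon guards against division by zero.
    rel_diff = np.abs(npu - cpu) / np.maximum(np.abs(cpu), 1e-9)
    return {
        "mean_rel_diff": float(rel_diff.mean()),
        "max_rel_diff": float(rel_diff.max()),
        "num_over_tol": int((rel_diff > tol).sum()),
    }

# Example with made-up size estimates for three images:
stats = compare_outputs([1000, 2000, 1500], [1100, 1900, 2100])
```

Logging these statistics per image (rather than only the aggregate) helps reveal whether the mismatch correlates with specific inputs or is spread uniformly.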
Thanks.
Regards,
Hi Joanxie
The AP used is the i.MX 8M Plus.
Part#: MIMX8ML8DVNLZAB (i.MX 8M Plus)
Regards,
I have already emailed you; please check it.
What processor do you use?