I am using an MKE18F512VLH16 MCU. Currently I am using two of the three ADCs: ADC0 and ADC2.
I used the ADC12_DoAutoCalibration function, provided by NXP, to calibrate each of the ADCs. When I got conversion results that I liked, I stored the calibration coefficients of each ADC into FlexRAM. When the system comes out of reset, the ADC restores and uses those calibration coefficients.
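For reference, here is roughly how I handle the save/restore. This is a simplified sketch: the register struct below is a stand-in for the real ADC_Type from the device header, the register names (OFS, G, CLPx) are what I believe the KE18F calibration registers are called per the reference manual, and the record/checksum layout is my own scheme, not anything from the SDK.

```c
#include <stdint.h>
#include <stddef.h>

/* Stand-in for the calibration-related ADC registers. In the real project
 * these are read from / written to the ADC_Type peripheral struct in the
 * device header; verify the names against your copy of MKE18F16.h. */
typedef struct {
    uint32_t OFS, G, CLPS, CLP3, CLP2, CLP1, CLP0, CLPX, CLP9;
} adc_cal_regs_t;

/* Image of one ADC's calibration kept in FlexRAM, with a magic word and a
 * checksum so a blank or corrupted record is never restored. */
typedef struct {
    uint32_t       magic;     /* ADC_CAL_MAGIC when the record is valid */
    adc_cal_regs_t regs;
    uint32_t       checksum;  /* integrity check over regs */
} adc_cal_record_t;

#define ADC_CAL_MAGIC 0xCA11B8A7u

static uint32_t adc_cal_checksum(const adc_cal_regs_t *r)
{
    const uint32_t *w = (const uint32_t *)r;
    uint32_t sum = 0;
    for (size_t i = 0; i < sizeof(*r) / sizeof(uint32_t); i++) {
        sum += w[i];
    }
    return ~sum;
}

/* Capture the coefficients right after a calibration you are happy with. */
void adc_cal_save(const adc_cal_regs_t *adc, adc_cal_record_t *out)
{
    out->regs     = *adc;
    out->checksum = adc_cal_checksum(&out->regs);
    out->magic    = ADC_CAL_MAGIC;
}

/* Restore after reset; returns 0 on success, -1 if the record is invalid
 * (in which case the caller should fall back to a fresh auto-calibration). */
int adc_cal_restore(adc_cal_regs_t *adc, const adc_cal_record_t *in)
{
    if (in->magic != ADC_CAL_MAGIC ||
        in->checksum != adc_cal_checksum(&in->regs)) {
        return -1;
    }
    *adc = in->regs;
    return 0;
}
```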
However, I noticed that ADC2 does not give me consistent results when using a stored calibration. ADC0 is better, but still not perfect. The results are consistent while the system is running; they are just different than they were before the last reset.
I read that the ADC needs to be calibrated every time it starts up. So I added a call to run one auto-calibration before restoring the stored NVM calibration. This did not help; I still see a fairly large difference in the results.
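The boot-time order of operations I ended up with looks roughly like this. The SDK call is shown as a stub so the snippet is self-contained; the real ADC12_DoAutoCalibration comes from fsl_adc12.h, and restore_stored_calibration stands in for my FlexRAM restore code, so treat both signatures here as placeholders to check against your SDK version.

```c
#include <stdint.h>

/* Stubs standing in for the NXP SDK and the FlexRAM restore code.
 * In firmware, ADC12_DoAutoCalibration comes from fsl_adc12.h and
 * restore_stored_calibration is the application's own routine. */
typedef int status_t;
#define kStatus_Success 0
typedef struct { int dummy; } ADC_Type;

static status_t ADC12_DoAutoCalibration(ADC_Type *base) { (void)base; return kStatus_Success; }
static int restore_stored_calibration(ADC_Type *base) { (void)base; return 0; }

/* Boot-time sequence: run one auto-calibration so the self-calibration
 * state machine has executed on this power-up, then overwrite the freshly
 * computed coefficients with the known-good set from FlexRAM. */
int adc_startup_calibrate(ADC_Type *base)
{
    if (ADC12_DoAutoCalibration(base) != kStatus_Success) {
        return -1;  /* calibration failed; don't trust conversions */
    }
    if (restore_stored_calibration(base) != 0) {
        return -2;  /* stored record missing or corrupt; caller can
                     * choose to keep the fresh calibration instead */
    }
    return 0;
}
```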
I was able to track some results across resets. Ch1 (blue) is ADC0, and Ch5 (orange) is ADC2. The two graphs show two different duty cycles for the PWM that drives some circuitry, which is fed into the ADC. The point isn't the difference between the two channels, but how the voltage read by each ADC changes from reset to reset.
I am wondering why I see such variation in results, and also why ADC0 seems to perform better than ADC2. They both use stored calibrations, so I would expect them to be consistent across resets.
I wanted to start a discussion to see if anyone could help with this issue.