I am developing a model in Simulink with MBDT 1.3. In the ADC block, there is a function called Adc_Calibrate. What does this function do?
I put it in an enabled subsystem, and this is the output (raw data from one of the ADC blocks on the same AdcHwUnit as the one selected in Adc_Calibrate):
When the subsystem is enabled, the raw values look like this on FreeMASTER:
And when it is not enabled, it looks like this:
The graphs look similar, but the maximum values differ: when the subsystem is not enabled, the raw value peaks at about 150, and when it is enabled it reaches around 1400.
What is the reason for this and what exactly is the purpose of the Adc_Calibrate function?
Thanks
Hello Rishi Chapekar and Amol Parikh,
The Adc_Calibrate function inside the ADC blocks implements the ADC calibration activity, which can be added in the Initialize Subsystem:
The Adc_Calibrate function helps the ADC peripheral minimize reading errors and get closer to the ideal response:
Figure 3. Effect of calibration error on ADC response
An in-depth analysis of the recommended ADC calibration can be found here: 16-bit SAR ADC calibration - NXP Community
It is recommended to use the ADC Calibration block only in the initialization part of the model; running it repeatedly in a triggered or enabled subsystem will distort the converted values, as seen in your results. A minimal sketch of the intended placement is shown below.
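For illustration only, here is a rough C sketch of how the generated code is typically organized: the calibration call belongs in the one-time initialization function, while the periodic step function only performs normal conversions. The Adc_Calibrate name comes from this thread; the exact signature, the read helper, and the unit/channel identifiers are assumptions, not the actual MBDT-generated API.

```c
#include <stdint.h>

/* Assumed prototypes standing in for the driver calls used by the generated code. */
extern void     Adc_Calibrate(uint8_t hwUnit);               /* calibration, run while the ADC is idle */
extern uint16_t Adc_ReadRaw(uint8_t hwUnit, uint8_t channel); /* normal single conversion (illustrative) */

#define ADC_HW_UNIT_0    0u   /* hypothetical hardware unit index */
#define ADC_CHANNEL_POT  12u  /* hypothetical channel index */

/* Initialize-subsystem equivalent: calibration runs exactly once at start-up,
 * so all later conversions use the corrected gain/offset. */
void model_initialize(void)
{
    Adc_Calibrate(ADC_HW_UNIT_0);
}

/* Step-function equivalent: only normal conversions here. Re-running
 * Adc_Calibrate in this periodic context interferes with the conversions,
 * which matches the distorted readings reported above. */
uint16_t model_step(void)
{
    return Adc_ReadRaw(ADC_HW_UNIT_0, ADC_CHANNEL_POT);
}
```

In other words, place the calibration block in the Initialize Subsystem of the model rather than inside a subsystem that is triggered or enabled at run time.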
Best practices to increase ADC accuracy can also be found in AN12217: S32K1xx ADC guidelines, specs, and configuration – Application Note (nxp.com).
Best regards,
Stefan V.
I am also facing the same problem. Does anyone have a solution for it?