I'm having some trouble getting the ADC calibration to work. The description in Rev 4 of the RM refers to an "ADC_TEST" register that is not described in the interface description. Also, the expected effect of calibration on the registers is not explained: does the correct value get put into the ADC_CAL register automatically, or must it be copied there from R1 (which, incidentally, shows up as R0 in the Peripheral display under DS-5)? I don't see the CALF flag, and the only "interesting" register content is a small number (2 or 3) in R0 (R1); ADCn_CAL is still 0.
Normal data acquisition is working fine; I'm using a PIT interrupt to sample at 1 kHz and then start another conversion. It's just the calibration that is giving me trouble.
Any thoughts?
Thanks,
Charlie
Dear Charlie,
Please find our comments below:
==========================================
Q1.
… some trouble getting the ADC calibration to work:
The description in Rev 4 of the RM refers to the "ADC_TEST" register, which is not described in the interface description.
A1.
This register is for internal use only (it will be removed from the text); all customers need to know is that its default value is already set correctly for calibration.
==========================================
Q2.
Also, the expected effect of the calibration on the register is not explained - namely, does the correct value get put into the ADC_CAL register automatically, or must it be copied there from R1 (which incidentally shows up as R0 in the Peripheral display under DS-5)?
I don't see the CALF flag but the only "interesting" register content is a small number (2 or 3) in R0 (R1), ADCn_CAL still being 0.
A2.
As defined in the Calibration section, after setting the CAL bit (bit 7) in ADC_GC, the user only has to monitor this bit and wait for it to clear to zero (0) automatically.
At the end of calibration, the user has to check the CALF bit of ADC_GS: if it equals 1, the calibration failed. When it passes, the calibrated value is automatically loaded into the ADC_CAL[CAL_CODE] field.
Also, as mentioned in the Calibration section, if a reset is applied after calibration, the user must either re-calibrate or save the prior calibration value and write it back once the reset state is over. The ADC has only one reset input; whenever that reset is applied, either re-calibration or reloading of the CAL_CODE saved from an earlier calibration run is needed.
==========================================
Q3.
I don't see the CALF flag, but the only "interesting" register content is a small number (2 or 3) in R0 (R1), ADCn_CAL still being 0.
A3.
If this is still relevant, could you please give additional details?
==========================================
Q4.
Normal data acquisition is working fine; I'm using a PIT interrupt to sample at 1 KHz and then start another conversion. It's just the calibration that is giving me trouble…
…I … had no difficulty getting basic conversions working simply by reading the RM. However, the calibration does not appear to work exactly as described in the RM. … the MQX source code … didn’t really tell me much (and I’m not even sure it’s for the Vybrid). In particular, the RM says to poll the CAL flag and then check CALF. It seems that the calibration may still be in progress when CAL clears.
A4.
It is supposed to work as described. The user has to check the CAL bit of ADC_GC after setting it.
It clears automatically when calibration is over, and the result can then be seen in the CALF bit (0 = pass, 1 = fail). The ADC_CAL[CAL_CODE] field is loaded automatically if calibration passes.
The user can also choose to receive an interrupt instead of polling the CAL bit, as described in our documentation:
“At the end of a calibration sequence the COCO[0] bit of the ADC_HS register will be set.
The ADC_HCn[AIEN] bit can be used to allow an interrupt to occur at the end of a calibration sequence. If, at the end of calibration routine, the CALF bit is not set, the automatic calibration routine completed successfully.”
==========================================
Sincerely yours, Naoum Gitnik.
Dear Charlie,
Before we dig deeper, could you tell me whether you have looked into our Vybrid BSP (Board Support Package)? For example, the examples for the two supported OSs:
MQX: http://www.freescale.com/webapp/sps/site/overview.jsp?code=MQXSWDW
Linux: https://linuxlink.timesys.com/register/freescale.
Its function is two-fold:
Sincerely yours, Naoum Gitnik.
Good Morning Naoum,
I am using the OS from MQX 4.0.1. However, this is only an interim solution, since our Vybrid product will actually use AUTOSAR (a customer requirement).
And yes, I have used some of the I/O drivers from MQX. However, we have a lot of mid-level code that will be reused for this project, and it requires a specific API and level of performance from the low-level drivers. Therefore I am using the MQX I/O only as an interim solution where necessary (e.g., I used the UART driver initially, but it does not have enough throughput for our application, so I have replaced it with a DMA-based solution).
In any case, I felt the A/D peripheral was simple enough that it would be no big deal to write a driver for it, and in fact had no difficulty getting basic conversions working simply by reading the RM. However, the calibration does not appear to work exactly as described in the RM. I looked at the MQX source code for an A/D driver but it didn’t really tell me much (and I’m not even sure it’s for the Vybrid). In particular, the RM says to poll the CAL flag and then check CALF. It seems that the calibration may still be in progress when CAL clears.
Regards,
Charlie
1. I observe the same issue with self-calibration: no value is written to ADC1_CAL after calibration completes successfully (the CAL bit clears and the CALF bit is not set), although some value does get written to ADC1_R0. This happens when VBGH/VBGL is selected as the voltage reference; if VREFH/VREFL is selected, with all other settings the same, calibration does produce a result in ADC1_CAL. In the first case, where VBGH/VBGL is selected and calibration produces no result, subsequent conversions seem to have a significant offset from the true value. Any suggestions?
2. I need ADC performance/accuracy characterization data with the internal bandgap used as the voltage reference. How can I get that data?
Thanks,
Anna
Anna,
Did you resolve this issue? I am having something similar.
If you can, please post the code/solution.
thanks,
Tomasz
Hello Tomasz,
No, this issue was not resolved; we resorted to a one-time 'manual' calibration in our manufacturing process.
Anna
Anna,
Thanks for the update. Glad you found a workaround.
Best,
Tomasz
It's unfortunate that NXP was not able to help us (they claim a 0 output value is the correct one, but according to my testing it was always 0 on several chips), as the workaround reduces accuracy to perhaps marginal levels.