Hi Paul,
Thanks for the fast reply. My calibration function looks like this:
static uint32_t adc_calibrate(void)
{
    uint32_t ad_calib = 0;

    irq_disableInt(irq_ADC);                              // Disable the ADC interrupt while calibrating
    ADC0_SC2 &= ~ADC_SC2_ADTRG_MASK;                      // Select the software conversion trigger for calibration (ADTRG lives in SC2)
    ADC0_SC3 &= ~(ADC_SC3_ADCO_MASK | ADC_SC3_AVGS_MASK); // Single conversion; clear the AVGS bitfield before writing it below
    ADC0_SC3 |= (ADC_SC3_AVGE_MASK | ADC_SC3_AVGS(3));    // Turn hardware averaging on at its maximum (32 samples)
    ADC0_SC3 |= ADC_SC3_CAL_MASK;                         // Start calibration

    while ((ADC0_SC1A & ADC_SC1_COCO_MASK) != ADC_SC1_COCO_MASK)
        ;                                                 // Wait until calibration completes

    if ((ADC0_SC3 & ADC_SC3_CALF_MASK) == ADC_SC3_CALF_MASK)
    {
        return FALSE;                                     // Calibration failed
    }

    ad_calib  = ADC0_CLP0;                                // Sum the plus-side calibration results
    ad_calib += ADC0_CLP1;
    ad_calib += ADC0_CLP2;
    ad_calib += ADC0_CLP3;
    ad_calib += ADC0_CLP4;
    ad_calib += ADC0_CLPS;
    ad_calib /= 2;                                        // Divide the sum by two
    ad_calib |= 0x8000;                                   // Set the MSB
    ADC0_PG = ad_calib;                                   // Write the plus-side gain register
    ADC0_SC3 &= ~ADC_SC3_CAL_MASK;                        // Clear the CAL bit
    return TRUE;                                          // Calibration successful
}
The rest of the setup code calls this calibration function as shown below:
if (adc_calibrate())                                      // If calibration succeeded
{
    ADC0_SC1A |= ADC_SC1_AIEN_MASK;                       // Enable the conversion-complete interrupt
    ADC0_CFG1 |= ADC_CFG1_MODE(1);                        // Select 12-bit mode
    ADC0_SC3  &= (uint8_t)(~ADC_SC3_AVGE_MASK);           // Disable hardware averaging
    //ADC0_SC3 = ADC_SC3_AVGE_MASK | ADC_SC3_AVGS(0);     // 4-sample averaging (currently disabled)
}
After this the conversion is started with:
ADC0_SC1A = (CHANNEL_TO_MEASURE & ADC_SC1_ADCH_MASK) | ADC_SC1_AIEN_MASK; // Start the next conversion
The result is read in the interrupt handler with:
value = ADC0_RA;
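In case it helps, the handler is essentially just the following minimal sketch (the vector name ADC0_IRQHandler and the variable names here are illustrative, not my exact code):

volatile uint16_t adc_value;                              // Last conversion result
volatile uint8_t  adc_ready = 0;                          // Flag for the main loop

void ADC0_IRQHandler(void)
{
    adc_value = (uint16_t)ADC0_RA;                        // Reading RA returns the result and clears COCO
    adc_ready = 1;                                        // Main loop picks this up and starts the next conversion
}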
I hope this was what you were looking for. In case it makes any difference, the MCU runs on 3.3 V, and the analog reference voltage is 2.048 V.
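For reference, in 12-bit mode one LSB is 2.048 V / 4096 = 0.5 mV, so I convert counts to volts with a small helper like this (the names are just illustrative):

#define ADC_VREF_VOLTS  2.048f                            /* External analog reference */
#define ADC_FULL_SCALE  4096.0f                           /* 2^12 counts in 12-bit mode */

static inline float adc_counts_to_volts(uint16_t counts)
{
    return ((float)counts * ADC_VREF_VOLTS) / ADC_FULL_SCALE; /* 1 LSB = 0.5 mV */
}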
Thanks for the help!
Kind regards
Lars