In most (all?) of the LPC series the ADC performance specification figures include gain error (E_G) and absolute error (E_T).
LPC1110/11/12/13/14/15 (10-bit ADC), +/- 0.6 % gain error, +/- 4 LSB absolute error.
LPC1311/13/42/43 (10-bit ADC), +/- 0.6 % gain error, +/- 4 LSB absolute error.
LPC1759/58/56/54/52/51 (12-bit ADC), 0.5 % gain error, 4 LSB absolute error.
LPC2141/42/44/46/48 (10-bit ADC), +/- 0.5 % gain error, +/- 4 LSB absolute error.
LPC2364/65/66/67/68 (10-bit ADC), +/- 0.5 % gain error, +/- 4 LSB absolute error.
With the following definitions:
"The gain error (E_G) is the relative difference in percent between the straight line fitting the actual transfer curve after removing offset error, and the straight line which fits the ideal transfer curve."
"The absolute error (E_T) is the maximum difference between the center of the steps of the actual transfer curve of the non-calibrated ADC and the ideal transfer curve."
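To see how these two definitions interact, here is a rough sketch (my own model, not from any NXP document) that assumes zero offset error and perfect linearity, so that a pure gain error is the only thing tilting the actual transfer curve away from the ideal one:

```python
# Assumption: zero offset error and ideal linearity, so only a gain error
# tilts the actual transfer curve. The step centers then deviate from the
# ideal curve by an amount that grows linearly and peaks at full scale.

N_BITS = 10
GAIN_ERROR = 0.005  # 0.5 % expressed as a fraction

# Deviation of each step center from the ideal transfer curve, in LSB.
deviations = [code * GAIN_ERROR for code in range(2 ** N_BITS)]

# Under these assumptions E_T (the maximum deviation) is set entirely
# by the gain error at the top of the range.
e_t = max(deviations)
print(e_t)  # about 5.1 LSB for a 0.5 % gain error on a 10-bit ADC
```

Under this toy model the absolute error would have to be at least as large as the full-scale effect of the gain error, which is exactly the tension described below.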
Now there have been some posts on ADC specifications on this forum, most notably Pedro Augusto Panecatl Salas' post, but the terminology used there is slightly different.
To me the absolute error (E_T) from the LPC product data sheets looks to be the equivalent of the total unadjusted error (TUE). But this number then seems to contradict the gain error specification.
A gain error of 0.5 % on a 10-bit ADC amounts to about 5 LSB (0.005 × 1024 ≈ 5.1); on a 12-bit ADC it amounts to about 20 LSB (0.005 × 4096 ≈ 20.5). Both exceed the specified absolute error figures.
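To make that arithmetic explicit, a small helper (my own illustration, not from the data sheets) converting a percentage gain error into its worst-case LSB equivalent at full scale:

```python
def gain_error_lsb(gain_error_percent: float, bits: int) -> float:
    """Worst-case deviation (in LSB) caused by a pure gain error.

    A gain error tilts the transfer curve, so the deviation grows
    linearly with the input code and is largest at full scale.
    """
    return gain_error_percent / 100.0 * (2 ** bits)

print(gain_error_lsb(0.5, 10))  # ~5.1 LSB on a 10-bit ADC
print(gain_error_lsb(0.5, 12))  # ~20.5 LSB on a 12-bit ADC
```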
What numbers should be used to determine the total unadjusted error for the LPC ADCs?