Daniel White

Help with preprocessor math

Discussion created by Daniel White on Jun 22, 2006
Latest reply on Jun 27, 2006 by CompilerGuru
I have the following macro:
#define D_GAIN  ((32 * 16 * 1000) / (50 * 16))
When I use it, I always typecast to whatever size I am assigning into, like:
gain = (uWord)D_GAIN;
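For reference, uWord and uLong come from our own headers; assume something roughly like the following for this post (int is 16 bits on this target):
typedef unsigned int  uWord;   /* 16-bit unsigned (assumed) */
typedef unsigned long uLong;   /* 32-bit unsigned (assumed) */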
 
But it does not pre-process correctly: the result should be 640, yet it comes out as 65521. It appears that I have to typecast to prevent an overflow when it multiplies 32 * 16 * 1000. If I write it like this, it works:
#define D_GAIN  (((uLong)32 * (uLong)16 * (uLong)1000) / (uLong)(50 * 16))
This suggests that the preprocessor is trying to do this math as a 16-bit number. Shouldn't the preprocessor use floating-point math, if necessary, for the numbers in #defines until they are needed by the compiler? Is there some compiler option setting that turns off floating-point math in the preprocessor?
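To show what I think is happening, here is a minimal standalone test I can build with a desktop compiler, using the <stdint.h> fixed-width types to mimic the target's 16-bit int (the D_GAIN_16BIT / D_GAIN_32BIT names and the stdint.h types are just for this illustration, not from our project):

#include <stdio.h>
#include <stdint.h>

/* Original form: every operand is a plain int, so when int is only 16 bits
 * the product 32 * 16 * 1000 (= 512000) overflows before the division.
 * The cast through int16_t mimics that 16-bit intermediate on a 32-bit host. */
#define D_GAIN_16BIT  ((int16_t)(32 * 16 * 1000) / (50 * 16))

/* Workaround: force the intermediate math into a 32-bit type. */
#define D_GAIN_32BIT  (((int32_t)32 * 16 * 1000) / (50 * 16))

int main(void)
{
    uint16_t gain_bad  = (uint16_t)D_GAIN_16BIT;  /* 65521 on a typical two's-complement machine */
    uint16_t gain_good = (uint16_t)D_GAIN_32BIT;  /* 640 */

    printf("16-bit intermediate: %u\n", (unsigned)gain_bad);
    printf("32-bit intermediate: %u\n", (unsigned)gain_good);
    return 0;
}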
 
-Dan
