My specific case is an old CW3.1, but I suspect the problem is universal for any CW for HCS12.
I am trying to optimize down some code in an ISR. This is the code:
static uint8 indataCount;
static uint8 indataSize;
else if(indataCount < indataSize+2)
copy = TRUE;
The integer promotions in C are applied twice in this expression. indataSize is first promoted to an int, then 2 is added. indataCount is likewise promoted to an int for the comparison, which would also have happened if the +2 wasn't there, since the C language can't perform any arithmetic directly on 8-bit integers.
The code translates to the following:
4249: else if(indataCount < indataSize+2)
006e 87 CLRA
006f b745 TFR D,X
0071 f60000 LDAB indataSize
0074 c30002 ADDD #2
0077 3b PSHD
0078 aeb1 CPX 2,SP+
007a 2d13 BLT *+21 ;abs = 008f
I didn't expect any optimization here; the C language effectively forces this comparison to be done in 16 bits.
However, with a little manual tweak, I get much faster code:
4249: else if(indataCount < (uint8)(indataSize+2))
0071 b60000 LDAA indataSize
0074 8b02 ADDA #2
0076 b10000 CMPA indataCount
0079 221c BHI *+30 ;abs = 0097
Now the compiler has suddenly told the C standard to get lost and skipped the integer promotions (which is fine with me in this specific case). It seems to me that the compiler assumes the programmer is clueless about the integer promotions, and abandons the C standard at its own whim, in an inconsistent way.
1. How does the compiler decide where it can skip integer promotions in my code and optimize? Where is this non-standard behavior documented? I can't find anything about it in the compiler docs.
2. Why didn't the compiler optimize the indataSize+2 subexpression while it did optimize the uint8 < uint8 one?
3. Is there a compiler optimizing setting that toggles this behavior on/off?