> I use macros to avoid replicating literals
How? Macros will not reduce literals: the compiler just does a "text replacement" wherever it finds such a macro, and the result is the same as if you had typed the literal out directly in the code. Though of course, some constants must be defined with #define, such as array sizes: in C90, a const variable does not count as a constant expression.
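A minimal sketch of that exception (BUF_SIZE and the uint8 typedef are made-up names here):

#define BUF_SIZE 16u

typedef unsigned char uint8;          /* or your platform's type header */

static uint8 buf[BUF_SIZE];           /* fine */

/* static const uint8 SIZE = 16u;
   static uint8 buf2[SIZE];              won't compile as C90 */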
The proper solution to reduce literal memory size is instead to declare all literals as file-scope constants:
static const uint8 SOME_VAL = 123;
If the above is placed at file scope, it will typically be allocated in dedicated ROM rather than repeated throughout program memory. And with compilers like CW you can #pragma it to a desired memory location. This will not only reduce program size but also ease debugging.
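On the HC(S)08/S12 versions of CW that placement looks roughly like this (a sketch; the pragma spelling varies between CW targets, and MY_CONST_PAGE is a made-up segment name that also has to be mapped in the PRM linker file):

#pragma CONST_SEG MY_CONST_PAGE
static const uint8 SOME_VAL = 123u;   /* the linker places this segment at a known address */
#pragma CONST_SEG DEFAULT             /* back to the default const segment */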
> AUTOSAR seems to suggest that macros be used to separate shear layers for portability
I don't know AUTOSAR, but it's no coincidence that chapter 19 is the longest one in the whole of MISRA-C. The entire pre-processor is poorly defined by the C standard.
> Does information hiding worsen or improve readability or maintainability? I think it is possible to have too much or too little, depending. I find that each stakeholder wants ALL information hidden EXCEPT his.
Information hiding is good, but you don't need macros for it.
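The standard C technique is the opaque type: declare the struct in the header, define it only in the .c file. A minimal sketch with made-up names:

/* widget.h */
typedef struct Widget Widget;   /* incomplete type: callers may hold a Widget* but can't touch the members */
Widget* Widget_get (void);
void Widget_setState (Widget* w, int state);

/* widget.c */
#include "widget.h"
struct Widget { int state; };   /* the real definition, hidden from callers */
static Widget the_instance;     /* statically allocated, no malloc needed */
Widget* Widget_get (void) { return &the_instance; }
void Widget_setState (Widget* w, int state) { w->state = state; }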
> So, you would advise that the conventional
> #define setReg8Bits(RegName, SetMask) (RegName |= (byte)(SetMask))
> be a function rather than a macro?
Yes, because functions have stronger typing. You can pass anything to that macro: large ints, pointers, arrays, floats... and if you get any compiler warnings/errors at all, they will likely be of a very bemusing nature. The typecast inside that macro only makes things worse, as it will hide most such type-related errors.
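For comparison, a function version might look like this. A sketch: a function can't take a register *name* the way the macro does, so it takes the register's address instead (SOME_REG is a made-up register, uint8 as before):

static void setReg8Bits (volatile uint8* reg, uint8 setMask)
{
    *reg |= setMask;
}

...
setReg8Bits (&SOME_REG, 0x01u);   /* pass a pointer or an array as the mask here and you get a clear diagnostic */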
> Most of my information hiding is done in structs; THAT is what has made the code more readable and easier to change for me. Most of my use of macros has been in definitions of those structs.
This is likely acceptable; what you should avoid is executable code inside macros. One common use of such struct-related macros is to reduce the amount of typing for nested structs. Example:
typedef struct { int val; } X;
typedef struct { X x; } Y;
typedef struct { Y y; } Z;
Z z;
z.y.x.val = something;
Instead of nested struct notation, the code is often clearer if you write:
#define z_val (z.y.x.val)
...
z_val = something;
The above method is common in register maps and data protocol declarations, and I would agree that this particular example increases the readability of the code. Just note the parentheses around the expansion. For this particular body they don't change the parsing, since the "dot" operator, [] and -> are all postfix operators sharing the same (highest) precedence, but MISRA requires a macro like this to expand to a parenthesised expression, and the parentheses protect you the day the body changes into an expression with lower-precedence operators.
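That hazard appears as soon as a macro body contains an operator of lower precedence; a contrived sketch reusing the struct above:

#define BAD_DOUBLE_VAL   z.y.x.val + z.y.x.val
#define GOOD_DOUBLE_VAL  (z.y.x.val + z.y.x.val)

result = BAD_DOUBLE_VAL * 2;    /* expands to z.y.x.val + z.y.x.val * 2 -- wrong */
result = GOOD_DOUBLE_VAL * 2;   /* expands to (z.y.x.val + z.y.x.val) * 2 */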
> Also, each object code instruction must also be ultimately two-way traceable to specification and to test data.
Macros and inline functions are no different in this regard: both end up expanded into program memory at every place of use. If you need to trace the actual object code, I'd say ordinary functions are a must, since you can #pragma allocate them at a known memory location.
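Same idea as the CONST_SEG sketch above, but for code (again with a made-up segment name that has to be mapped in the PRM file):

#pragma CODE_SEG MY_TRACED_CODE
void criticalFunction (void)
{
    /* ... code that must be traceable to a known address ... */
}
#pragma CODE_SEG DEFAULT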