K64 - ARM Release vs Debug Code

JHinkle
Senior Contributor I

For the last 40+ years, I have never put release-build code into production.

I felt all the debug work was done in debug.  The product worked in debug.  The last thing I wanted was to compile in release mode and then end up with an issue in the field that did not exist back in the lab.

Now that I'm retired and can play, I decided to take my K64 debug code and compile it into release (optimized) code.

I am using Rowley's Crossworks IDE which uses the GCC software tool chain.

Right out of the gate -- the release code failed!

Is this common?

I can see a reduction in code size, and I suspect the smaller code also brings a speed improvement.

BUT -- I can't find the issue in release code because the issue does not exist in debug code.

Those of you who place ARM devices into production -- do you leave the code in debug, where all of your testing took place, or are you comfortable placing release code in the field?

Thanks.

Joe

7 Replies

JHinkle
Senior Contributor I

Thanks Bob.

My recent reading made me aware of issues caused by not using 'volatile' correctly with optimization.

I was aware of what 'volatile' did -- just not the full impact and ramifications when it came to ARM generated code.

I was not aware of the M0 -Os issue.

Thanks again.

Joe

bobpaddock
Senior Contributor III

Always ship the code that you tested the most.  As Erich and Mark said, it makes little sense to have multiple targets in the embedded space.

As to the actual failure: the most common reason is that something which should be marked 'volatile' is not, and this frequently shows up when the optimization level changes.
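
As a minimal sketch of that failure mode (the flag and ISR names are hypothetical, not taken from Joe's project): a flag set in an interrupt handler and polled in main-line code must be 'volatile', otherwise GCC at higher optimization levels is free to read it once and spin forever.

#include <stdbool.h>

/* Hypothetical flag shared between an ISR and the main loop. */
static volatile bool uart_rx_ready = false;   /* remove 'volatile' and an
                                                 optimized build may never
                                                 see the ISR's write */

void UART0_IRQHandler(void)          /* hypothetical ISR name */
{
    uart_rx_ready = true;
}

void wait_for_byte(void)
{
    /* With 'volatile' the flag is re-read from memory on every pass.
       Without it, -O2 can hoist the load and turn this into while(1),
       which "works" at -O0 and hangs in the release build. */
    while (!uart_rx_ready) {
        ;
    }
    uart_rx_ready = false;
}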

With GCC the generated code is completely different between optimization levels, and people tend to blame the compiler for the odd behavior when the cause is usually dubious practices in their own code.

Again for GCC, changes in optimization type, such as -Os for 'small' vs -O2 for speed, can also lead to problems.

On parts such as the L family, which do not support unaligned bus access, -Os can generate perplexing bus faults: one of the things it does to make 'small' code is not align items such as arrays on 32-bit boundaries.  On parts that do handle unaligned access there can instead be speed penalties, which can be critical if an interrupt handler's timing changes the system's behavior.
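
A hedged sketch of that alignment trap (the buffer and function names are made up for illustration): code that reads a byte buffer as a 32-bit word only works while the toolchain happens to place the buffer on a 4-byte boundary.

#include <stdint.h>
#include <string.h>

static uint8_t msg_buf[12];    /* nothing here guarantees 4-byte alignment */

uint32_t first_word_unsafe(void)
{
    /* Relies on msg_buf landing on a 32-bit boundary.  If -Os packs it
       at an odd address, the resulting LDR bus-faults on cores without
       unaligned-access support (and is undefined behaviour regardless). */
    return *(const uint32_t *)msg_buf;
}

uint32_t first_word_safe(void)
{
    /* memcpy lets the compiler emit whatever access width is legal;
       alternatively, force alignment with __attribute__((aligned(4))). */
    uint32_t w;
    memcpy(&w, msg_buf, sizeof w);
    return w;
}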

So it would be interesting to know what is actually changing between 'debug' and 'release'.


JHinkle
Senior Contributor I

Thanks Mark.

Joe

JHinkle
Senior Contributor I

My question was a little misleading.

My IDE uses two configurations named "Debug" and "Release" -- most likely to mimic desktop IDEs like Visual Studio.

My question was: IF you do most of your development using non-optimized code, are you comfortable with, and do you switch to, optimized code when you release for production?

I have NOT looked at Crossworks' settings for the "Release" configuration, but I know it's configured for optimized code.  Yes Mark -- I have just used their default settings.

My past 40 years in embedded have been with 8- and 16-bit micros, so I'm just getting into 32-bit ARM now that I'm retired and learning all over again.

I just found that the first time I engaged optimized code for the K64, its behavior was neither as expected nor desired.

So my feeling now is that, as long as I don't have a code-size or performance issue, I'm going to keep compiling all my code non-optimized.

Joe

mjbcswitzerland
Specialist V

Joe

Although I beat around the bush a bit with the answer, I did highlight the fact that in embedded systems I ONLY work with production code, whether one names it "release" or "debug".
Specifically, I use the highest optimisation allowed by the project specification (some specifications don't allow the highest level due to worries about compiler bugs) for all work, although I know developers who simply can't debug fully optimised GCC code and so switch temporarily to a lower level to debug a particular piece of code.

Certainly I wouldn't be comfortable doing all work with one target and releasing another since it simply hasn't been continuously tested.

Regards

Mark

mjbcswitzerland
Specialist V

Joe

Release/Debug are two project "targets" but what they actually mean depends on the way that the targets are configured.
When using Rowley Crossworks I tend to set up the debug target to run from SRAM (to save flashing each time) and possibly set the optimisation level lower to simplify debugging, but that is just how I tend to set up Crossworks.  How you have set up the two targets is something you should know, unless you took the setups from examples and didn't check.

In any case, check the differences between the setups: whether the linker script is correct in the release case, the level of optimisation, any additional defines controlling debug output, etc. You can experiment by making each option the same and finding out which one suddenly causes the failure, then identify the reason and correct it. Usually the reason is obvious, although some optimisations may genuinely expose failures.
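
One hedged example of how a configuration define alone can break a release build (I haven't checked what Crossworks' default release target defines, so treat NDEBUG here as an assumption): any side effect placed inside assert() disappears when NDEBUG is defined.

#include <assert.h>

extern int spi_transfer(unsigned char byte);   /* hypothetical driver call */

void send_command_broken(unsigned char cmd)
{
    /* If the release target defines NDEBUG, the whole expression is
       compiled away and the byte is never sent -- the debug build only
       ever worked by accident. */
    assert(spi_transfer(cmd) == 0);
}

void send_command_fixed(unsigned char cmd)
{
    /* Keep the side effect outside the assert so both targets behave
       the same. */
    int status = spi_transfer(cmd);
    assert(status == 0);
    (void)status;   /* avoid an unused-variable warning when NDEBUG is set */
}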

I can't speak for all developers, but I work with many companies where the developers use debug mode most of the time and change to release only towards the end of the work, or when the code size becomes too large. One major reason is that they find it difficult to debug optimised GCC code, since execution jumps around and variables often can't be displayed (there are fewer problems with IAR, for example, where optimised code is still quite easy to debug at the source level).
For the projects that I develop myself I never use a debug target, since I always develop in release mode so that I am always working on the final code. When I do target-level debugging it is usually in disassembler mode because there are then no debugger surprises. Prior to this I have simulated (almost) everything, so the debugger is in fact usually only necessary for some very low-level code where one is working mainly with peripheral registers anyway.

Regards

Mark

BlackNight
NXP Employee

Hi Joe,

My view is (similar to Mark's) that for the embedded world there is no 'release' and 'debug': for an embedded target it is always 'debug'. It makes sense to speak about 'debug' and 'release' for host/desktop applications, where the binary is loaded into memory with or without the debug information (see https://mcuoneclipse.com/2012/06/01/debug-vs-release/). On a host it makes a huge difference whether the debug information (many MBytes!) is loaded into RAM or not. For an embedded (cross-development) target the debug information does not get loaded into the target, so I always work in 'debug' mode.

Besides the points Mark made: one common reason why code fails in 'release' (well, optimized) mode is the optimization itself. It changes the timing, and with changed timing you might uncover previously undetected bugs in your code, simply because the compiler/libraries no longer emit the same sequences. Variables on the stack might 'magically' start with values which happen to work, while in optimized mode they are kept in registers and get random/different values. Or somewhere your code depends on the timing between different operations; that timing changes and it does not work any more.
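
A minimal sketch of that stack-variable failure mode (the function is made up for illustration, not taken from Joe's code): an uninitialized local that happens to start at zero in an unoptimized build, but holds whatever was last in a register once the optimizer takes over.

#include <stdint.h>

uint32_t checksum(const uint8_t *data, uint32_t len)
{
    uint32_t sum;                 /* BUG: never initialized */

    /* At -O0 'sum' lives on the stack and may happen to contain 0 left
       over from an earlier call, so the function appears to work.  With
       optimization it is kept in a register with arbitrary contents and
       the result changes from run to run.  The fix: uint32_t sum = 0; */
    for (uint32_t i = 0; i < len; i++) {
        sum += data[i];
    }
    return sum;
}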

I hope this helps,

Erich
