LPCXpresso fixed point arithmetic support

svensavic
Contributor III

Is there an implementation of fixed-point arithmetic in LPCXpresso? GCC is supposed to support it. When I include stdfix.h I only get the mappings from fract to _Fract and from accum to _Accum, but the compiler still says those are not valid types:

error: '_Accum' does not name a type

I am using the LPC812, which has no floating-point hardware, and soft float is bloatware: it takes 4 kB of flash and 800 bytes just to do simple floating-point arithmetic.

Am I missing something in the project settings?
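
Roughly what I am trying to compile, reduced to a minimal sketch (the function name is just for illustration):

#include <stdfix.h>   /* maps fract -> _Fract and accum -> _Accum */

/* Builds with the C compiler (GNU C11 / ISO C11 dialect), but under C++
 * the compiler stops with "error: '_Accum' does not name a type". */
accum half_of(accum x)
{
    return x * 0.5k;   /* 'k' suffix marks an accum constant */
}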

lpcxpresso_supp
NXP Employee

Or, trying a different tack: if the real issue is the proportion of your flash that is consumed by floating-point support code, then maybe you should consider a part with more flash (for instance, the LPC82x family offers double the flash of the LPC81x), or even look at an MCU with a Cortex-M4 with floating-point hardware - like the LPC407x/8x, LPC43xx, LPC541xx (or one of the Kinetis M4-based parts).

Regards,

LPCXpresso Support

svensavic
Contributor III

Can't do that, because my board is designed around the LPC812 in TSSOP16. There is no drop-in replacement for that chip with more flash.

converse
Senior Contributor V

The fixed-point types come from an ISO extension to C, TR 18037 ("Embedded C"); it is not part of any published C or C++ standard. GCC provides built-in types and a library implementation of it (but the library is not provided in MCUXpresso, it appears). The most recent push for having this included in C++ is here.

Having said that, it seems you have made a poor choice of MCU for your project - trying to use a Cortex-M0 with 16 kB of flash and 4 kB of RAM for a C++ project that needs floating-point arithmetic is 'ambitious' to say the least... An M4-based MCU would have been a much better choice.

Best of luck in finding something that works for you.

lpcxpresso_supp
NXP Employee

The use of a floating-point support library on Cortex-M based MCUs (other than the Cortex-M4) is long-standing practice across all toolchains, not just LPCXpresso IDE. ARM chose to reduce silicon cost in the CPU - at the cost of memory requirements in applications that do require some float support.

With a simple test program carrying out a divide operation and an add operation, using (the open source) NewlibNano I see code size go from 928 bytes to 2420 bytes switching from int to float variables (so an increase of ~1.5KB).
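
(Not the exact test program, but a sketch along these lines is enough to pull in the soft-float routines:)

volatile float a = 1.1f;   /* volatile so the operations are not folded away at compile time */
volatile float b = 2.2f;
volatile float c, d;

int main(void)
{
    c = a / b;             /* pulls in __aeabi_fdiv and its helpers */
    d = c + a;             /* pulls in __aeabi_fadd */
    while (1) { }
}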

But using Redlib the code sizes are 908 and 1658 bytes (so an increase closer to 0.6 KB). Thus one way to save space would be to use Redlib - though you won't be able to do this if your application actually uses C++.

I imagine that part of the difference could be down to the granularity with which the floating-point library code has been written in each case. This may mean that as you use more floating-point operations, the size difference between Redlib and NewlibNano may decrease - though I haven't actually checked this. [Redlib uses our own floating-point library, not the one from Newlib/NewlibNano.]

Note my tests were actually done using MCUXpresso IDE, but results should be close to identical with LPCXpresso IDE v8.2. 

Regards,

LPCXpresso Support

svensavic
Contributor III

As you can see for yourself, it is quite inefficient to use floating-point arithmetic on a Cortex-M0. That is why I am trying to switch to approximations and fixed-point math. I can't even fit atan, acosf, sqrtf and pow into 16 kB of flash! And that is without even touching my own algorithms. I could probably survive with

#define F2FP(a) ((int32_t)((a) * 65536.0f))
#define FP2F(a) ((a) / 65536.0f)

and then use 16.16 arithmetic, but converting at run time would still link in the soft-float routines. I wanted to use the built-in _Accum and _Fract, which obviously haven't been ported to the GCC version used by LPCXpresso, which is really unfortunate.

FixedPointArithmetic - GCC Wiki 
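
Something along these lines, built on those macros, is what I have in mind (only a sketch, the helper names are made up) - add and subtract stay plain integer operations, and only multiply and divide need a 64-bit intermediate:

#include <stdint.h>

typedef int32_t q16_t;    /* a 16.16 fixed-point value */

/* add/sub are just x + y and x - y on the raw int32_t values - no library calls */

static inline q16_t q16_mul(q16_t x, q16_t y)
{
    return (q16_t)(((int64_t)x * y) >> 16);   /* widen to 64 bits to keep the fraction */
}

static inline q16_t q16_div(q16_t x, q16_t y)
{
    return (q16_t)(((int64_t)x << 16) / y);   /* caller must ensure y != 0 */
}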

converse
Senior Contributor V

Fixed-point arithmetic types are supported in the GCC shipped with MCUXpresso - provided you choose the right language dialect. You will need to use GNU C11 or ISO C11 (this is on the first 'page' of the GCC compiler settings).

However, that won't help you, as the fixed-point math library is missing... When linking, you get errors like

undefined reference to `__gnu_addsa3'

Alternatively, you may be able to use something like libfixmath: libfixmath - Wikipedia (or find and build the GCC fixed-point library yourself).
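
For example, usage would look roughly like this (just a sketch - check it against the fix16.h you actually build):

#include "fix16.h"                  /* from libfixmath */

void example(void)
{
    fix16_t a = F16(1.1);           /* F16() converts a constant at compile time, avoiding float code */
    fix16_t b = F16(2.2);
    fix16_t c = fix16_div(a, b);
    fix16_t d = c + a;              /* fix16_t is a plain int32_t, so addition is an ordinary add */
    (void)d;
}

Using F16() on constants rather than fix16_from_float() at run time keeps the float conversion code out of the image.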

svensavic
Contributor III

I can't even compile the code, as it still doesn't accept _Accum as a valid type. I tried MCUXpresso and all combinations of GNU/ISO C++ without any luck.

I just noticed you said ISO C11 - do you mean that only C supports fixed point while C++ doesn't?

I tried libfixmath. It is really bad :smileyhappy: When I included atan from fix16_trig, it spread all over my non-existent 40 kB of RAM! I guess the LUT is too heavy. And libfixmath uses float-to-16.16 conversion, which pulls in the float add/sub routines.
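
If the lookup tables are the problem, I might get away with LUT-free integer routines instead. As a sketch of the idea (untested, names made up): a bit-by-bit integer square root reused for 16.16 values, with no tables and no float code:

#include <stdint.h>

/* plain bit-by-bit integer square root of a 64-bit value */
static uint32_t isqrt64(uint64_t n)
{
    uint64_t root = 0;
    uint64_t bit  = (uint64_t)1 << 62;   /* highest power of four that fits */

    while (bit > n)
        bit >>= 2;

    while (bit != 0) {
        if (n >= root + bit) {
            n   -= root + bit;
            root = (root >> 1) + bit;
        } else {
            root >>= 1;
        }
        bit >>= 2;
    }
    return (uint32_t)root;
}

/* square root of a non-negative 16.16 value, result in 16.16 */
static int32_t q16_sqrt(int32_t x)
{
    return (int32_t)isqrt64((uint64_t)(uint32_t)x << 16);
}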

lpcxpresso_supp
NXP Employee

You will probably need to switch from the default Redlib C library to Newlib (or NewlibNano) to use fixed point support.

But that aside, I wouldn't expect you to see a 4 KB code size increase from using float operations. Check the map file generated by the linker inside your project to see exactly what is being pulled in, and from where.

Note that if you switch from the integer-only printf to the floating-point-compatible printf, you might see that kind of code size increase though.

Regards,

LPCXpresso Support

svensavic
Contributor III

I am using NewlibNano (No Host), in both the linker and C/C++ settings. Do I need to set the language standard to GNU11 or something?

Just defining float variables and doing "a/b", I get an increase of around 2.2 kB of flash.

Looking at the object sizes using "arm-none-eabi-nm --size-sort":

000002ec T __aeabi_fadd
0000030c T __aeabi_fsub

I can see float add and sub are using 1528 bytes. I am not even using printf. It's just:

float a, b, c, d;
a = 1.1f;
b = 2.2f;
c = a / b;
d = c + a;

That is of course a trivial example, but it just can't justify a 2.2 kB flash increase.

0 Kudos