Type of long double on ColdFire

Dietrich
Contributor I
Dec 8, 2005, 8:19 AM
Post #1 of 14
[ColdFire] [RFC] Type of long double on ColdFire
--------------------------------------------------------------------------------
 
We (CodeSourcery) are currently working on developing ColdFire-targeted GNU
toolchains (gcc, etc.).
Currently gcc nominally uses a 12-byte "extended" precision type for the C
"long double" floating point type. This is inherited from the m68k gcc port,
but doesn't really make a whole lot of sense for ColdFire. It's also broken.
The ColdFire FPU only has 64-bit registers, and the current gcc soft-float
routines are just wrappers round the 64-bit "double" routines.
So, we're proposing changing long double to be something more sensible. There
are two options:
1) Make long double == double. This is what Arm does, amongst others. This
pretty much just works and should reduce the amount of support code required.
Anyone wanting more than IEEE double precision has to use a third-party
bignum/MP/quad library, of which there are several, but no standard ABI.
2) Choose a sensible format for long double. The obvious candidate is a
128-bit PPC/MIPS-style almost-quad precision type implemented with a pair of
64-bit doubles. This provides a higher precision type for those that want it
at the expense of additional complexity and support code for those that
don't.
This email is an RFC to try and gauge which of the two options is most useful
to the ColdFire community, i.e. whether there are significant users that would
benefit from (2).
Also if there's anyone who really wants to keep the existing long double we'd
like to hear from you, and why you think it should be kept.
Paul
--------------------------------------------------------------------
Dec 8, 2005, 9:15 AM
Post #2 of 14
Re: [ColdFire] [RFC] Type of long double on ColdFire [In reply to]
--------------------------------------------------------------------------------
 
I can't speak for the rest of the community, but we don't (currently)
require the extra precision of long double. Making long double = double
would be OK with us.
Paul Brook wrote:
>This email is a RFC to try and gauge which of the two options is most useful
>to the ColdFire community. ie. are there significant users that would benefit
>from (2).
>
>Also if there's anyone who really wants to keep the existing long double we'd
>like to hear from you, and why you think it should be kept.
>
>Paul
>--------------------------------------------------------------------
--
Brett Swimley
Sr. Design Engineer
Advanced Electronic Designs
406-585-8892
brett DOT swimley AT aedinc DOT net

--------------------------------------------------------------------
Dec 9, 2005, 12:00 AM
Post #3 of 14
Re: [ColdFire] [RFC] Type of long double on ColdFire [In reply to]
--------------------------------------------------------------------------------
 
> We (CodeSourcery) are currently working on developing ColdFire-targeted GNU
> toolchains (gcc, etc.).
>
> Currently gcc nominally uses a 12-byte "extended" precision type for the C
> "long double" floating point type. This is inherited from the m68k gcc port,
> but doesn't really make a whole lot of sense for ColdFire. It's also broken.
> The ColdFire FPU only has 64-bit registers, and the current gcc soft-float
> routines are just wrappers round the 64-bit "double" routines.
>
> So, we're proposing changing long double to be something more sensible. There
> are two options:
>
> 1) Make long double == double. This is what Arm does, amongst others. This
> pretty much just works and should reduce the amount of support code required.
> Anyone wanting more than IEEE double precision has to use a third-party
> bignum/MP/quad library, of which there are several, but no standard ABI.
>
That would certainly seem the most sensible solution. It is, I think, rare
for embedded systems to need more than double precision (it's only a
minority that need any kind of floating point at all), and any system that
does is unlikely to be using a ColdFire. The only reason I can think of for
having longer doubles in the m68k gcc port is for older Macs, and I'd be
doubtful if removing support would be noticed by anyone.
mvh.,
David

> 2) Choose a sensible format for long double. The obvious candidate is a
> 128-bit PPC/MIPS-style almost-quad precision type implemented with a pair of
> 64-bit doubles. This provides a higher precision type for those that want it
> at the expense of additional complexity and support code for those that
> don't.
>
> This email is an RFC to try and gauge which of the two options is most useful
> to the ColdFire community, i.e. whether there are significant users that
> would benefit from (2).
>
> Also if there's anyone who really wants to keep the existing long double we'd
> like to hear from you, and why you think it should be kept.
>
> Paul
> --------------------------------------------------------------------
 
--------------------------------------------------------------------
Dec 9, 2005, 2:16 AM
Post #4 of 14
Re: [ColdFire] [RFC] Type of long double on ColdFire [In reply to]
--------------------------------------------------------------------------------
 
> The only reason I can think of for having longer doubles in the m68k gcc port
> is for older Macs, and I'd be doubtful if removing support would be noticed by
> anyone.
IMHO such assumptions are most dangerous. A few might need to keep operating "old Macs", or whatever equipment, for whatever reason. Subtract the multimedia, and old systems stay operational indefinitely.
Imagine this: some programs do not do very much, yet they require the newest version of the OS / OS ROMs, ironically to provide "help systems" with graphical images etc. Not really needed if there's just one little file to create or edit, even typing in a 5-letter file name manually.
Guess I am one who notices missing support (though I do not operate old Macs). I recently got a 19-year-old word processor working (the manual says it is not a word processor); it originates from CP/M and does not have any fonts and such things.
-alex-
know a superstition? (i urgently need a table of lucky numbers, and why)

---------------------------------
Dec 9, 2005, 4:40 AM
Post #5 of 14
Re: [ColdFire] [RFC] Type of long double on ColdFire [In reply to]
--------------------------------------------------------------------------------
 
 
> > The only reason I can think of for having longer doubles in the m68k gcc
> > port is for older Macs, and I'd be doubtful if removing support would be
> > noticed by anyone.
>
> IMHO such assumptions are most dangerous. a few might require to operate
> "old mac's", whatever equipment, for whatever reason. subtract the
> multimedia, and old systems are operational any time.
>
It is certainly likely that there will be old systems out there that need
12-byte double support. But do they need the latest versions of the
ColdFire compiler? An old system will use an old version of the compiler
and tools - you don't change toolchains in an existing system without good
reason and lots of checking, and one of the big advantages of gcc is that
you can always get hold of old versions when you need them.
The issue is whether there is likely to be the need for 12-byte (or 16-byte)
doubles on ColdFires in the future, and whether there is existing code that
is in active use and likely to be compiled on new versions of the compilers.
mvh.,
David
 
> imagine this: some programs do not do very much, just they require the
> newest version of OS/ OS ROMS's, ironically to provide "help systems"
> supporting graphical images etc. not too required if just one little file to
> create/edit, even to type in a 5 letter file name manually.
>
> guess i am one who notices missing support (though i do not operate old
> mac's). now i made a 19 year old wordprocessor working (the manual says it
> is not a wordprocessor), it origins from CP/M and does not have any fonts
> and such things.
>
> -alex-
> know a superstition? (i urgently need a table of lucky numbers, and why)
>
>
 
--------------------------------------------------------------------
Dec 9, 2005, 4:55 AM
Post #6 of 14
RE: [ColdFire] [RFC] Type of long double on ColdFire [In reply to]
--------------------------------------------------------------------------------
 
Hi,
We use doubles extensively, in a 68040, but nothing longer than 64 bit
doubles. I don't see any need for anything longer, either.
Good luck,
Charlie Kupelian
Bldg F-10
Wallops Flight Facility
Wallops Island VA 23337

-----Original Message-----
On Behalf Of David Brown
Sent: Friday, December 09, 2005 3:01 AM
To: Kupelian Charlie
Subject: Re: [ColdFire] [RFC] Type of long double on ColdFire
> We (CodeSourcery) are currently working on developing ColdFire-targeted
> GNU toolchains (gcc, etc.).
>
> Currently gcc nominally uses a 12-byte "extended" precision type for
> the C "long double" floating point type. This is inherited from the
> m68k gcc port, but doesn't really make a whole lot of sense for
> ColdFire. It's also broken. The ColdFire FPU only has 64-bit registers,
> and the current gcc soft-float routines are just wrappers round the
> 64-bit "double" routines.
>
> So, we're proposing changing long double to be something more sensible.
> There are two options:
>
> 1) Make long double == double. This is what Arm does, amongst others.
> This pretty much just works and should reduce the amount of support
> code required. Anyone wanting more than IEEE double precision has to
> use a third-party bignum/MP/quad library, of which there are several,
> but no standard ABI.
>
That would certainly seem the most sensible solution. It is, I think,
rare for embedded systems to need more than double precision (it's only
a minority that need any kind of floating point at all), and any system
that does is unlikely to be using a ColdFire. The only reason I can
think of for having longer doubles in the m68k gcc port is for older
Macs, and I'd be doubtful if removing support would be noticed by
anyone.
mvh.,
David

> 2) Choose a sensible format for long double. The obvious candidate is
> a 128-bit PPC/MIPS-style almost-quad precision type implemented with a
> pair of 64-bit doubles. This provides a higher precision type for
> those that want it at the expense of additional complexity and support
> code for those that don't.
>
> This email is an RFC to try and gauge which of the two options is most
> useful to the ColdFire community, i.e. whether there are significant
> users that would benefit from (2).
>
> Also if there's anyone who really wants to keep the existing long
> double we'd like to hear from you, and why you think it should be kept.
>
> Paul
> --------------------------------------------------------------------

 
 
 

Dietrich
Contributor I
This message contains an entire topic ported from the WildRice - Coldfire forum.  Freescale has received the approval from the WildRice administrator on seeding the Freescale forum with messages.  The original message and all replies are in this single message. We have seeded this new forum with selected information that we expect will be of value as you search for answers to your questions.  Freescale assumes no responsibility whatsoever with respect to Posted Material.  For additional information, please see the Terms of Use - Message Boards and Community Forums.  Thank You and Enjoy the Forum!
 

Dec 9, 2005, 5:36 AM
Post #7 of 14
RE: [ColdFire] [RFC] Type of long double on ColdFire [In reply to]
--------------------------------------------------------------------------------
 
  "It is certainly likely that there will be old systems out there that
need 12-byte double support. But do they need the latest versions of the
ColdFire compiler? An old system will use an old version of the
compiler and tools - you don't change toolchains in an existing system without
good reason and lots of checking, and one of the big advantages of gcc is
that you can always get hold of old versions when you need them.
The issue is whether there is likely to be the need for 12-byte (or
16-byte) doubles on ColdFires in the future, and whether there is existing code
that is in active use and likely to be compiled on new versions of the
compilers."
----------------
i agree, and do not need it personally. if people read "no one requires this anymore", they potentially derive a legitimation to cut out support for other older systems.
 
 
Dec 9, 2005, 10:07 AM
Post #8 of 14
[ColdFire] Re: [RFC] Type of long double on ColdFire [In reply to]
--------------------------------------------------------------------------------
 
On Fri, Dec 09, 2005 at 09:00:32AM +0100, David Brown wrote:
> The only reason I can think of for having longer doubles in the m68k
> gcc port is for older Macs, and I'd be doubtful if removing support
> would be noticed by anyone.
I'm sure the m68k hackers running NetBSD would...
--
Aaron J. Grier | Frye Electronics, Tigard, OR |
--------------------------------------------------------------------
Dec 9, 2005, 10:55 AM
Post #9 of 14
Re: [ColdFire] Re: [RFC] Type of long double on ColdFire [In reply to]
--------------------------------------------------------------------------------
 
 
>> The only reason I can think of for having longer doubles in the m68k
>> gcc port is for older Macs, and I'd be doubtful if removing support
>> would be noticed by anyone.
>
>I'm sure the m68k hackers running NetBSD would...
Remember, we're talking about ColdFire here, not the m68k...
I don't see any problem with having "long double" be treated as a
"double" when using a '-m5xxx' switch to generate code for a ColdFire,
and leaving it alone when using a '-m6xxx' switch to generate code for a
68k.
--
Peter Barada
--------------------------------------------------------------------
Dec 9, 2005, 11:32 AM
Post #10 of 14
Re: [ColdFire] Re: [RFC] Type of long double on ColdFire [In reply to]
--------------------------------------------------------------------------------
 
On Friday 09 December 2005 18:07, Aaron J. Grier wrote:
> On Fri, Dec 09, 2005 at 09:00:32AM +0100, David Brown wrote:
> > The only reason I can think of for having longer doubles in the m68k
> > gcc port is for older Macs, and I'd be doubtful if removing support
> > would be noticed by anyone.
>
> I'm sure the m68k hackers running NetBSD would...
I'm not suggesting changing the m68k definition of long double, only ColdFire.
AFAIK NetBSD doesn't support ColdFire, and even if it did, it would be
separate from the existing m68k port. Am I missing something?
Paul
--------------------------------------------------------------------
Dec 9, 2005, 4:52 PM
Post #11 of 14
[ColdFire] Re: [RFC] Type of long double on ColdFire [In reply to]
--------------------------------------------------------------------------------
 
On Fri, Dec 09, 2005 at 07:32:38PM +0000, Paul Brook wrote:
> On Friday 09 December 2005 18:07, Aaron J. Grier wrote:
> > On Fri, Dec 09, 2005 at 09:00:32AM +0100, David Brown wrote:
> > > The only reason I can think of for having longer doubles in the
> > > m68k gcc port is for older Macs, and I'd be doubtful if removing
> > > support would be noticed by anyone.
> >
> > I'm sure the m68k hackers running NetBSD would...
>
> I'm not suggesting changing the m68k definition of long double, only
> ColdFire.
ahh. OK after doing some reading things are a little clearer.
could it be switchable? i386 has -m96bit-long-double and
-m128bit-long-double.
> AFAIK NetBSD doesn't support ColdFire, and even if it did, it would be
> separate from the existing m68k port. Am I missing something?
NetBSD has separate kernel ports for the various 68k machines, but they
are binary compatible at the application level. if I had a v4 eval
board with FPU at home, I'd certainly be attempting a port, if for no
other reason than to do bulkbuilds of 68k binaries at 200+MHz rather
than 50.
in general I'm a bit grumpy about the current orthogonal-ness of
coldfire support being added to gcc without updating support for
existing 68k processors. I'd like to see a common 68k target that would
be least-common-denominator compatible across all 68k and coldfire
variants, not for any particular application, but as a proof-of-concept
that the changes being made to gcc are portable across the various 68k
implementations, and that necessary flexibility to handle the variants
is being built-in rather than bolted-on.
I realize I'm in the minority in this. I've heard a lot of whining from
Bernie and Peter on this point, but I still see it as an issue that
needs a better answer than "nobody uses the old stuff, just ignore it"
which leaves gcc support for older (still shipping!) 68k processors
stagnant, and I'd hate to see the same thing being repeated in the
future. (oh, nobody uses v2 cores anymore...)
for better or worse, Motorola didn't provide full backwards
compatibility in the 68k line, and while the coldfire series seems to
have improved in that respect, we now have MMU, eMAC, and FPU extensions
on newer coldfire cores... if these variants can be handled surely these
mechanisms can be integrated with the older CPUs too. who's to say that
freescale doesn't decide to add 96-bit extended precision floating point
into the FPU at a future date?
--
Aaron J. Grier | Frye Electronics, Tigard, OR |
--------------------------------------------------------------------
Dec 9, 2005, 6:46 PM
Post #12 of 14
Re: [ColdFire] Re: [RFC] Type of long double on ColdFire [In reply to]
--------------------------------------------------------------------------------
 
On Saturday 10 December 2005 00:52, Aaron J. Grier wrote:
> On Fri, Dec 09, 2005 at 07:32:38PM +0000, Paul Brook wrote:
> > On Friday 09 December 2005 18:07, Aaron J. Grier wrote:
> > > On Fri, Dec 09, 2005 at 09:00:32AM +0100, David Brown wrote:
> > > > The only reason I can think of for having longer doubles in the
> > > > m68k gcc port is for older Macs, and I'd be doubtful if removing
> > > > support would be noticed by anyone.
> > >
> > > I'm sure the m68k hackers running NetBSD would...
> >
> > I'm not suggesting changing the m68k definition of long double, only
> > ColdFire.
>
> ahh. OK after doing some reading things are a little clearer.
> could it be switchable? i386 has -m96bit-long-double and
> -m128bit-long-double.
I'm not keen on this idea. IMHO ABI breaking options tend to be of very little
practical use, especially on targets like Linux and *BSD where binaries are
expected to be portable between different configs/machines. The conversation
usually goes something like:
"I compiled with -mfoo and my program broke"
"Yes. You also need to recompile the rest of your system/libc with -mfoo"
"Meh. maybe I'll not bother".
> > AFAIK NetBSD doesn't support ColdFire, and even if it did, it would be
> > separate from the existing m68k port. Am I missing something?
>
> NetBSD has separate kernel ports for the various 68k machines, but they
> are binary compatible at the application level. if I had a v4 eval
> board with FPU at home, I'd certainly be attempting a port, if for no
> other reason than to do bulkbuilds of 68k binaries at 200+MHz rather
> than 50.
>
> in general I'm a bit grumpy about the current orthogonal-ness of
> coldfire support being added to gcc without updating support for
> existing 68k processors. I'd like to see a common 68k target that would
> be least-common-denominator compatible across all 68k and coldfire
> variants, not for any particular application, but as a proof-of-concept
> that the changes being made to gcc are portable across the various 68k
> implementations, and that necessary flexibility to handle the variants
> is being built-in rather than bolted-on.
Well, to be honest m68k and ColdFire are fairly different architectures.
The basic instruction format is the same, but the supported addressing modes,
FPU, MAC, MMU and exception model are all different.
There have been suggestions (though AFAIK no actual patches) that 68k and
ColdFire should actually be two separate gcc ports, rather than trying to
support them both in the same port.
Running 68k code on ColdFire may be theoretically possible (I haven't checked
all the details), but it would require trapping and emulating a *lot* of
instructions. I wouldn't be surprised if your 200MHz ColdFire ends up going
slower than your 50MHz 68k. I don't know if it's even possible to run
ColdFire binaries on a 68k machine.
[Getting offtopic now]
> I realize I'm in the minority in this.  I've heard a lot of whining from
> Bernie and Peter on this point, but I still see it as an issue that
> needs a better answer than "nobody uses the old stuff, just ignore it"
> which leaves gcc support for older (still shipping!) 68k processors
> stagnant, and I'd hate to see the same thing being repeated in the
> future.  (oh, nobody uses v2 cores anymore...)
The only way to avoid that is to provide the resources (i.e. programmers or
money to hire programmers) to maintain support for the "older stuff". Whining
just irritates the people you want to help you :-)
Paul
--------------------------------------------------------------------
Dec 12, 2005, 11:22 AM
Post #13 of 14
[ColdFire] Re: [RFC] Type of long double on ColdFire [In reply to]
--------------------------------------------------------------------------------
 
On Sat, Dec 10, 2005 at 02:46:01AM +0000, Paul Brook wrote:
> On Saturday 10 December 2005 00:52, Aaron J. Grier wrote:
> > ahh. OK after doing some reading things are a little clearer.
> > could it be switchable? i386 has -m96bit-long-double and
> > -m128bit-long-double.
>
> I'm not keen on this idea. IMHO ABI breaking options tend to be of
> very little practical use, especially on targets like Linux and *BSD
> where binaries are expected to be portable between different
> configs/machines. The conversation usually goes something like:
> "I compiled with -mfoo and my program broke"
> "Yes. You also need to recompile the rest of your system/libc with -mfoo"
> "Meh. maybe I'll not bother".
yet changing the size of long double breaks potential ABI compatibility
between coldfire and m68k.
> Well, to be honest m68k and ColdFire are fairly different
> architectures. The basic instruction format is the same, but the
> supported addressing modes, FPU, MAC, MMU and exception model are all
> different.
>
> There have been suggestions (though AFAIK no actual patches) that 68k
> and ColdFire should actually be two separate gcc ports, rather than
> trying to support them both in the same port.
isn't the MIPS family at a greater level of complexity? in the ia32
architecture there are similar issues with SSE/3dnow/etc.
> Running 68k code on ColdFire may be theoretically possible (I haven't
> checked all the details), but it would require trapping and emulating
> a *lot* of instructions. I wouldn't be surprised if your 200MHz
> ColdFire ends up going slower than your 50MHz 68k. I don't know if
> it's even possible to run ColdFire binaries on a 68k machine.
trapping unsupported instructions is a separate issue from a common 68k
ABI, which issue already exists both in the m68k and coldfire worlds.
v4 with FPU (no matter the long double representation) will have to be
emulated on v3 without FPU.
> The only way to avoid that is to provide the resources (ie.
> programmers or money to hire programmers) to maintain support for the
> "older stuff". whining just irritates the people you want to help you
> :smileyhappy:
I thought the whole point of the original post was a request for
comments. I'm commenting.
--
Aaron J. Grier | Frye Electronics, Tigard, OR |
--------------------------------------------------------------------
Dec 12, 2005, 11:44 AM
Post #14 of 14
Re: [ColdFire] Re: [RFC] Type of long double on ColdFire [In reply to]
--------------------------------------------------------------------------------
 
On Monday 12 December 2005 19:22, Aaron J. Grier wrote:
> On Sat, Dec 10, 2005 at 02:46:01AM +0000, Paul Brook wrote:
> > On Saturday 10 December 2005 00:52, Aaron J. Grier wrote:
> > > ahh. OK after doing some reading things are a little clearer.
> > > could it be switchable? i386 has -m96bit-long-double and
> > > -m128bit-long-double.
> >
> > I'm not keen on this idea. IMHO ABI breaking options tend to be of
> > very little practical use, especially on targets like Linux and *BSD
> > where binaries are expected to be portable between different
> > configs/machines. The conversation usually goes something like:
> > "I compiled with -mfoo and my program broke"
> > "Yes. You also need to recompile the rest of your system/libc with -mfoo"
> > "Meh. maybe I'll not bother".
>
> yet changing the size of long double breaks potential ABI compatibility
> between coldfire and m68k.
>...
> trapping unsupported instructions is a separate issue from a common 68k
> ABI, which issue already exists both in the m68k and coldfire worlds.
> v4 with FPU (no matter the long double representation) will have to be
> emulated on v3 without FPU.
Ok, let me ask a different question.
Do you think m68k and Coldfire are the same architecture?
Do you expect to be able to link together (and run) a mixture of ColdFire and
m68k code in a single binary?
It was my impression that while there are many similarities, the two
instruction sets are sufficiently different (especially when you include the
FPU) that they are not interchangeable.
If they are effectively different architectures (i.e. mixing the two never
happens) then what's the point of having ABI compatibility?
As an example x86-64 and i386 are very similar instruction sets. However you
can't mix the two in the same application, so there's no point having a
common ABI.
Paul
