
  • The pattern of using higher precision for temporary calculations and rounding values at well-defined points for longer-term storage allows code to be faster and more accurate than code that uses the same precision throughout, but for it to be usable there must be a way of storing temporary values at full precision (see the sketch after this thread). The extended-precision double got a bad reputation because C botched the semantics of long-double argument passing, leading compilers to cheat and make `long double` and `double` synonymous, so e.g. `long double dsquared = (x-y)*(x-y)` wouldn't yield the same result as... Commented Sep 30, 2020 at 14:58
  • ...`long double delta=x-y, dsquared=delta*delta;`. If ANSI C hadn't botched floating-point argument passing, and had specified that within calculations `float` promotes to `long float` (whose size may be anything from `float` to `long double`) and `double` to `long double`, that could have allowed greatly improved math performance on 16-bit and 32-bit platforms without hardware floating-point support, for which unpacked 48-bit and 80 (or 96)-bit types would be more efficient than packed 32-bit or 64-bit ones. Commented Sep 30, 2020 at 15:01
  • @supercat I am not disputing the semantics of a more accurate calculation. I am addressing the point in the question about SSE being well received: when SSE was brought in, performance-critical code couldn't run on the x86, and sameness with RISC was far more useful than any minor benefit here. Commented Sep 30, 2020 at 17:36
  • 1
    The way many compilers treated 80-bit types would throw out the window the "sameness" of any code using them. Commented Sep 30, 2020 at 18:09
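
To make the precision pattern from the first comment concrete, here is a minimal sketch in standard C. It substitutes `double` for the 80-bit `long double` under discussion (whose handling is exactly what compilers disagreed on), and the names `dot_hi` and `dot_lo` are illustrative, not from the original comments: temporaries accumulate at a higher precision, and the result is rounded back to the storage type at one well-defined point.

```c
#include <stdio.h>

/* Illustrative helper: dot product of float arrays, accumulated in a
   double temporary so rounding error does not build up term by term,
   then rounded to float once, at a single well-defined point. */
static float dot_hi(const float *a, const float *b, int n) {
    double acc = 0.0;                  /* higher-precision temporary */
    for (int i = 0; i < n; i++)
        acc += (double)a[i] * b[i];
    return (float)acc;                 /* one explicit rounding step */
}

/* The same computation done entirely in float, rounding at every step. */
static float dot_lo(const float *a, const float *b, int n) {
    float acc = 0.0f;
    for (int i = 0; i < n; i++)
        acc += a[i] * b[i];
    return acc;
}

int main(void) {
    enum { N = 10000 };
    static float a[N], b[N];
    for (int i = 0; i < N; i++) { a[i] = 0.1f; b[i] = 0.1f; }

    /* The double-accumulator result is typically much closer to the
       exact sum of the stored products (about 100.000003 here) than
       the all-float version. */
    printf("double accumulator: %.9g\n", dot_hi(a, b, N));
    printf("float accumulator:  %.9g\n", dot_lo(a, b, N));
    return 0;
}
```

The same idea is what the x87 design enabled for `double` itself: keep intermediates in 80-bit registers and round to 64 bits only on store. The comments above argue the pattern stays predictable only if the language gives the programmer a type for those full-precision temporaries, which is what the botched `long double` semantics took away.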