Why 0.1 + 0.2 == 0.3 in D?

Tags: Floating Point, Floating Accuracy, D, Constant Folding

Floating Point Problem Overview


assert(0.1 + 0.2 != 0.3); // should be true

is my favorite check that a language uses native floating point arithmetic.

C++

#include <cstdio>
 
int main()
{
   printf("%d\n", (0.1 + 0.2 != 0.3));
   return 0;
}

Output:

1

http://ideone.com/ErBMd

Python

print(0.1 + 0.2 != 0.3)

Output:

True

http://ideone.com/TuKsd

Other examples

Why is this not true for D? As I understand it, D uses native floating-point numbers. Is this a bug? Do they use some specific number representation? Something else? Pretty confusing.

D

import std.stdio;
 
void main()
{
   writeln(0.1 + 0.2 != 0.3);
}

Output:

false

http://ideone.com/mX6zF


UPDATE

Thanks to LukeH: this is an effect of the floating-point constant folding described there.

Code:

import std.stdio;
 
void main()
{
   writeln(0.1 + 0.2 != 0.3); // constant folding is done in real precision
 
   auto a = 0.1;
   auto b = 0.2;
   writeln(a + b != 0.3);     // standard calculation in double precision
}

Output:

false
true

http://ideone.com/z6ZLk
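
To see the effect of the extra precision directly, here is a minimal sketch (assuming DMD on x86/x86-64, where real is the 80-bit x87 type) that redoes the sum at run time in both double and real precision:

import std.stdio;

void main()
{
   // Run-time arithmetic in 64-bit double precision:
   double d1 = 0.1, d2 = 0.2, d3 = 0.3;
   writeln(d1 + d2 != d3); // true: the double sum lands one ulp above double(0.3)

   // Run-time arithmetic in real precision (note the L suffix on the literals):
   real r1 = 0.1L, r2 = 0.2L, r3 = 0.3L;
   writeln(r1 + r2 != r3); // false with 80-bit reals: both sides round to the same value
                           // (expect true on platforms where real is only 64 bits wide)
}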

Floating Point Solutions


Solution 1 - Floating Point

(Flynn's answer is the correct answer. This one addresses the problem more generally.)


You seem to be assuming, OP, that the floating-point inaccuracy in your code is deterministic and predictably wrong (in a way, your approach is the polar opposite of that of people who don't understand floating point yet).

Although (as Ben points out) floating-point inaccuracy is deterministic, from your code's point of view it will not appear that way unless you are very deliberate about what happens to your values at every step. Any number of factors could lead to 0.1 + 0.2 == 0.3 succeeding: compile-time optimisation is one, tweaked values for those literals is another.

Rely here neither on success nor on failure; do not rely on floating-point equality either way.
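
If you do need to compare floating-point results, compare them within a tolerance instead of with == or !=. A minimal D sketch, assuming a Phobos recent enough to provide std.math.isClose (older releases offer approxEqual instead):

import std.stdio;
import std.math : isClose, abs;

void main()
{
   auto a = 0.1;
   auto b = 0.2;

   // Equality within a tolerance, not bit-for-bit equality:
   writeln(isClose(a + b, 0.3));        // true: equal within the default tolerances
   writeln(abs((a + b) - 0.3) < 1e-9);  // true: manual absolute-tolerance check
}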

Solution 2 - Floating Point

It's probably being optimized to (0.3 != 0.3), which is obviously false. Check your optimization settings, make sure they're switched off, and try again.
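
One quick way to confirm this is to print both sides with enough digits to tell them apart: if the literal expression has been folded, it prints the same digits as 0.3, while the run-time sum does not. A small sketch (the commented output assumes DMD's folding at real precision):

import std.stdio;

void main()
{
   // Literal expression: a candidate for compile-time folding.
   writefln("%.17g", 0.1 + 0.2); // likely prints 0.29999999999999999

   // Run-time computation in double precision:
   double a = 0.1, b = 0.2;
   writefln("%.17g", a + b);     // prints 0.30000000000000004
   writefln("%.17g", 0.3);       // prints 0.29999999999999999
}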

Solution 3 - Floating Point

According to my interpretation of the D language specification, floating-point arithmetic on x86 would use 80 bits of precision internally, instead of only 64 bits.

One would have to check, however, whether that is enough to explain the result you observe.
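
One way to check is to inspect how many mantissa bits each floating-point type actually has on the platform in question; a quick sketch:

import std.stdio;

void main()
{
   // Mantissa width of each floating-point type on this platform.
   writeln("float.mant_dig  = ", float.mant_dig);  // 24
   writeln("double.mant_dig = ", double.mant_dig); // 53
   writeln("real.mant_dig   = ", real.mant_dig);   // 64 where real is the 80-bit x87 type,
                                                   // 53 where real is just double
}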

Attributions

All content for this solution is sourced from the original question on Stackoverflow.

The content on this page is licensed under the Attribution-ShareAlike 4.0 International (CC BY-SA 4.0) license.

Question: Stas (View Question on Stackoverflow)
Solution 1 - Floating Point: Lightness Races in Orbit (View Answer on Stackoverflow)
Solution 2 - Floating Point: Flynn1179 (View Answer on Stackoverflow)
Solution 3 - Floating Point: Jean Hominal (View Answer on Stackoverflow)