0.1 + 0.2 != 0.3 ?
Is it really true that 0.1 + 0.2 is not 0.3?
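Yes, it really is; you can verify it for yourself in any JavaScript console:

```javascript
var sum = 0.1 + 0.2;

console.log(sum);         // 0.30000000000000004
console.log(sum === 0.3); // false
```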
So what is the problem?
This is where the problem starts. The literal 0.1 is not stored as exactly 0.1 but as its closest binary equivalent, a nearby (but not identical) value. So, as soon as you write the values, they lose their precision. You might have wanted two simple decimals, but what you get is binary floating-point arithmetic. It's sort of like wanting your text translated into Sanskrit but getting Hindi. Similar, but not the same.
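You can see that stored binary approximation by asking for more decimal places than the literal has; the exact digits shown assume IEEE 754 double precision, which every JavaScript engine uses:

```javascript
// 0.1 is stored as the nearest representable double,
// which is slightly larger than one tenth.
console.log((0.1).toFixed(20)); // 0.10000000000000000555
```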
So what should we do?
Take the same example:
var a = 0.1, b = 0.2, c = 0.3;
var result = ( a + b == c);
Instead of comparing the values directly, we would do this:
var result = ( ( a + b ) > ( c - 0.001 ) ) && ( ( a + b ) < ( c + 0.001 ) );
This says: because 0.1 + 0.2 is not exactly 0.3, check instead whether it is approximately 0.3, within a tolerance of 0.001 on either side.
But the obvious drawback is that the tolerance is arbitrary: for calculations that need to distinguish values closer together than 0.001, this check will return inaccurate results. Happy coding :)