Floating-point arithmetic

0.1 + 0.2 != 0.3 ?

This is a classic problem that a JavaScript programmer might face.

Is it really true that 0.1 + 0.2 is not 0.3?

Yes, it really is.

So what is the problem?

This has to do with machine precision. JavaScript stores every number as a 64-bit binary floating-point value, so when it evaluates the expression above, it first converts 0.1, 0.2, and 0.3 to their binary equivalents.

This is where the problem starts. 0.1 cannot be represented exactly in binary, so what actually gets stored is the nearest representable value, which is close to (but not identical to) 0.1. The values lose a tiny amount of precision the moment you write them down. You might have wanted two simple decimals, but what you get is binary floating-point arithmetic. It's a bit like wanting your text translated into Sanskrit but getting Hindi: similar, but not the same.
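You can see the drift for yourself in a browser console or in Node.js; the values in the comments below are what a standard double-precision engine prints:

console.log(0.1 + 0.2);          // 0.30000000000000004
console.log(0.1 + 0.2 === 0.3);  // false
console.log((0.1).toFixed(20));  // 0.10000000000000000555 — what 0.1 really stores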

So what should we do?

Take the same example:

var a = 0.1, b = 0.2, c = 0.3;

rather than…

var result = (a + b == c);

we would do this:

var result = ((a + b) > (c - 0.001)) && ((a + b) < (c + 0.001));

This says: because 0.1 + 0.2 is not exactly 0.3, check instead that it is roughly 0.3, that is, within a range of 0.001 on either side of it.
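If you need this comparison in more than one place, you can wrap it in a small helper. The name nearlyEqual and the tolerance passed in below are just illustrative choices for this sketch, not part of any standard API:

// Returns true when a and b differ by less than the given tolerance.
// The tolerance must be chosen to suit your application.
function nearlyEqual(a, b, tolerance) {
  return Math.abs(a - b) < tolerance;
}

var result = nearlyEqual(0.1 + 0.2, 0.3, 0.001); // true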

But the obvious drawback is that the tolerance is arbitrary: make it too large and genuinely different values will compare as equal, make it too small and you are back to the original problem, so for very precise calculations this can still give inaccurate results. Happy coding :)