
Possible Duplicate:
Is JavaScript’s Math broken?

I wrote some simple C# code that runs Python code dynamically (already implemented):

string code = @"100 * 2 + 4 / 3";
ScriptEngine engine = Python.CreateEngine();
ScriptSource source = engine.CreateScriptSourceFromString(code, SourceCodeKind.Expression);
int res = source.Execute<int>();
Console.WriteLine(res);

And then I thought about JavaScript, and the core differences between C# and JS. For example:

In JS:

var t = 1.02 + 1.01; // 2.0300000000000002

And then I tried this via Jint:

var script = @"
  function add() {
    return 1.02 + 1.01;
  };
  return add();";
var result = new JintEngine().Run(script);
Console.WriteLine(result);

The result was:

[screenshot of the output]

Maybe I don't see the whole picture, but if a programmer on the other side of the world sends me his script file, both of us expect the result to be consistent! (Let's ignore the problematic base-2 representation for now; I'm talking about consistency.)
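For what it's worth, IEEE 754 double arithmetic itself is deterministic: any conformant engine computes the same bit pattern for 1.02 + 1.01, and what differs between runtimes is only the default string formatting. A minimal sketch in JavaScript (assuming Node or a browser console):

```javascript
// The sum is the same 64-bit double everywhere; only the default
// string formatting differs between runtimes.
var sum = 1.02 + 1.01;
console.log(sum);            // 2.0300000000000002 (shortest round-trip form)
console.log(sum === 2.03);   // false: 2.03 is a different double
console.log(sum.toFixed(2)); // "2.03" -- explicit formatting restores agreement
```

Pinning the output format (toFixed, toPrecision) is the usual way to get identical printed results across languages.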

Unless I'm mistaken, in what scenario could I safely run someone else's code on .NET? (I would have to be very, very suspicious of every line of code...)

Am I right?

Another example:

var script = @"
  function show() {
    return parseInt('123asd'); // in JS it's 123
  };
  return show();";

var result = new JintEngine().Run(script);
Console.WriteLine(result);

Result:

[screenshot of the output]

How can I trust a script to yield the same expected result? (Or am I getting this whole situation wrong?)
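On the parseInt example: the ECMAScript specification defines parseInt to consume an optional sign and then leading digits, stopping at the first non-digit, so '123asd' must parse to 123 in any conformant engine. If Jint returns something else, that is a bug or version quirk in Jint, not an ambiguity in the language. A sketch of the specified behaviour:

```javascript
// parseInt stops at the first character that is not a digit
// in the given radix (10 by default).
console.log(parseInt('123asd')); // 123
console.log(parseInt('asd123')); // NaN -- no leading digits at all
console.log(parseInt('ff', 16)); // 255 -- the radix changes the digit set
```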

  • Just to make it clear: are you saying that the problem you see is JS producing extra decimal digits, 2.0300000000000002?
    – COLD TOLD
    Commented Dec 2, 2012 at 6:35
  • @COLDTOLD this is one example. I could fill this whole page with inconsistent execution results... (and that's what I'm afraid of.)
    – Royi Namir
    Commented Dec 2, 2012 at 6:36
  • I think if the results differ only in trailing decimal digits (2.0300000000000002 vs 2.03), they are theoretically the same, and both parties just have to agree on a decimal precision. But if the actual numbers are off, like 3.03 vs 2.03, that's a problem. It might also be that the IronPython implementation rounds the number while JS gives you the full decimal expansion.
    – COLD TOLD
    Commented Dec 2, 2012 at 6:40
  • @paulsm4 my question is about the same code being inconsistent across different environments. I don't understand why you suggest the question be closed.
    – Royi Namir
    Commented Dec 2, 2012 at 7:05
  • @paulsm4 see my edit. Is this also related to broken math in JS?
    – Royi Namir
    Commented Dec 2, 2012 at 7:10

2 Answers


I don't see any "inconsistency". That's just how floating point numbers work!

1) The value, by definition, is seldom "exact"

2) The representation (e.g. when printing to a string) can be totally misleading if you try to print more digits than your value has precision :)

Sample C code

#include <stdio.h>

int 
main ()
{
  double d = 1.02 + 1.01;
  float f = 1.02 + 1.01;
  printf ("d=%lf, f=%f\n", d, f);
  printf ("d=%25.20lf, f=%25.20f\n", d, f);
  return 0;
}

Sample output:

d=2.030000, f=2.030000
d=   2.03000000000000020000, f=   2.02999997138977050000
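The same double/float gap can be reproduced in JavaScript, which only has doubles but can round a value to single precision with Math.fround (a sketch):

```javascript
// JavaScript numbers are always 64-bit doubles; Math.fround rounds
// to the nearest 32-bit float, mirroring the C float variable above.
var d = 1.02 + 1.01;
var f = Math.fround(d);
console.log(d);       // 2.0300000000000002
console.log(d === f); // false -- the float rounding lost precision
```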

Excellent article:

Finally, please read this discussion:

The problem is NOT Python, it's NOT C#, it's NOT Javascript.

I assure you :)

  • If JS shows A and C# shows B, for me that's inconsistency. This can crash the whole program. :)
    – Royi Namir
    Commented Dec 2, 2012 at 6:46
  • P.S. I've already read this article several times. I know exactly how it works. :-)
    – Royi Namir
    Commented Dec 2, 2012 at 6:48
  • Do you realize that this kind of inconsistency can crash your program? (That's what my whole question is about.)
    – Royi Namir
    Commented Dec 2, 2012 at 6:53

This is not an error; what you are witnessing is how computers store decimal numbers. The subject is rather complicated... Computers cannot store most decimal numbers exactly, so when you added 1.01 and 1.02, you got exactly what the computer computed the result to be.

It is not unusual to get this. What I suggest in this situation is rounding the number to the decimal place you want (e.g. the hundredths place, in your case). But that only helps when the magnitudes involved are large enough; when you need smaller, more precise decimals, you will have to either live with it or use a higher-precision representation.
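The rounding suggestion above can be sketched in JavaScript (roundTo is a hypothetical helper, not a built-in):

```javascript
// Round to a fixed number of decimal places before comparing or
// displaying, so both sides agree on the result.
function roundTo(x, places) {
  var factor = Math.pow(10, places);
  return Math.round(x * factor) / factor;
}
console.log(roundTo(1.02 + 1.01, 2)); // 2.03
```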

  • Didn't I already write that? (The base-2 representation.) I can provide more examples of inconsistency. If I receive someone else's code, I don't want to start applying corrections; otherwise I might as well build it myself.
    – Royi Namir
    Commented Dec 2, 2012 at 6:40
  • Computers can store decimal numbers exactly, but they can't use a double for it. For example, .NET has the System.Decimal type, which you can use if you need an exact representation of decimal numbers. Commented Dec 2, 2012 at 9:33
  • @Royi Namir - you're simply mistaken in your understanding of how things "should" work. Nikku Aisuru is absolutely correct. All you have to do to ensure "consistent" results is limit the number of decimal places you print out in your string!
    – paulsm4
    Commented Dec 2, 2012 at 17:48
  • @paulsm4 did you see my second sample (the edit)?
    – Royi Namir
    Commented Dec 2, 2012 at 18:19
