Int to float precision loss in 64bit environment

From: Ralph Schindler
Date: Thu, 14 Jan 2010 00:30:04 +0000
Subject: Int to float precision loss in 64bit environment
Groups: php.internals 
Hey list:


I noticed an issue today while running our ZF unit tests; it primarily comes down to this:

On 32-bit environments ..

~# php -r "var_dump(PHP_INT_MAX+1 > PHP_INT_MAX);"

.. will return true, whereas on 64-bit environments it returns false. In talking with Stas, it seems that since PHP_INT_MAX+1 is pushed into a (float), and since float values are stuffed into a 52-bit mantissa (with 11 bits for the exponent), we are losing some precision. That lost precision causes the above to fail on one platform and work on the other.
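To illustrate (assuming a 64-bit build, where PHP_INT_MAX is 2^63 - 1): once both sides of the comparison are converted to float, they round to the same double, 2^63, so the engine sees two equal values:

~# php -r "var_dump((float)PHP_INT_MAX === (float)(PHP_INT_MAX + 1));"
bool(true)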

Is this noted somewhere? Is there a workaround? Or is this something that can be fixed for 64-bit platforms (somehow)?

This is not a big issue for us, since (it seems) it's strictly a value being created for unit testing, and I am not sure whether it's something people use in code or even see in the wild. I can get around it by ensuring the actual value is large enough to cross the float's rounding granularity, using PHP_INT_MAX+1025.
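As a sketch of another way around it (not necessarily what we'll use), any value far enough past PHP_INT_MAX that it is unambiguously larger as a double also works, e.g.:

~# php -r "var_dump(PHP_INT_MAX * 2 > PHP_INT_MAX);"
bool(true)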

It's not until you've pushed past a 1024 buffer (the integer loss in precision?) that things start working again as expected:

~# php -r "var_dump(PHP_INT_MAX+1 > PHP_INT_MAX);"
bool(false)
~# php -r "var_dump(PHP_INT_MAX+1024 > PHP_INT_MAX);"
bool(false)
~# php -r "var_dump(PHP_INT_MAX+1025 > PHP_INT_MAX);"
bool(true)


Any information would be great.

Thanks,
Ralph Schindler


