Question

This is a question concerning cross-platform consistency and determinism of floating-point operations (i.e. whether they yield different results on different CPUs/systems).

Which one is more likely to stay cross-platform consistent (pseudo code):

float myFloat = float(myInteger) / float(1024)

or

float myFloat = float(myInteger) / float(1000)

Platforms are C# and AS3.
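For reference, the equivalent C# might look like this (a sketch; myInteger is assumed to be an int variable):

float myFloatA = (float)myInteger / 1024f; // C#
float myFloatB = (float)myInteger / 1000f; // C#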


AS3 versions:

var myFloat:Number = myInteger / 1000; // AS3
var myFloat:Number = myInteger / 1024; // AS3

Ok, I've added the AS3 versions for clarification; they are equivalent to the 'C pseudo code' above. As you can see, in AS3 all calculations, even on integers, are automatically performed as Numbers (floats); a cast is not required (nor can you avoid it or force the runtime to perform true integer division). Hopefully this explains why I'm 'casting' everything to floats: I am not! That is simply what happens in one of the target languages.


Solution

The first one (dividing by 1024) is likely to be the same on both platforms, since there are no representation issues. In particular, for small integers (highest 8 bits unused, i.e. values that fit in a float's 24-bit significand) there is one exact result, and it is very likely that this result will be used.

But I wouldn't rely on it. If you need guaranteed determinism, I recommend implementing the required arithmetic yourself on top of plain integers, for example using a fixed-point representation.
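For illustration only, a minimal fixed-point sketch in C# (assumed type and member names; 10 fractional bits, so the scale is 1024) might look like this:

// All state is a plain long scaled by 1024 (10 fractional bits), so every
// operation below is exact integer arithmetic and therefore deterministic.
struct Fixed1024
{
    public const int Scale = 1024;
    public readonly long Raw;                      // value * 1024

    public Fixed1024(long raw) { Raw = raw; }

    public static Fixed1024 FromInt(int i) => new Fixed1024((long)i * Scale);

    public static Fixed1024 operator +(Fixed1024 a, Fixed1024 b)
        => new Fixed1024(a.Raw + b.Raw);

    public static Fixed1024 operator *(Fixed1024 a, Fixed1024 b)
        => new Fixed1024(a.Raw * b.Raw / Scale);   // rescale after the multiply

    public float ToFloat() => (float)Raw / Scale;  // display/debug only
}

Only the integer Raw values would ever be stored, transmitted, or compared across platforms; the conversion to float is for display only.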

The second one is likely to be inconsistent, even when running the same C# code on different hardware or .NET versions. See the related question Is floating-point math consistent in C#? Can it be?

Other tips

I suggest you read the IEEE 754-1985 standard. A copy can be purchased for $43. Although superseded by the 2008 version, it is an excellent introduction to floating-point because it is only 20 pages and is quite readable. It will show you why both dividing by 1000 and by 1024 are deterministic and why the former may have error but the latter does not (except in cases of underflow). It will also give you a basis for understanding the answers you have been given and why you are on the wrong track.

Which one is more likely to stay cross-platform consistent (pseudo code):

Dividing by 1024. Any binary-based floating-point system (IEEE 754, IBM, VAX, Cray) that applies a division by 1024 to a finite number will yield an exact result in the given representation. The reason is that dividing by 1024 is equivalent to

  • shifting the bits 10 positions to the right, which means
  • decreasing the binary exponent by 10

If the number is too small (below roughly 1E-38 in single precision or 1E-308 in double precision for IEEE 754), you will lose the exact result, but this is not a problem of the operation itself, only of the limited range of the format: it simply cannot represent such small results accurately.

As no rounding is necessary, there can be no difference due to rounding (and yes, while most programming languages use round-to-even, some allow you to choose another rounding mode).
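To make the exponent-shift argument concrete, here is a small, purely illustrative C# check that inspects the IEEE 754 bit pattern of a double before and after both divisions (the value 12345.0 is an arbitrary example):

using System;

class ExponentShiftDemo
{
    static void Main()
    {
        double x = 12345.0;                          // arbitrary example value
        long mantissaMask = (1L << 52) - 1;          // low 52 bits = significand field

        long bitsX    = BitConverter.DoubleToInt64Bits(x);
        long bits1024 = BitConverter.DoubleToInt64Bits(x / 1024.0);
        long bits1000 = BitConverter.DoubleToInt64Bits(x / 1000.0);

        // Dividing by 1024 leaves the significand untouched and only lowers
        // the exponent field by 10, so the result is exact.
        Console.WriteLine((bitsX & mantissaMask) == (bits1024 & mantissaMask)); // True
        Console.WriteLine((bitsX - bits1024) == (10L << 52));                   // True

        // Dividing by 1000 has to round, so the significand changes.
        Console.WriteLine((bitsX & mantissaMask) == (bits1000 & mantissaMask)); // False

        // The underflow caveat from above: near the bottom of the range the
        // 10-bit exponent drop pushes the value into (or past) the subnormals
        // and exactness is lost.
        Console.WriteLine(double.Epsilon / 1024.0 == 0.0);                      // True
    }
}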

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow