Question

Can anyone help me understand where I am going wrong here? Why is this happening:

long a = (long)((720000 + 144000) * 0.285);

Actual value of a: 246,239

Expected value of a: 246,240

Changing the type of 'a' to double and removing the cast gives the correct value, but I'm writing a program that could produce very large numbers - am I wrong to use a long as the type?

Any advice is greatly appreciated!


Solution

You're seeing floating-point imprecision.

0.285 has no exact binary representation, so (720000 + 144000) * 0.285 actually computes to 246239.99999999997.
Printing that double rounds it up to 246240, but casting to an integral type always truncates toward zero, which is why you get 246239.
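Here's a minimal sketch, assuming a plain console program, that makes both values visible (the "R" format specifier prints the round-trippable value of the double):

using System;

class Program
{
    static void Main()
    {
        double raw = (720000 + 144000) * 0.285;

        Console.WriteLine(raw.ToString("R")); // 246239.99999999997 -- the value actually stored
        Console.WriteLine((long)raw);         // 246239 -- the cast truncates toward zero
    }
}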

Depending on your use case, you may want to use decimal (or an arbitrary-precision decimal type) instead of double, or simply round before casting.
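For instance, a sketch using C#'s built-in decimal type; the m suffix makes 0.285 a decimal literal, which decimal stores exactly:

using System;

class Program
{
    static void Main()
    {
        // 0.285m is exact in decimal, so the product is exactly 246240
        decimal exact = (720000 + 144000) * 0.285m;
        long a = (long)exact;
        Console.WriteLine(a); // 246240
    }
}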

OTHER TIPS

Because I'm not allowed to comment, I'm putting an answer here that just builds on what SLacks said. If for whatever reason you find that you NEED to use long, C# has a Math.Round method that will round the value first, giving you the expected result after the cast. Of course, rounding isn't always the best option.

http://msdn.microsoft.com/en-us/library/75ks3aby%28v=vs.110%29.aspx
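A short sketch of that approach, rounding the double before casting (variable names are illustrative):

using System;

class Program
{
    static void Main()
    {
        double raw = (720000 + 144000) * 0.285;

        // Round first, then cast; Math.Round(double) uses banker's rounding for .5 ties
        long a = (long)Math.Round(raw);
        Console.WriteLine(a); // 246240

        // To always round .5 ties away from zero instead:
        long b = (long)Math.Round(raw, MidpointRounding.AwayFromZero);
        Console.WriteLine(b); // 246240
    }
}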

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow