Question

I am fairly new to programming and recently remade the Microsoft "Standard" calculator in C# (WinForms), but I wanted to ask a question about the variable type Microsoft uses in their standard calculator.

To determine the variable type used, I entered a calculation into the Microsoft standard calculator:

88888888/9 (eight 8's).

The result was: 9876543.111111111 (9 decimal places)

I then created a simple C# console application that divides 2 numbers.

I first used the "int" type and, as expected, the result was 9876543. I then used the "double" type and came quite close, the result being 9876543.11111111 (8 decimal places). I used decimal and the output (again, as expected) had far too many decimal places! I used float and the output was the same as that of the int type (9876543).

float a = 88888888F;
float b = 9F;
float c = 0F;
Console.WriteLine("a / b is: {0}", c = a / b);
Console.ReadLine();

From the above information, I would say that Microsoft uses type "double", as that came closest in terms of decimal places.

But I am not sure whether I have done something wrong in my code for the floating-point type. (Can you let me know if the above code is wrong? I intentionally cast float to float, just in case, but originally didn't and still got the same result in the end.)

I have tried this link for some info on float types, but it doesn't really help much: http://msdn.microsoft.com/en-gb/library/b1e65aza(v=vs.100).aspx

I then tried C++ and got similar results.

float a = 88888888;
float b = 9;
float c = a / b;
cout <<  c;

double aa = 88888888;
double bb = 9;
double cc = aa / bb;
cout << cc;

Forgive me if I am missing something insanely obvious! The best answer would be one that tells me what variable type Microsoft uses for their standard calculator and gives some form of evidence to back that up. Also, if I have made any errors in my code above, any corrections would be appreciated.


Solution

The precision of a double is 15-16 significant digits, and the integer part of your result already uses 7 of them; that is why, when you used a double, you only got 8 decimal places. Decimals are accurate to 28-29 significant figures, which is why you got more decimal places than you wanted. If you want to match the Windows calculator and show 9 decimal places, you can use the decimal type and Decimal.Round() to round the result to 9 decimal places.

decimal a = 88888888;
decimal b = 9;
decimal c = a / b;
Console.WriteLine("a / b is: {0}", Decimal.Round(c, 9));
Console.ReadLine();

Edit -

The reason the Windows calculator shows a different degree of accuracy is that it uses an arbitrary-precision arithmetic library. From Wikipedia:

In Windows 95 and later, it uses an arbitrary-precision arithmetic library, replacing the standard IEEE floating point library. It offers bignum precision for basic operations (addition, subtraction, multiplication, division) and 32 digits of precision for advanced operations (square root, transcendental operators).

C#'s built-in numeric types don't use such an arbitrary-precision library, which likely explains the difference in results.

OTHER TIPS

If you look at MSDN (http://msdn.microsoft.com/en-us/library/b1e65aza.aspx), you will see that float has only 7 digits of precision, which in your case is 9876543.

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow