Question

I was writing a program that takes a dollar amount from the user and converts it to an int representing cents. The user will always enter either an integer or a floating point number with at most 2 decimal places. I want to convert it to cents by multiplying by 100. However, the program doesn't work for some of the numbers.

int cents = (dollars*100);

dollars is the floating point input that the user gives. For example, if dollars = 4.2, dollars*100 evaluates to roughly 419.999, so cents ends up as 419 instead of 420. How can I correct this problem?
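A minimal sketch that reproduces the behavior (assuming dollars is a float; the exact digits printed depend on the floating point type and compiler):

#include <stdio.h>

int main(void)
{
    float dollars = 4.20f;            /* 4.20 has no exact binary representation */
    int cents = (dollars*100);        /* the product is slightly below 420, so truncation gives 419 */

    printf("dollars*100 = %f\n", dollars*100);   /* prints a value just under 420 */
    printf("cents       = %d\n", cents);         /* prints 419 */
    return 0;
}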


Solution 2

Simply adjust the value like this:

int cents = (int)(dollars*100 + 0.5);
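For context, a complete program using this adjustment could look like the sketch below; reading the amount with scanf is an assumption, since the question does not show how dollars is obtained.

#include <stdio.h>

int main(void)
{
    float dollars;

    if (scanf("%f", &dollars) != 1)   /* assumed input method; not shown in the question */
        return 1;

    /* Adding 0.5 before the truncating cast rounds to the nearest cent,
       so 419.999... becomes 420 instead of 419. */
    int cents = (int)(dollars*100 + 0.5);

    printf("%d\n", cents);
    return 0;
}

Note that this truncation-based rounding is only correct for non-negative amounts; a negative dollar value would need 0.5 subtracted instead.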

Other tips

You can't correct it. Floating point numbers don't have "decimal places": they are stored in binary, so most decimal fractions (such as 4.2) can only be represented approximately. Don't ever use floating point numbers for money -- this is one of the most important rules of software that deals with anything financial. Read the input in as a string (%s), find the decimal point in the string, and then put the cents together by using atoi on the parts before and after the ".".
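A rough sketch of that string-based approach is shown below; the buffer size, the use of strchr to find the decimal point, and the handling of a single fractional digit (so "4.2" becomes 420 rather than 402) are illustrative assumptions, not part of the original answer.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void)
{
    char input[32];                      /* assumed maximum length for the amount */

    if (scanf("%31s", input) != 1)
        return 1;

    int cents;
    char *dot = strchr(input, '.');
    if (dot == NULL) {
        cents = atoi(input) * 100;       /* no decimal point: whole dollars only */
    } else {
        *dot = '\0';                     /* split the string at the decimal point */
        int whole = atoi(input);         /* digits before the '.' */
        int frac = atoi(dot + 1);        /* digits after the '.', at most 2 per the question */
        if (strlen(dot + 1) == 1)
            frac *= 10;                  /* "4.2" means 20 cents, not 2 */
        cents = whole * 100 + frac;
    }

    printf("%d\n", cents);
    return 0;
}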

cents = (int)((dollars*100) + .5);

1) You can probably adjust the floating point precision model in your project properties.
2) When converting from float to int, it helps to add 0.5 so that the truncating conversion rounds to the nearest integer instead of always rounding down:

int cents = (int)((dollars*100.0)+0.5);