Question

I have a financial application that deals with "percentages" quite frequently. We are using "decimal" types to avoid rounding errors. My question is:

If I have a quantity representing 76%, is it better to store it as 0.76 or 76?

Furthermore, what are the advantages and/or disadvantages from a code maintainability point of view? Are there conventions for this?


Solution

If percentage is only a small part of the problem domain, I would probably stick with a primitive number.

Per cent literally means per hundred, so a percentage is just a decimal number; e.g. 76 % is equal to 0.76. Thus, in the absence of units-of-measure support in the programming language itself, I would represent a percentage as its decimal fraction.

This also means that you don't need any special arithmetic to calculate with percentages, but you will need to multiply by 100 when displaying the number as a percentage.
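For example, here is a minimal sketch (in Python, using the standard decimal module; the question doesn't name a language, so this is purely illustrative) of storing the fraction and converting to per cent only at the display boundary:

    from decimal import Decimal

    # Store the percentage as its decimal fraction.
    rate = Decimal("0.76")            # 76 %

    # Ordinary arithmetic needs no special handling:
    price = Decimal("120.00")
    reduced = price * rate            # Decimal("91.2000")

    # Multiply by 100 only when displaying the value as a percentage.
    print(f"{rate * 100} %")          # prints "76.00 %"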

If percentage is a very central part of the problem domain, you should consider avoiding Primitive Obsession and instead introduce a proper Value Object.

Still, even if you introduce a Percent Value Object with conversions to and from primitive numbers, you're still exposed to programmer errors, because the number 0.9 by itself could mean either 90 % or 0.9 %, depending on how you choose to interpret it.
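A minimal sketch of such a Percent Value Object (again in Python; the class and its factory-method names are hypothetical, chosen only to make the interpretation explicit at each call site) could look like this:

    from dataclasses import dataclass
    from decimal import Decimal

    @dataclass(frozen=True)
    class Percent:
        """A percentage, stored internally as its decimal fraction."""
        fraction: Decimal

        @classmethod
        def from_percent_points(cls, points: Decimal) -> "Percent":
            # Percent.from_percent_points(Decimal("76")) represents 76 %.
            return cls(points / 100)

        @classmethod
        def from_fraction(cls, fraction: Decimal) -> "Percent":
            # Percent.from_fraction(Decimal("0.76")) also represents 76 %.
            return cls(fraction)

        def of(self, amount: Decimal) -> Decimal:
            # Apply the percentage to an amount, e.g. a discount or a tax.
            return amount * self.fraction

        def __str__(self) -> str:
            return f"{self.fraction * 100} %"

The named factory methods document the intended interpretation, but nothing stops a caller from passing 0.9 to the wrong one; that is exactly the ambiguity described above.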

In the end, my best advice is to cover your code base with appropriate unit tests, so that you lock the conversion code down.
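For instance, a couple of unit tests (using Python's built-in unittest module and the hypothetical Percent class from the sketch above) that pin the conversions down:

    import unittest
    from decimal import Decimal

    class PercentConversionTests(unittest.TestCase):
        # Assumes the Percent class from the sketch above is in scope.

        def test_percent_points_are_converted_to_fraction(self):
            sut = Percent.from_percent_points(Decimal("76"))
            self.assertEqual(Decimal("0.76"), sut.fraction)

        def test_display_multiplies_fraction_by_100(self):
            sut = Percent.from_fraction(Decimal("0.76"))
            self.assertEqual("76.00 %", str(sut))

    if __name__ == "__main__":
        unittest.main()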

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow