Question

I have data that I'm trying to accurately and precisely manipulate with an ever-increasing denominator.

Please assume that the numerator will always have a decimal.

I see in the docs that divide(BigDecimal divisor) will actually reduce the scale, which seems strange: as I understand "scale" (the number of digits past the decimal point), it should increase upon division.

I also see in the docs that multiply(BigDecimal multiplicand) increases the scale. This also doesn't make sense, according to my understanding of scale, since the likelihood of two multiplied numbers needing digits beyond the decimal point goes down.

Are these typos in the docs?

If not, is my understanding of scale incorrect?

If not, how can precision be maintained with an ever-increasing denominator that increases the number of digits past the decimal point?


Solution

This is effectively just scientific notation. As it says in the docs, the value of a BigDecimal is:

unscaledValue × 10^(-scale)
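
For example (a minimal sketch; the class name ScaleDemo is just illustrative), 1.23 is stored as the unscaled value 123 with a scale of 2:

    import java.math.BigDecimal;

    public class ScaleDemo {
        public static void main(String[] args) {
            BigDecimal x = new BigDecimal("1.23");
            // value == unscaledValue * 10^-scale, i.e. 123 * 10^-2
            System.out.println(x.unscaledValue()); // 123
            System.out.println(x.scale());         // 2
        }
    }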

Thus multiplying two BigDecimals is equivalent to multiplying their unscaledValues and adding their scales:

   a * b
== (uA * 10^-sA) * (uB * 10^-sB)
== (uA * uB) * 10^-(sA + sB)
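
Division works on the same representation: per the docs, the quotient's preferred scale is this.scale() - divisor.scale(), which is why divide can appear to reduce the scale; the divide overloads that take an explicit scale or a MathContext are how you keep precision as the denominator keeps growing. A rough sketch (the literals and class name are just illustrative):

    import java.math.BigDecimal;
    import java.math.MathContext;
    import java.math.RoundingMode;

    public class DivideDemo {
        public static void main(String[] args) {
            BigDecimal a = new BigDecimal("1.50");   // unscaled 150, scale 2
            BigDecimal b = new BigDecimal("0.25");   // unscaled 25,  scale 2

            // multiply: scales add (2 + 2 = 4)
            System.out.println(a.multiply(b));       // 0.3750

            // divide: the preferred scale is a.scale() - b.scale() = 0,
            // used here because the exact quotient fits it
            System.out.println(a.divide(b));         // 6

            BigDecimal one   = BigDecimal.ONE;
            BigDecimal three = new BigDecimal("3");

            // a non-terminating quotient throws ArithmeticException unless bounded,
            // so request an explicit scale and rounding mode...
            System.out.println(one.divide(three, 20, RoundingMode.HALF_UP));
            // 0.33333333333333333333

            // ...or cap the number of significant digits with a MathContext
            System.out.println(one.divide(three, new MathContext(50, RoundingMode.HALF_UP)));
        }
    }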
Licensed under: CC-BY-SA with attribution