Question

Let's say I have

float dt;

I read dt from a text file as

inputFile >> dt;

Then I have a for loop as,

for (float time=dt; time<=maxTime; time+=dt)
{
    // some stuff
}

When dt = 0.05 and I output the time with std::cout << time << std::endl; I get:

0.05
0.10
...
7.00001
7.05001
...

So why does the number of digits increase after a while?


Solution

Because not every number can be represented by IEEE754 floating point values. At some point, you'll get a number that isn't quite representable and the computer will have to choose the nearest one.

If you enter 0.05 into Harald Schmidt's excellent online converter and reference the Wikipedia entry on IEEE754-1985, you'll end up with the following bits (my explanation of that follows):

   s eeeeeeee mmmmmmmmmmmmmmmmmmmmmmm
   0 01111010 10011001100110011001101
     |||||||| |||||||||||||||||||||||
128 -+||||||| ||||||||||||||||||||||+- 1 / 8388608
 64 --+|||||| |||||||||||||||||||||+-- 1 / 4194304
 32 ---+||||| ||||||||||||||||||||+--- 1 / 2097152
 16 ----+|||| |||||||||||||||||||+---- 1 / 1048576
  8 -----+||| ||||||||||||||||||+----- 1 /  524288
  4 ------+|| |||||||||||||||||+------ 1 /  262144
  2 -------+| ||||||||||||||||+------- 1 /  131072
  1 --------+ |||||||||||||||+-------- 1 /   65536
              ||||||||||||||+--------- 1 /   32768
              |||||||||||||+---------- 1 /   16384
              ||||||||||||+----------- 1 /    8192
              |||||||||||+------------ 1 /    4096
              ||||||||||+------------- 1 /    2048
              |||||||||+-------------- 1 /    1024
              ||||||||+--------------- 1 /     512
              |||||||+---------------- 1 /     256
              ||||||+----------------- 1 /     128
              |||||+------------------ 1 /      64
              ||||+------------------- 1 /      32
              |||+-------------------- 1 /      16
              ||+--------------------- 1 /       8
              |+---------------------- 1 /       4
              +----------------------- 1 /       2

The sign, being 0, is positive. The exponent is indicated by the one-bits mapping to the numbers on the left: 64+32+16+8+2 = 122 - 127 bias = -5, so the multiplier is 2^-5 or 1/32. The 127 bias is to allow representation of very small numbers (as in close to zero rather than negative numbers with a large magnitude).

The mantissa is a little more complicated. For each one-bit, you accumulate the numbers down the right hand side (after adding an implicit 1). Hence you can calculate the number as the sum of {1, 1/2, 1/16, 1/32, 1/256, 1/512, 1/4096, 1/8192, 1/65536, 1/131072, 1/1048576, 1/2097152, 1/8388608}.

When you add all these up, you get 1.60000002384185791015625.

When you multiply that by the multiplier 1/32 (calculated previously from the exponent bits), you get 0.0500000007450580596923828125, so you can see that 0.05 is already not represented exactly. This bit pattern for the mantissa is actually the same as that of 0.1, just with an exponent of -4 rather than -5, and it's why 0.1 + 0.1 + 0.1 is rarely equal to 0.3 (this appears to be a favourite interview question).
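
If you want to check that arithmetic yourself, here is a small sketch of mine (not part of the original answer) that pulls the three fields out of a float with memcpy and rebuilds the value; 8388608 is 2^23, the size of the mantissa field:

#include <cmath>
#include <cstdint>
#include <cstring>
#include <iomanip>
#include <iostream>

int main()
{
    float f = 0.05f;

    // Reinterpret the float's 32 bits as an unsigned integer.
    std::uint32_t bits = 0;
    std::memcpy(&bits, &f, sizeof bits);

    std::uint32_t sign     = bits >> 31;            // 1 bit
    std::uint32_t exponent = (bits >> 23) & 0xFF;   // 8 bits, biased by 127
    std::uint32_t mantissa = bits & 0x7FFFFF;       // 23 bits, implicit leading 1

    // value = (-1)^sign * (1 + mantissa / 2^23) * 2^(exponent - 127)
    double value = (sign ? -1.0 : 1.0)
                 * std::ldexp(1.0 + mantissa / 8388608.0,
                              static_cast<int>(exponent) - 127);

    std::cout << "sign=" << sign << " exponent=" << exponent
              << " mantissa=" << mantissa << '\n';
    std::cout << std::setprecision(28) << value << '\n';   // 0.0500000007450580596923828125
}

For 0.05f this prints sign=0, exponent=122 and mantissa=5033165, which is exactly the bit pattern in the diagram above.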

When you start adding them up, that small error will accumulate: not only is there an error in the 0.05 itself, errors may also be introduced at each stage of the accumulation, since not all of the numbers 0.1, 0.15, 0.2 and so on can be represented exactly either.
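
If the accumulation itself is the problem, one common workaround (a sketch of mine, not something from the question) is to drive the loop with an integer counter and derive the time by a single multiplication, so the rounding error never compounds from one step to the next:

#include <iostream>

int main()
{
    // Hypothetical values mirroring the question.
    float dt = 0.05f;
    float maxTime = 10.0f;

    // One rounding error per multiplication rather than an ever-growing sum.
    for (int step = 1; step * dt <= maxTime; ++step)
    {
        float time = step * dt;
        std::cout << time << '\n';
        // some stuff
    }
}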

Eventually, the errors will get large enough that they'll start showing up in the number if you use the default precision. You can put this off for a bit by choosing your own precision with something like:

#include <iostream>
#include <iomanip>
:
std::cout << std::setprecision (2) << time << '\n';

It won't fix the variable value, but it will give you some more breathing space before the errors become visible.
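
For instance, a small self-contained variation on that snippet (I've added std::fixed, which isn't in the line above, so that the two digits count from the decimal point rather than from the first significant digit):

#include <iomanip>
#include <iostream>

int main()
{
    float dt = 0.05f;               // hypothetical values mirroring the question
    float time = 0.0f;

    for (int i = 0; i < 141; ++i)   // accumulate well past the 7.0 mark
        time += dt;

    std::cout << time << '\n';      // default precision shows the drift, e.g. 7.05001
    std::cout << std::fixed << std::setprecision(2) << time << '\n';   // 7.05
}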

As an aside, some people recommend avoiding std::endl since it forces a flush of the buffers. If your implementation is behaving itself, this will happen for terminal devices when you send a newline anyway. And if you've redirected standard output to a non-terminal, you probably don't want flushing on every line. Not really relevant to your question and it probably won't make a real difference in the vast majority of cases, just a point I thought I'd bring up.

Other tips

IEEE floats use the binary number system and therefore can't store most decimal fractions exactly. When you add several of them together (sometimes just two is enough), the representational errors can accumulate and become visible.
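
For example (my own quick demonstration, not from the answers above), the classic 0.1 + 0.1 + 0.1 comparison mentioned earlier shows this directly:

#include <iostream>

int main()
{
    double sum = 0.1 + 0.1 + 0.1;

    // The literals are rounded to the nearest representable doubles and each
    // addition rounds again, so the two results differ slightly.
    std::cout << std::boolalpha << (sum == 0.3) << '\n';   // false
    std::cout.precision(17);
    std::cout << sum << '\n';   // 0.30000000000000004
    std::cout << 0.3 << '\n';   // 0.29999999999999999
}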

Some numbers can't be represented precisely as base-2 floating point values. If I remember correctly, decimal 0.05 is one of them (in base 2 it becomes an infinitely repeating fraction). Another issue is that if you print a floating point value to a file (as a base-10 number) and then read it back, you may get a slightly different number, because the fractional part doesn't convert cleanly between base 2 and base 10 unless you write out enough digits.
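
As a sketch of that round-trip issue (mine, using a stringstream in place of a real file): writing a float out with too few decimal digits and reading the text back can give a value that no longer compares equal, whereas max_digits10 significant digits (9 for float) are guaranteed to convert back to the same value:

#include <iomanip>
#include <iostream>
#include <limits>
#include <sstream>

int main()
{
    float original = 1.0f / 3.0f;   // an arbitrary value to round-trip

    // Write it as base-10 text with only 4 significant digits, then read it back.
    std::ostringstream shortText;
    shortText << std::setprecision(4) << original;
    float restored = 0.0f;
    std::istringstream in1(shortText.str());
    in1 >> restored;
    std::cout << std::boolalpha << (restored == original) << '\n';   // false

    // max_digits10 digits are enough to get exactly the same float back.
    std::ostringstream fullText;
    fullText << std::setprecision(std::numeric_limits<float>::max_digits10) << original;
    std::istringstream in2(fullText.str());
    in2 >> restored;
    std::cout << (restored == original) << '\n';                     // true
}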

If you want better precision you could try searching for a bignum library. It will be much slower than floating point, though. Another way to deal with precision problems is to store numbers as common fractions with a numerator and a denominator (i.e. 1/10 instead of 0.1, 1/3 instead of 0.333..., etc. - there's probably a library for that too, but I haven't come across one), but that won't work with irrational numbers like pi or e.
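
A toy version of the numerator/denominator idea (purely illustrative; a real library such as Boost.Rational does this properly) might look like this - note that adding 1/20 a hundred and forty times lands exactly on 7, with no drift at all:

#include <iostream>
#include <numeric>   // std::gcd, C++17

// Minimal exact fraction: enough to show the idea, not production quality.
struct Fraction
{
    long long num;
    long long den;

    Fraction(long long n, long long d) : num(n), den(d)
    {
        long long g = std::gcd(num, den);
        num /= g;
        den /= g;
    }
};

Fraction operator+(Fraction a, Fraction b)
{
    return Fraction(a.num * b.den + b.num * a.den, a.den * b.den);
}

int main()
{
    Fraction dt(1, 20);       // exactly 0.05
    Fraction time(0, 1);

    for (int i = 0; i < 140; ++i)
        time = time + dt;     // no rounding error ever accumulates

    std::cout << time.num << '/' << time.den << '\n';   // 7/1
}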
