Problem

I'm debugging some old C code and it has a definition #define PI 3.14... where ... is about 50 other digits.

Why is this? I said I could reduce the number to about 16 decimal places, but my boss snarled at me, saying that the extra digits are there for platform independence and forward compatibility. But will it slow the program down?

Solution

No, this will not slow the program down unless you are running on an incredibly underpowered DSP chip (say, 1 MHz) that has to do floating-point arithmetic in software rather than handing it off to a dedicated FPU. On such hardware, any operation on floating-point data is much slower than the equivalent integer arithmetic.

In general, greater precision will only introduce a slowdown if the most time-consuming part of your program is doing lots of calculations in rapid succession and floating-point calculations are especially slow there. On a modern CPU this is generally not the case, with the possible exception of certain chips that incur an 80-cycle stall on events such as floating-point underflow. That kind of issue is beyond the scope of this question.

First, it's better to use a common standard definition of PI, such as M_PI from the <math.h> header, where it is defined as #define M_PI 3.14159265358979323846. (Strictly speaking, M_PI is a POSIX extension rather than part of ISO C, which is why the fallback below checks for it.) If you insist, you can go ahead and define it manually.

Also, the best precision currently available in C is the equivalent of about 19 digits.

According to Wikipedia, the 80-bit "Intel" IEEE 754 extended-precision long double, which is 80 bits typically padded to 12 or 16 bytes in memory, has a 64-bit mantissa with no implicit bit, which gives you about 19.26 decimal digits. This has been the almost universal standard for long double for ages, but recently things have started to change.
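
You can check these parameters on your own platform via <float.h>. A minimal sketch (LDBL_DECIMAL_DIG assumes a C11 compiler; the values in the comments are the usual x86 ones, not guarantees):

#include <stdio.h>
#include <float.h>

int main(void) {
    /* Mantissa bits and decimal-digit guarantees for long double. */
    printf("LDBL_MANT_DIG:    %d\n", LDBL_MANT_DIG);    /* 64 for x86 extended */
    printf("LDBL_DIG:         %d\n", LDBL_DIG);         /* 18 for x86 extended */
#ifdef LDBL_DECIMAL_DIG
    printf("LDBL_DECIMAL_DIG: %d\n", LDBL_DECIMAL_DIG); /* 21 for x86 (C11) */
#endif
    /* The 19.26 figure is just 64 * log10(2). */
}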

The newer 128-bit quad-precision format has 112 mantissa bits plus an implicit bit, which gives you 34 decimal digits. GCC implements this as the __float128 type, and there is (if memory serves, -mlong-double-128 on x86) a compiler option to set long double to it.
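
As an illustration, here is a sketch using GCC's libquadmath (an assumption about your toolchain; link with -lquadmath):

#include <stdio.h>
#include <quadmath.h>   /* GCC's quad-precision support library */

int main(void) {
    __float128 pi = M_PIq;  /* pi constant provided by quadmath.h */
    char buf[64];

    /* quadmath_snprintf understands the Q length modifier for __float128. */
    quadmath_snprintf(buf, sizeof buf, "%.33Qg", pi);
    printf("pi as __float128: %s\n", buf);  /* about 34 significant digits */
}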

Personally, if I were required to use our own definition of pi, I'd write something like this:

#include <math.h>  /* may provide M_PI (POSIX extension; not guaranteed by ISO C) */

#ifndef M_PI
/* No M_PI from the library: fall back to our own high-precision literal.
   The extra digits cost nothing; the compiler rounds at compile time. */
#define PI 3.14159265358979323846264338327950288419716939937510
#else
#define PI M_PI
#endif
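
One caveat: some toolchains hide M_PI by default. On MSVC, for instance, you have to define _USE_MATH_DEFINES before including <math.h>, and strictly conforming ISO C modes may not provide it at all, which is exactly the case the #ifndef fallback above covers.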

If a future C standard adds an even wider primitive floating-point type, it's a safe bet that the constants in the math library will be updated to support it.

References

  1. More Precise Floating point Data Types than double?, Accessed 2014-03-13, <https://stackoverflow.com/questions/15659668/more-precise-floating-point-data-types-than-double>
  2. Math constant PI value in C, Accessed 2014-03-13, <https://stackoverflow.com/questions/9912151/math-constant-pi-value-in-c>

Other Tips

The number of digits in a macro definition almost certainly will have no effect at all on run-time performance.

Macro expansion is textual. That means that if you have:

#define PI 3.14159... /* 50 digits */

then any time you refer to PI in code to which that definition is visible, it will be as if you had written out 3.14159....
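
You can watch this happen by running only the preprocessor. A small sketch, assuming a GCC-style driver (the -E flag stops after preprocessing):

/* pi_expand.c */
#define PI 3.14159265358979323846264338327950288419716939937510

double half_pi = PI / 2.0;

/* $ gcc -E pi_expand.c
 * ...
 * double half_pi = 3.14159265358979323846264338327950288419716939937510 / 2.0;
 *
 * The macro is substituted as plain text before compilation proper begins.
 */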

C has just three floating-point types: float, double, and long double. Their sizes and precisions are implementation-defined, but they're typically 32 bits, 64 bits, and something wider than 64 bits (the size of long double typically varies more from system to system than the other two do).
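
To see what your implementation uses, printing the sizes is enough (the byte counts in the comment are typical for x86-64 Linux, not guarantees):

#include <stdio.h>

int main(void) {
    /* Implementation-defined; commonly 4, 8, and 16 (or 12) bytes. */
    printf("float:       %zu bytes\n", sizeof(float));
    printf("double:      %zu bytes\n", sizeof(double));
    printf("long double: %zu bytes\n", sizeof(long double));
}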

If you use PI in an expression, it will be evaluated as a value of some specific type. And in fact, if there's no L suffix on the literal, it will be of type double.

So if you write:

double x = PI / 2.0;

it's as if you had written:

double x = 3.14159... / 2.0;

The compiler will probably evaluate the division at compile time, generating a value of type double. Any extra precision in the literal will be discarded.

To see this, you can try writing a small program that uses the PI macro and examining an assembly listing.

For example:

#include <stdio.h>

#define PI 3.141592653589793238462643383279502884198716939937510582097164

int main(void) {
    double x = PI;
    printf("x = %g\n", x);
}

On my x86_64 system, the generated machine code has no reference to the full precision value. The instruction corresponding to the initialization is:

movabsq $4614256656552045848, %rax

where 4614256656552045848 is a 64-bit integer corresponding to the binary IEEE double-precision representation of a number as close as possible to 3.141592653589793238462643383279502884198716939937510582097164.

The actual stored floating-point value on my system happens to be exactly:

3.1415926535897931159979634685441851615905761718750000000000000000

of which only about 16 decimal digits are significant.
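
Conversely, if you want the extra digits to survive, give the literal an L suffix so it has type long double. A quick sketch (PIL is just a name chosen here for illustration):

#include <stdio.h>

#define PI  3.14159265358979323846264338327950288419716939937510
#define PIL 3.14159265358979323846264338327950288419716939937510L

int main(void) {
    /* The unsuffixed literal is rounded to double; the L-suffixed one
       keeps whatever extra precision long double provides. */
    printf("as double:      %.20f\n",  (double)PI);
    printf("as long double: %.20Lf\n", PIL);
}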

License: CC-BY-SA with attribution
Not affiliated with StackOverflow