What is the definition for gamma(double x) and why is it different on two gcc versions?

StackOverflow https://stackoverflow.com/questions/18116376

  •  23-06-2022

Question

Through unfortunate circumstances I have discovered that my standard library's <math.h> and <cmath> (C++) headers apparently contain a definition for a function with a prototype like:

double gamma(double x);

Though I do not see it listed anywhere in the language standard (at least not in the draft I have access to).

Using gcc v4.2.1 on Mac OS X, that function evaluates the same as tgamma, which is the name the standard actually gives it.

But on gcc v4.6.3 on Ubuntu 12.04, that same function evaluates to something different.

I don't understand why a call to a function named gamma compiles at all, and why the result is not the same between compilers.

Here is a simple test program:

#include <iostream>
#include <sstream>
#include <math.h> // or <cmath>

int main(int argc, char **argv)
{
    // Parse the first command-line argument as a double.
    std::stringstream conversionStream(argv[1]);
    double input;
    conversionStream >> input;
    std::cout << "gamma( " << input << " ) = " << gamma(input) << std::endl;
    return 0;
}

Compile and run with 1 argument:

$ g++ -o gamma_test gamma_test.cpp 
$ ./gamma_test 1.0
gamma( 1 ) = 1

But the output on Ubuntu gcc v4.6.3 is 0!


Solution

Summary: Historical confusion abounds; avoid gamma() and use tgamma().

It's the math library, not gcc (the compiler), that implements these functions. If you're seeing different behavior on MacOS and Ubuntu, it's probably because Ubuntu uses glibc and MacOS uses something else.

There is no function named gamma in the ISO C standard library.

There are standard functions called lgamma and tgamma. Quoting N1570 (the latest draft of the 2011 ISO C standard), sections 7.12.8.3 and 7.12.8.4:

#include <math.h>
double lgamma(double x);
float lgammaf(float x);
long double lgammal(long double x);

The lgamma functions compute the natural logarithm of the absolute value of gamma of x. A range error occurs if x is too large. A pole error may occur if x is a negative integer or zero.

#include <math.h>
double tgamma(double x);
float tgammaf(float x);
long double tgammal(long double x);

The tgamma functions compute the gamma function of x. A domain error or pole error may occur if x is a negative integer or zero. A range error occurs if the magnitude of x is too large and may occur if the magnitude of x is too small.

These functions do not appear in the 1990 ISO C standard. They were introduced by C99.
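
For instance, here is a minimal sketch of using the two standard functions via <cmath>, which places them in namespace std (this assumes a C++11 compiler; otherwise use ::tgamma and ::lgamma from <math.h>):

#include <cmath>
#include <iostream>

int main()
{
    const double x = 5.0;
    // tgamma is the "true" Gamma function: Gamma(5) = 4! = 24
    std::cout << "tgamma(5) = " << std::tgamma(x) << '\n';
    // lgamma is the natural log of |Gamma(x)|: ln(24), roughly 3.178
    std::cout << "lgamma(5) = " << std::lgamma(x) << '\n';
    return 0;
}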

Quoting the Linux man page for gamma:

These functions are deprecated: instead, use either the tgamma(3) or the lgamma(3) functions, as appropriate.

For the definition of the Gamma function, see tgamma(3).

  • *BSD version

    The libm in 4.4BSD and some versions of FreeBSD had a gamma() function that computes the Gamma function, as one would expect.

  • glibc version

    Glibc has a gamma() function that is equivalent to lgamma(3) and computes the natural logarithm of the Gamma function.

and an historical note:

4.2BSD had a gamma() that computed ln(|Gamma(|x|)|), leaving the sign of Gamma(|x|) in the external integer signgam. In 4.3BSD the name was changed to lgamma(3), and the man page promises

"At some time in the future the name gamma will be rehabilitated and used for the Gamma function"

This did indeed happen in 4.4BSD, where gamma() computes the Gamma function (with no effect on signgam). However, this came too late, and we now have tgamma(3), the "true gamma" function.

Since gamma is not a standard C function, compiling with gcc -std=c99 -pedantic or gcc -std=c11 -pedantic should produce at least a warning for any attempt to call it.

You should probably use tgamma() (or lgamma() if you want the natural logarithm) and avoid using gamma().
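
Applied to the test program from the question, the fix is simply to call tgamma instead of gamma (shown here as std::tgamma from <cmath>, again assuming C++11; with older compilers you may need ::tgamma from <math.h>):

#include <iostream>
#include <sstream>
#include <cmath>

int main(int argc, char **argv)
{
    if (argc < 2) {
        std::cerr << "usage: " << argv[0] << " <x>\n";
        return 1;
    }
    std::stringstream conversionStream(argv[1]);
    double input;
    conversionStream >> input;
    // std::tgamma is the standard "true gamma" function, so this prints 1
    // for an input of 1 on both platforms.
    std::cout << "tgamma( " << input << " ) = " << std::tgamma(input) << std::endl;
    return 0;
}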

The C standard doesn't seem to say what the Gamma function is. The Linux tgamma() man page does (but if you're trying to use it you probably already know what it is):

The Gamma function is defined by

Gamma(x) = integral from 0 to infinity of t^(x-1) e^-t dt

It is defined for every real number except for nonpositive integers.
For nonnegative integral m one has

Gamma(m+1) = m!

and, more generally, for all x:

Gamma(x+1) = x * Gamma(x)

Furthermore, the following is valid for all values of x outside the poles:

Gamma(x) * Gamma(1 - x) = PI / sin(PI * x)
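
Those identities are easy to check numerically with the standard tgamma; a small sketch (using std::acos(-1.0) for pi, since M_PI is itself a non-standard extension):

#include <cmath>
#include <iostream>

int main()
{
    const double pi = std::acos(-1.0);

    // Gamma(m+1) == m!  For m = 5: Gamma(6) == 120
    std::cout << std::tgamma(6.0) << '\n';

    // Recurrence: Gamma(x+1) == x * Gamma(x)
    const double x = 2.5;
    std::cout << std::tgamma(x + 1.0) << " == " << x * std::tgamma(x) << '\n';

    // Reflection formula: Gamma(x) * Gamma(1-x) == pi / sin(pi*x)
    std::cout << std::tgamma(x) * std::tgamma(1.0 - x) << " == "
              << pi / std::sin(pi * x) << '\n';
    return 0;
}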

OTHER TIPS

On some platforms (Linux/glibc, and BSD before 4.4 or so), gamma is lgamma (the log of the Gamma function). On other platforms (OS X, and 4.4BSD onward), it is tgamma (the "true" Gamma function). This is a historical curiosity.

There's a note about this in the OS X manpages:

gamma() and gamma_r() are deprecated, and should not be used. The tgamma() function should be used instead. Note, however, that on some platforms, gamma() and gamma_r() historically computed the log of the Gamma function, instead of the Gamma function itself. When porting code from such platforms, it will be necessary to use lgamma() or lgamma_r() instead.

Long story short: just don't use gamma. It's non-standard anyway. Use the standard functions tgamma and lgamma instead (or perhaps the POSIX extension lgamma_r, which is re-entrant; see the sketch below).
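
For completeness, a sketch of the re-entrant variant. lgamma_r is an extension rather than standard C, so its availability and the feature-test macros it needs vary by platform (glibc typically exposes it when _GNU_SOURCE or _DEFAULT_SOURCE is defined, which g++ on Linux usually does by default):

#include <math.h>   // lgamma_r is typically declared here as an extension
#include <iostream>

int main()
{
    int sign = 0;
    // lgamma_r returns ln(|Gamma(x)|) and writes the sign of Gamma(x) into
    // 'sign' instead of the global 'signgam', so it is safe to call from
    // multiple threads.
    double logval = lgamma_r(-2.5, &sign);
    // Recover Gamma(-2.5) from its sign and log-magnitude.
    std::cout << "Gamma(-2.5) = " << sign * exp(logval) << '\n';
    return 0;
}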

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow