Question

I'm trying to compile some code with the CUDA SDK 5.5 RC and g++ 4.7 on Mac OS X 10.8. If I understand correctly, CUDA 5.5 should work with g++ 4.7. Looking at /usr/local/cuda/include/host_config.h, it should even work with g++ 4.8.
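For reference, the check in host_config.h is just a preprocessor guard on the gcc version macros. Paraphrased from memory (the exact wording in your copy may differ), it looks something like this:

// paraphrased from /usr/local/cuda/include/host_config.h (CUDA 5.5 RC)
#if defined(__GNUC__)
#if __GNUC__ > 4 || (__GNUC__ == 4 && __GNUC_MINOR__ > 8)
#error -- unsupported GNU version! gcc 4.9 and up are not supported!
#endif
#endif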

Concerning g++ 4.8: I tried to compile the following program:

// example.cu
#include <stdio.h>
int main(int argc, char** argv) {
  printf("Hello World!\n");
  return 0;
}

But it fails:

$ nvcc example.cu -ccbin=g++-4.8
/usr/local/Cellar/gcc48/4.8.1/gcc/include/c++/4.8.1/cstdlib(178): error: identifier "__int128" is undefined
/usr/local/Cellar/gcc48/4.8.1/gcc/include/c++/4.8.1/cstdlib(179): error: identifier "__int128" is undefined
2 errors detected in the compilation of "/tmp/tmpxft_00007af2_00000000-6_example.cpp1.ii".
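My understanding (which may be off) is that gcc 4.8's libstdc++ is configured with _GLIBCXX_USE_INT128, so <cstdlib> declares overloads for the __int128 built-in type, which nvcc's frontend doesn't know about. The offending lines look roughly like this (paraphrased):

// paraphrased from gcc 4.8's <cstdlib>
#if !defined(__STRICT_ANSI__) && defined(_GLIBCXX_USE_INT128)
inline __int128
abs(__int128 __x) { return __x >= 0 ? __x : -__x; }
#endif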

The same program compiles and runs with g++ 4.7:

$ nvcc example.cu -ccbin=g++-4.7
$ ./a.out 
Hello World!

But if I include <limits>...

// example_limits.cu
#include <stdio.h>
#include <limits>
int main(int argc, char** argv) {
  printf("Hello World!\n");
  return 0;
}

... even g++ 4.7 fails. The build log is located here: https://gist.github.com/lysannschlegel/6121347
You can also find a few other errors there; I'm not entirely sure whether they are all related to the missing __int128.
It could well be that other standard library includes break the build on g++ 4.7 as well; <limits> is just the one I tripped over.
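For what it's worth, <limits> seems to trip over the same macro: gcc 4.7's header provides a numeric_limits specialization for __int128 behind the same guard, roughly (paraphrased):

// paraphrased from gcc 4.7's <limits>
#if !defined(__STRICT_ANSI__) && defined(_GLIBCXX_USE_INT128)
template<>
struct numeric_limits<__int128>
{ /* ... */ };
#endif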

I also tried g++ 4.5 because I happen to have it on my machine as well (you can never have too many compiler versions, can you?), and it works.

Can I expect that this will be fixed in the release of CUDA 5.5? (I hope NVIDIA doesn't simply go back to supporting gcc only up to version 4.6.)
Is there a way to work around this in the meantime?

UPDATE:

As @talonmies points out below, this is not strictly a bug in CUDA 5.5 on Mac OS X, since gcc is not officially supported there. But because many third-party libraries don't properly handle the officially supported toolchains, clang or llvm-gcc (the latter dating from 2007...), there is still a need to make gcc work. gcc up to 4.6 should work fine (I only tested 4.5).
You can make gcc 4.7 work using the trick pointed out by @BenC in the comments:

$ cat compatibility.h 
#undef _GLIBCXX_ATOMIC_BUILTINS
#undef _GLIBCXX_USE_INT128

$ nvcc example_limits.cu -ccbin=g++-4.7 --pre-include compatibility.h
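If I read the nvcc documentation correctly, --pre-include simply injects the header at the top of the translation unit, so the command above should be roughly equivalent to compiling this:

// what --pre-include effectively makes of example_limits.cu
#include "compatibility.h"  // the two #undefs are seen first
#include <stdio.h>
#include <limits>
int main(int argc, char** argv) {
  printf("Hello World!\n");
  return 0;
}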

nvcc with gcc 4.8 still chokes on __int128 in <cstdlib>. I guess <cstdlib> is included before the --pre-include files are.


Solution

You need to read the MacOS getting started guide more closely:

To use CUDA on your system, you will need the following installed:

‣ CUDA-capable GPU

‣ Mac OSX v. 10.7.5 or later

‣ The gcc or Clang compiler and toolchain installed using Xcode

‣ NVIDIA CUDA Toolkit (available at http://developer.nvidia.com/cuda-downloads)

That means precisely what it says: use the compiler(s) that ship with Xcode. Don't use a self-built gcc, because it isn't guaranteed to work, even if that compiler version is listed as supported on other platforms, and even if trivial code appears to compile correctly.
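As a concrete starting point, with the Xcode command line tools installed, the stock compilers live in /usr/bin, so something like this (assuming your Xcode ships clang there, as 10.8 setups typically do) stays within the supported configuration:

$ nvcc example.cu -ccbin=/usr/bin/clang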

Licensed under: CC-BY-SA with attribution