Question

I would like to compile parallel.cu and python_wrapper.cpp, where python_wrapper.cpp uses Boost.Python to expose the methods in parallel.cu to Python.
I'm new to both CUDA and Boost.Python.
From their manuals and Google, I couldn't find how to make them talk to each other.
Some sites say that I should do something like

nvcc -c parallel.cu
g++ -c python_wrapper.cpp
g++ parallel.o python_wrapper.o

But the only way I know to compile Boost.Python code is to use bjam.
There have been attempts to integrate nvcc into bjam, but I couldn't make them work.

parallel.cuh

__global__ void parallel_work();
int do_parallel_work();

python_wrapper.cpp

#include <boost/python/module.hpp>
#include <boost/python/def.hpp>
#include "parallel.cuh"

BOOST_PYTHON_MODULE(parallel_ext){
    using namespace boost::python;
    def("parallel", do_parallel_work);
}
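
(For reference, a minimal parallel.cu behind that header could look like the sketch below; the kernel body is only a placeholder and not the original poster's actual code.)

parallel.cu

#include "parallel.cuh"

__global__ void parallel_work()
{
    // placeholder: real code would operate on device memory here
}

int do_parallel_work()
{
    parallel_work<<<1, 32>>>();   // launch the kernel with one block of 32 threads
    cudaDeviceSynchronize();      // wait for the kernel to finish
    return 0;                     // value returned to Python
}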

How can I compile these files?
I have heard of PyCUDA, but I need to include Boost and the Thrust library in my .cu files.
Also, if possible, I would like to stick to a standard command-line-driven compilation process.

Solution

Create a static or dynamic library with the CUDA functions and link it in. That is, use nvcc to create the library and then, in a separate step, use g++ to create the Python module and link in the library.
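
In practice, and assuming the usual locations (Python 2.7 headers, CUDA in /usr/local/cuda, a Boost.Python library named boost_python — adjust all of these for your system), the command-line steps could look something like this:

# compile the CUDA code into a position-independent object file
nvcc -Xcompiler -fPIC -c parallel.cu -o parallel.o

# optionally bundle it into a static library
ar rcs libparallel.a parallel.o

# compile the Boost.Python wrapper
g++ -fPIC -I/usr/include/python2.7 -c python_wrapper.cpp -o python_wrapper.o

# link everything into a Python extension module named after BOOST_PYTHON_MODULE
g++ -shared python_wrapper.o libparallel.a \
    -L/usr/local/cuda/lib64 -lcudart -lboost_python \
    -o parallel_ext.so

The resulting parallel_ext.so can then be loaded from Python with "import parallel_ext".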

OTHER TIPS

To integrate code compiled with nvcc and code compiled with g++, I just defined a new bjam compile rule that turns CUDA sources (.cu files) into .o object files.

The rules to compile .cu to .o live in an nvcc.jam file, which I import from my Jamroot.

Below is my nvcc.jam file:

# register .cu as a new source type so bjam recognises CUDA sources
import type ;
type.register CUDA : cu ;

# tell bjam how to turn a CUDA source into an object file via the nvcc.compile action
import generators ;
generators.register-standard nvcc.compile : CUDA : OBJ ;

actions compile
{
    "/usr/local/cuda/bin/nvcc" -gencode=arch=compute_10,code=\"sm_10,compute_10\"  -gencode=arch=compute_20,code=\"sm_20,compute_20\" -gencode=arch=compute_30,code=\"sm_30,compute_30\"  -m64 --compiler-options -fno-strict-aliasing  -I. -I/usr/local/cuda/include -I/home/user/GPU/SDK/C/common/inc -I/home/user/GPU/SDK/shared/inc -DUNIX -O2   -o $(<) -c $(>)
}

Obviously it's a bit of a hack, as the CUDA installation paths are hardcoded, but it works fine for my needs. I would love to have an equivalent (and hopefully cleaner) bjam extension distributed with the NVIDIA SDK.

In the main project file I can then define targets that use both .cpp and .cu files, like:

exe testdraw
:
    gpu/drawable.cu
    gpu/testdraw.cpp
    gpu/cudacommon.cu
    gpu/host.cpp
    gpu/opencl.cpp
    gpu/opencl24.cpp

    png
    z
    cl
    libboost_program_options

    cuda
    cudart
    cublas
:
;

cuda, cudart and cublas are the usual CUDA libraries, declared as prebuilt libs in the usual way:

lib cudart : : <name>cudart ;
lib cuda : : <name>cuda ;
lib cublas : : <name>cublas ;
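
If the CUDA libraries are not on the default linker search path, Boost.Build also lets you point at a directory with the <search> feature (the path below is just an example):

lib cudart : : <name>cudart <search>/usr/local/cuda/lib64 ;
lib cuda : : <name>cuda <search>/usr/local/cuda/lib64 ;
lib cublas : : <name>cublas <search>/usr/local/cuda/lib64 ;
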
Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow