Problem

I am using a GPU cluster without GPUDirect support. According to this briefing, the following steps happen when transferring GPU data across nodes:

  1. GPU writes to pinned sysmem1
  2. CPU copies from sysmem1 to sysmem2
  3. InfiniBand driver copies from sysmem2

Now I am not sure whether the second step happens implicitly when I transfer the contents of sysmem1 across InfiniBand using MPI. Assuming it does, my current programming model is something like this:

  1. cudaMemcpy(hostmem, devicemem, size, cudaMemcpyDeviceToHost).
  2. MPI_Send(hostmem,...)

Is my assumption above correct, and will my programming model work without causing communication issues?


Solution

Yes, you can use CUDA and MPI independently (i.e. without GPUDirect), just as you describe.

  1. Move the data from device to host
  2. Transfer the data as you ordinarily would, using MPI

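A minimal sketch of that pattern in CUDA C with MPI (the buffer names, the blocking byte send to a peer rank, and the use of cudaMallocHost for a pinned staging buffer are my own additions, not from your post; error checking is omitted):

    #include <mpi.h>
    #include <cuda_runtime.h>

    /* Non-CUDA-aware transfer: stage the GPU buffer through pinned host
     * memory, then hand the host buffer to MPI as usual.
     * devicemem, size, and peer are placeholder names. */
    void send_gpu_buffer(const void *devicemem, size_t size, int peer)
    {
        void *hostmem = NULL;
        cudaMallocHost(&hostmem, size);              /* pinned sysmem */

        cudaMemcpy(hostmem, devicemem, size,
                   cudaMemcpyDeviceToHost);          /* step 1: device -> host */
        MPI_Send(hostmem, (int)size, MPI_BYTE, peer,
                 0, MPI_COMM_WORLD);                 /* step 2: ordinary MPI send */

        cudaFreeHost(hostmem);
    }

    /* The receiving side reverses the order: MPI_Recv into host memory,
     * then copy host -> device. */
    void recv_gpu_buffer(void *devicemem, size_t size, int peer)
    {
        void *hostmem = NULL;
        cudaMallocHost(&hostmem, size);

        MPI_Recv(hostmem, (int)size, MPI_BYTE, peer,
                 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        cudaMemcpy(devicemem, hostmem, size,
                   cudaMemcpyHostToDevice);

        cudaFreeHost(hostmem);
    }

Note that the plain cudaMemcpy here blocks until the device-to-host copy has finished, so the host buffer is complete before MPI_Send touches it.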
You might be interested in this presentation, which explains CUDA-aware MPI and gives a side-by-side example of non-CUDA MPI and CUDA-aware MPI on slide 11.
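For contrast, a rough sketch of the CUDA-aware case (assuming your MPI library was built with CUDA support, which is not a given on every cluster):

    /* With a CUDA-aware MPI build, the device pointer goes straight to MPI;
     * the library does the host staging internally (or uses GPUDirect where
     * available). devicemem, size, and peer are placeholder names. */
    MPI_Send(devicemem, (int)size, MPI_BYTE, peer, 0, MPI_COMM_WORLD);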
