Question

On an Nvidia GPU, we can have multiple kernels running concurrently by using streams. How about the Xeon Phi? If I offload two parts of the computation code from different host threads, will they run concurrently on the Xeon Phi?


Solution

Yes, you can have concurrent offload executions on the Xeon Phi: up to 64 by default.

See the --max-connections parameter of the Coprocessor Offload Infrastructure (COI) daemon running on the Xeon Phi (/bin/coi_daemon):

  --max-connections=<int>  The maximum number of connections we allow from host
                           processes. If this is exceeded, new connections
                           are temporarily blocked. Defaults to 64.
Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow