Question

I was thinking of off-loading some math operations to the GPU. As I'm already using D3D11, I'd use a compute shader to do the work. But the thing is, I need the results to be the same for the same input, no matter what GPU the user might have (the only requirement is that it supports Compute Shader 4.0).

So is floating point math deterministic on GPUs?

If not, do GPUs support integer math?


Solution

I haven't used DirectCompute, only OpenCL.


GPUs definitely support integer math, with both 32-bit and 64-bit integer types. Several existing questions already discuss this.

Basically, on modern GPUs, 32-bit integer operations are comparable in throughput to 32-bit float operations.


As for deterministic results, it depends on your code. For example, if multiple threads perform atomic operations on the same memory, and other threads then read that memory and branch or compute based on the value they see, the results can differ from run to run: the order in which the atomics complete is not guaranteed.

From personal experience, I needed to generate random numbers but also required consistent results. So basically I had a largish array of seeds, one for each thread, and each generator was completely independent of the others. Random number generators that share state through atomic operations and barriers would not have been deterministic.

The other half of having deterministic results is getting the same result on different hardware. With integer operations you should be fairly safe. With floating-point operations in OpenCL, avoiding the -cl-fast-relaxed-math build option and the native_* variants of the built-in math functions increases your chances of getting the same results on different hardware.

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow