Problem

I've been working on a physics simulation that requires generating a large number of random numbers (at least 10^13, to give you an idea). I've been using the C++11 implementation of the Mersenne Twister. I've also read that GPU implementations of this same algorithm are now part of the CUDA libraries and that GPUs can be extremely efficient at this task, but I couldn't find explicit numbers or a benchmark comparison. For example, compared to an 8-core i7, are recent-generation NVIDIA cards faster at generating random numbers? If so, by how much, and in which price range?

I'm thinking that my simulation could benefit from having the GPU generate a huge pile of random numbers while the CPU does the rest.
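For context, this is roughly the shape of the CPU-side generation loop I mean (a minimal sketch using std::mt19937_64 from <random>; the seed and batch size are placeholders, not my actual values):

```cpp
#include <cstdint>
#include <random>
#include <vector>

int main() {
    // Baseline: fill a buffer with 64-bit Mersenne Twister output on the CPU.
    std::mt19937_64 gen(12345);                 // placeholder seed
    std::vector<std::uint64_t> buffer(1 << 20); // ~1M numbers per batch, placeholder

    for (auto& x : buffer)
        x = gen();                              // raw 64-bit output

    return 0;
}
```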


Solution

Some comparisons can be found here: https://developer.nvidia.com/cuRAND
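If you want to try it yourself, here is a minimal sketch of the cuRAND host API (assuming a CUDA toolkit with cuRAND installed; the generator type, seed, and buffer size below are arbitrary illustrative choices, not recommendations):

```cpp
#include <cuda_runtime.h>
#include <curand.h>

int main() {
    const size_t n = 1 << 24;          // 16M floats per batch, placeholder
    float* d_buf = nullptr;
    cudaMalloc(&d_buf, n * sizeof(float));

    // MTGP32 is the GPU-adapted Mersenne Twister variant shipped with cuRAND.
    curandGenerator_t gen;
    curandCreateGenerator(&gen, CURAND_RNG_PSEUDO_MTGP32);
    curandSetPseudoRandomGeneratorSeed(gen, 1234ULL);

    // Fill the device buffer with uniform floats in (0, 1].
    curandGenerateUniform(gen, d_buf, n);
    cudaDeviceSynchronize();

    // The numbers can now be consumed directly by a kernel,
    // or copied back to the host with cudaMemcpy.

    curandDestroyGenerator(gen);
    cudaFree(d_buf);
    return 0;
}
```

Link with -lcurand. Note that keeping the data on the device matters at the scale you mention: copying 10^13 numbers back over PCIe would likely dominate the cost of generating them.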

Other tips

If you have a recent enough Intel CPU (Ivy Bridge or newer), you can use the RDRAND instruction.

This can be used via the _rdrand16_step(), _rdrand32_step() and _rdrand64_step() intrinsic functions.

These are available in VS2012/13, the Intel compiler, and GCC.

The generator is seeded from a true hardware entropy source and is designed for NIST SP 800-90A compliance, so the quality of its randomness is very high.
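A minimal usage sketch (the retry loop is there because the intrinsic can transiently fail and return 0; on GCC compile with -mrdrnd):

```cpp
#include <cstdint>
#include <cstdio>
#include <immintrin.h>

// Retry until the hardware returns a value; _rdrand64_step() returns 1 on success.
static std::uint64_t rdrand64() {
    unsigned long long value;
    while (!_rdrand64_step(&value)) {
        // Transient underflow of the on-chip entropy buffer; just retry.
    }
    return value;
}

int main() {
    std::printf("%llu\n", static_cast<unsigned long long>(rdrand64()));
    return 0;
}
```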

Some numbers for reference:

On an Ivy Bridge dual-core laptop with HT (2.3 GHz), generating 2^32 (about 4.3 billion) random 32-bit numbers took 5.7 seconds single-threaded and 1.7 seconds with OpenMP.
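Roughly what the OpenMP variant of that measurement looks like (a sketch only, not the exact code that produced those numbers; it XORs the values into a checksum instead of storing 16 GB of output):

```cpp
#include <cstdint>
#include <cstdio>
#include <immintrin.h>

int main() {
    const std::int64_t total = 1LL << 32;  // 2^32 numbers, as in the test above
    unsigned int checksum = 0;             // keeps the work from being optimized away

    // Each thread pulls directly from RDRAND, so there is no shared generator state.
    #pragma omp parallel for reduction(^ : checksum)
    for (std::int64_t i = 0; i < total; ++i) {
        unsigned int value;
        while (!_rdrand32_step(&value)) { /* retry on transient failure */ }
        checksum ^= value;
    }

    std::printf("checksum: %u\n", checksum);
    return 0;
}
```

Build with something like g++ -O2 -fopenmp -mrdrnd; the single-threaded timing corresponds to running the same loop without OpenMP enabled.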

License: CC BY-SA with attribution