Question

It seems there are many options for parallelizing Python. I have seen the options below:

shared memory: threading, multiprocessing, joblib, cython.parallel (a minimal multiprocessing sketch follows this list)

distributed memory: mpi4py, Parallel Python (pp)
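To illustrate the kind of pattern I mean in the shared-memory category, here is a minimal multiprocessing sketch (the worker function is just a placeholder for real CPU-bound work):

    from multiprocessing import Pool

    def square(x):
        # placeholder for a CPU-bound computation
        return x * x

    if __name__ == "__main__":
        # 4 worker processes; map distributes the inputs across them
        with Pool(processes=4) as pool:
            results = pool.map(square, range(10))
        print(results)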

any CUDA or OpenCL options?

Does anyone have experience using these or other parallel libraries? How do they compare to each other? I am particularly interested in using Python for computation-intensive applications in the scientific computing field.


Solution 2

As far as I know, pyPar and pyMPI are the two most frequently used libraries for computation-intensive applications in the scientific field.

pyPar tends to be easier to use, while pyMPI is more feature-complete, so the former is used more often for less complex computations.

IIRC, both are just Python wrappers around the underlying C MPI libraries, which keeps the heavy lifting in compiled code and makes them among the most efficient options.
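I don't remember pyPar's exact calls, but the basic message-passing pattern looks roughly like this with mpi4py from the question's list (the scattered payloads and the per-rank computation are placeholders):

    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()  # this process's id
    size = comm.Get_size()  # total number of processes

    if rank == 0:
        # root prepares one work item per process
        data = [i * i for i in range(size)]
    else:
        data = None

    chunk = comm.scatter(data, root=0)  # each rank receives one element
    result = chunk + rank               # placeholder per-rank computation
    results = comm.gather(result, root=0)

    if rank == 0:
        print(results)

Launched with something like mpiexec -n 4 python script.py, every process runs the same file and coordinates through the communicator.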

Other tips

"any CUDA or OpenCL options?"

There is pyCUDA for CUDA, at any rate.
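To give a flavor of what the wrapper looks like, here is roughly pyCUDA's standard introductory example: an elementwise multiply whose kernel is compiled at runtime (the array size and launch configuration are illustrative):

    import numpy as np
    import pycuda.autoinit  # creates a CUDA context on import
    import pycuda.driver as drv
    from pycuda.compiler import SourceModule

    # compile a trivial elementwise kernel at runtime
    mod = SourceModule("""
    __global__ void multiply(float *dest, float *a, float *b)
    {
        int i = threadIdx.x + blockIdx.x * blockDim.x;
        dest[i] = a[i] * b[i];
    }
    """)
    multiply = mod.get_function("multiply")

    a = np.random.randn(400).astype(np.float32)
    b = np.random.randn(400).astype(np.float32)
    dest = np.zeros_like(a)

    # drv.In/drv.Out handle the host<->device copies automatically
    multiply(drv.Out(dest), drv.In(a), drv.In(b),
             block=(400, 1, 1), grid=(1, 1))
    assert np.allclose(dest, a * b)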

There is pyOpenCL also. (I'm less familiar with OpenCL; there may be others.)
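From what I've seen of its standard demo, the pyOpenCL version of the same elementwise multiply looks something like this (buffer flags and sizes are illustrative):

    import numpy as np
    import pyopencl as cl

    ctx = cl.create_some_context()  # picks an available OpenCL device
    queue = cl.CommandQueue(ctx)

    a = np.random.randn(400).astype(np.float32)
    b = np.random.randn(400).astype(np.float32)

    mf = cl.mem_flags
    a_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=a)
    b_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=b)
    dest_buf = cl.Buffer(ctx, mf.WRITE_ONLY, a.nbytes)

    prg = cl.Program(ctx, """
    __kernel void multiply(__global const float *a,
                           __global const float *b,
                           __global float *dest)
    {
        int i = get_global_id(0);
        dest[i] = a[i] * b[i];
    }
    """).build()

    # one work-item per array element
    prg.multiply(queue, a.shape, None, a_buf, b_buf, dest_buf)

    dest = np.empty_like(a)
    cl.enqueue_copy(queue, dest, dest_buf)  # copy the result back to the host
    assert np.allclose(dest, a * b)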

There are pyCUDA and pyOpenCL tags here on SO.

pyCUDA and pyOpenCL are basically "wrappers", AFAIK, but it's unclear what exactly you're looking for; your scope appears to be quite wide.

Licensed under: CC-BY-SA with attribution
Not affiliated with Stack Overflow