Question

There has been a significant shift towards data-parallel programming via systems like OpenCL and CUDA over the last few years, and yet books published even within the last six months never mention the topic of data-parallel programming.

It's not suitable for every problem, but it seems that there is a significant gap here that isn't being addressed.


Solution

First off, I'll point out that concurrent programming is not necessarily synonymous with parallel programming. Concurrent programming is about constructing applications from loosely-coupled tasks. For instance, a dialog window could have interactions with each control implemented as a separate task. Parallel programming, on the other hand, is explicitly about spreading the solution of a single computational task across more than one piece of execution hardware, essentially always for performance reasons of some sort (note: even too little RAM is a performance reason when the alternative is swapping).

So, I have to ask in return: What books are you referring to? Are they about concurrent programming (I have a few of these, there's a lot of interesting theory there), or about parallel programming?

If they really are about parallel programming, I'll make a few observations:

  • CUDA is a rapidly moving target, and has been since its release. A book written about it today would be half-obsolete by the time it made it into print.
  • OpenCL's standard was released just under a year ago. Stable implementations came out over the last eight months or so. There simply hasn't been enough time to get a book written yet, let alone revised and published.
  • OpenMP is covered in at least a few of the parallel programming textbooks that I've used. Up to version 2 (v3 was just released), it was essentially all about data-parallel programming.

OTHER TIPS

I think those working with parallel computing academically today usually come from the cluster computing field. OpenCL and CUDA use graphics processors, which have more or less inadvertently evolved into general-purpose processors along with the development of more advanced graphics rendering algorithms.

However, the graphics people and the high-performance computing people have been "discovering" each other for some time now, and a lot of research is being done on using GPUs for general-purpose computing.

"always" is a bit strong; there are resources out there (example) that include data parallelism topics.

The classic book "The Connection Machine" by Hillis was all data parallelism. It's one of my favorites.

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow