Question

According to Wikipedia, an "embarrassingly parallel" problem is one for which little or no effort is required to separate the problem into a number of parallel tasks. Raytracing is often cited as an example because each ray can, in principle, be processed in parallel.

Obviously, some problems are much harder to parallelize. Some may even be impossible. I'm wondering what terms are used and what the standard examples are for these harder cases.

Can I propose "Annoyingly Sequential" as a possible name?


Solution

Inherently sequential.

Example: adding more women will not reduce the length of a pregnancy.

OTHER TIPS

There's more than one opposite of an "embarrassingly parallel" problem.

Perfectly sequential

One opposite is a non-parallelizable problem, that is, a problem for which no speedup may be achieved by utilizing more than one processor. Several suggestions were already posted, but I'd propose yet another name: a perfectly sequential problem.

Examples: I/O-bound problems, "calculate f^1000000(x_0)" type of problems (a million-fold iterated application of f), calculating certain cryptographic hash functions.
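The iterated-function case is a minimal illustration of why no speedup is possible: each application of f consumes the previous result, so there is never anything to hand to a second processor. A rough sketch (the choice of f and the names here are purely illustrative):

```python
def f(x):
    # Any non-trivial function works; this one (the logistic map) is just illustrative.
    return 3.9 * x * (1.0 - x)

def iterate(f, x0, n):
    """Compute f^n(x0). Each step needs the previous result,
    so the loop cannot be split across processors."""
    x = x0
    for _ in range(n):
        x = f(x)
    return x

print(iterate(f, 0.5, 1_000_000))
```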

Communication-intensive

Another opposite is a parallelizable problem which requires a lot of parallel communication (a communication-intensive problem). An implementation of such a problem will scale properly only on a supercomputer with high-bandwidth, low-latency interconnect. Contrast this with embarrassingly parallel problems, implementations of which run efficiently even on systems with very poor interconnect (e.g. farms).

Notable example of a communication-intensive problem: solving A x = b, where A is a large, dense matrix. As a matter of fact, an implementation of exactly this problem (the LINPACK benchmark) is used to compile the TOP500 ranking. It's a good benchmark, as it emphasizes both the computational power of individual CPUs and the quality of the interconnect (due to the intensity of communication).

In more practical terms, any mathematical model which solves a system of partial differential equations on a regular grid using discrete time stepping (think: weather forecasting, in silico crash tests) is parallelizable by domain decomposition. That means each CPU takes care of a part of the grid, and at the end of each time step the CPUs exchange their results on region boundaries with "neighbour" CPUs. These exchanges render this class of problems communication-intensive.
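As a rough, single-process illustration of that halo-exchange pattern (this NumPy sketch just simulates two subdomains in one process; in a real MPI code each half would live on its own CPU and the boundary copies would be messages between neighbouring ranks):

```python
import numpy as np

# 1-D heat equation on a grid split into two subdomains, each with one
# "ghost" (halo) cell at the shared boundary.
nx, steps, alpha = 20, 100, 0.1
left = np.zeros(nx // 2 + 1)    # last entry is the ghost cell
right = np.zeros(nx // 2 + 1)   # first entry is the ghost cell
left[0] = 1.0                   # fixed boundary condition

for _ in range(steps):
    # halo exchange: copy boundary values into the neighbour's ghost cell
    left[-1] = right[1]
    right[0] = left[-2]
    # each subdomain then updates its interior points independently
    left[1:-1] += alpha * (left[:-2] - 2 * left[1:-1] + left[2:])
    right[1:-1] += alpha * (right[:-2] - 2 * right[1:-1] + right[2:])

print(left[:-1], right[1:])
```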

I'm having a hard time not posting this... because I know it doesn't add anything to the discussion... but for all the South Park fans out there:

"Super serial!"

"Stubbornly serial"?

The opposite of embarrassingly parallel is captured by Amdahl's Law, which says that some portion of a task cannot be parallelized, and that the minimum time even a perfectly parallelized task will require is dictated by that purely sequential portion.
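A quick back-of-the-envelope sketch of that limit (the 95% parallel fraction is just an illustrative number):

```python
def amdahl_speedup(parallel_fraction, n_processors):
    """Maximum speedup predicted by Amdahl's Law."""
    serial_fraction = 1.0 - parallel_fraction
    return 1.0 / (serial_fraction + parallel_fraction / n_processors)

# Even with 95% of the work parallelizable, speedup is capped at 20x.
for n in (1, 2, 8, 64, 1024):
    print(n, round(amdahl_speedup(0.95, n), 2))
```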

"standard examples" of sequential processes:

  • making a baby: “Crash programs fail because they are based on the theory that, with nine women pregnant, you can get a baby a month.” -- attributed to Wernher von Braun
  • calculating pi, e, sqrt(2), and other irrational numbers to millions of digits: most algorithms are sequential
  • navigation: to get from point A to point Z, you must first go through some intermediate points B, C, D, etc.
  • Newton's method: you need each approximation in order to calculate the next, better approximation
  • challenge-response authentication
  • key strengthening
  • hash chains (see the sketch after this list)
  • Hashcash
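Hash chains (and key strengthening, which iterates a hash in the same spirit) are a nice concrete case, because the sequential dependency is the entire point of the construction. A minimal sketch using Python's hashlib:

```python
import hashlib

def hash_chain(seed: bytes, length: int) -> bytes:
    """Each link is the SHA-256 hash of the previous one, so the chain can
    only be built one step at a time -- that sequential cost is the feature."""
    value = seed
    for _ in range(length):
        value = hashlib.sha256(value).digest()
    return value

print(hash_chain(b"secret", 100_000).hex())
```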

P-complete problems are believed to be inherently sequential, but that's not known for sure yet (it would require proving NC ≠ P).

I use "Humiliatingly Sequential"

Paul

"Gladdengly Sequential"

It all has to do with data dependencies. Embarrassingly parallel problems are ones for which the solution is made up of many independent parts. Problems with the opposite of this nature would be ones that have massive data dependencies, where there is little to nothing that can be done in parallel. Degeneratively dependent?

The term I've heard most often is "tightly-coupled", in that each process must interact and communicate often in order to share intermediate data. Basically, each process depends on others to complete their computation.

For example, matrix processing often involves sharing boundary values at the edges of each array partition.

This is in contrast to embarrassingly parallel (or loosely-coupled) problems where each part of the problem is completely self-contained, and no (or very little) IPC is needed. Think master/worker parallelism.
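To make the contrast concrete, here's a sketch of that loosely-coupled master/worker style using Python's multiprocessing module (the task itself is a made-up placeholder): the workers only ever talk to the master, never to each other, which is exactly what lets the problem scale with almost no effort.

```python
from multiprocessing import Pool

def work(item):
    # Completely self-contained per-item computation: no IPC between workers.
    return item * item

if __name__ == "__main__":
    with Pool() as pool:                      # the master hands out independent tasks
        results = pool.map(work, range(100))  # workers never communicate with each other
    print(sum(results))
```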

Boastfully sequential.

I've always preferred 'sadly sequential', à la the partition step in quicksort.

If one should ever need a name for natural, incorrigibly sequential problems, try

blissfully sequential

to counter 'embarrassingly parallel'.

"Completely serial?"

It shouldn't really surprise you that scientists think more about what can be done than what cannot be done. Especially in this case, where the alternative to parallelizing is doing everything as one normally would.

Completely non-parallelizable? Pessimally parallelizable?

The opposite is "disconcertingly serial".

Taking into account that parallelism is the act of doing many jobs in the same time step t, the opposite could be time-sequential problems.

An example of an inherently sequential problem. This is common in CAD packages and some kinds of engineering analysis.

Tree traversal with data dependencies between nodes.

Imagine traversing a tree and adding up node weights, where each node's value depends on the result already computed at its parent.

You just can't parallelise it.

CAD software represents parts as a tree, and to render the object you have to traverse that tree. For this reason, CAD workstations use fewer, faster cores rather than many cores.
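An illustrative sketch of that kind of dependent traversal (the node structure and names here are made up for illustration, not taken from any particular CAD package): each node's value depends on the result already computed at its parent, so nodes along a path must be processed strictly in order.

```python
class Node:
    def __init__(self, weight, children=()):
        self.weight = weight
        self.children = list(children)

def accumulate(node, parent_value=0):
    """Each node's value depends on its parent's accumulated result,
    so the nodes along any path must be visited strictly in order."""
    value = parent_value + node.weight
    results = [value]
    for child in node.children:
        results.extend(accumulate(child, value))
    return results

tree = Node(1, [Node(2, [Node(4)]), Node(3)])
print(accumulate(tree))  # [1, 3, 7, 4]
```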

Thanks for reading.

You could, of course, but I think both 'names' are a non-issue. From a functional programming perspective you could say that the 'annoyingly sequential' part is the smallest more or less independent part of an algorithm.

Meanwhile, leaving an 'embarrassingly parallel' part out of a parallel approach is arguably bad coding practice.

Thus I don't see the point in giving these things a name if good coding practice is always to break up your solution into independent pieces, even if at that moment you don't take advantage of parallelism.

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow