Question

The CPU architecture landscape has changed; multi-core processors are a trend that will change how we have to develop software. I've done multi-threaded development in C, C++, and Java, and multi-process development using various IPC mechanisms. Traditional thread-based approaches don't seem to make it easy for the developer to utilize hardware that supports a high degree of concurrency.

What languages, libraries, and development techniques are you aware of that help alleviate the traditional challenges of creating concurrent applications? I'm obviously thinking of issues like deadlocks and race conditions. Design techniques, libraries, tools, etc. that help you actually take advantage of the available resources are also interesting: just writing a safe, robust threaded application doesn't ensure that it's using all the available cores.

What I've seen so far is:

  • Erlang: process-based, message-passing IPC, the 'actors' model of concurrency
  • Dramatis: actors model library for Ruby and Python
  • Scala: functional programming language for the JVM with some added concurrency support
  • Clojure: functional programming language for the JVM with an actors library
  • Termite: a port of Erlang's process approach and message passing to Scheme

What else do you know about, what has worked for you and what do you think is interesting to watch?

Solution

I'd suggest two paradigm shifts:

Software Transactional Memory

You may want to take a look at the concept of Software Transactional Memory (STM). The idea is to use optimistic concurrency: any operation that runs in parallel with others tries to complete its job in an isolated transaction; if, at some point, another transaction has been committed that invalidates the data this transaction is working on, the transaction's work is thrown away and the transaction is run again.

I think the first widely known implementation of the idea (if not the proof of concept and first one outright) is the one in Haskell: see the papers and presentations about transactional memory in Haskell. Many other implementations are listed in Wikipedia's STM article.
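
To make the optimistic-concurrency idea concrete, here is a toy sketch in Python (my own illustration, not a real STM and not tied to any of the implementations above): a shared cell carries a version number; a transaction reads the value and version, computes on the side, and commits only if nobody else committed in the meantime, otherwise it retries.

import threading

class VersionedCell:
    """A shared value plus a version counter (toy example, not a real STM)."""
    def __init__(self, value):
        self._value = value
        self._version = 0
        self._lock = threading.Lock()  # only guards the commit step itself

    def read(self):
        with self._lock:
            return self._value, self._version

    def try_commit(self, new_value, expected_version):
        # The commit succeeds only if nobody else committed since we read.
        with self._lock:
            if self._version != expected_version:
                return False
            self._value = new_value
            self._version += 1
            return True

def atomically(cell, update):
    """Optimistically apply `update`, retrying whenever another commit got in first."""
    while True:
        value, version = cell.read()
        if cell.try_commit(update(value), version):
            return

counter = VersionedCell(0)
threads = [threading.Thread(target=atomically, args=(counter, lambda v: v + 1))
           for _ in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter.read())  # (8, 8): every increment survived despite the conflicts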

Event loops and promises

Another very different way of dealing with concurrency is implemented in the E programming language (http://en.wikipedia.org/wiki/E_(programming_language)).

Note that its way of dealing with concurrency, as well as other parts of the language design, is heavily based on the Actor model.

OTHER TIPS

You mentioned Java, but only in the context of threads. Have you looked at Java's concurrency library (java.util.concurrent)? It comes bundled with Java 5 and above.

It's a very nice library containing thread pools and copy-on-write collections, to name just a few of its features. Check out the documentation at the Java Tutorial, or, if you prefer, the Java docs.

I've used processing for Python. It mimics the API of the threading module and is thus quite easy to use.

If you happen to use map/imap or a generator/list comprehension, converting your code to use processing is straightforward:

def do_something(x):
    return x**(x*x)

results = [do_something(n) for n in range(10000)]

can be parallelized with

import processing
pool = processing.Pool(processing.cpuCount())
results = pool.map(do_something, range(10000))

which will use however many processors you have to calculate the results. There are also lazy (Pool.imap) and asynchronous variants (Pool.map_async).

There is also a Queue class that implements the Queue.Queue interface, and workers that are similar to threads.
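
To illustrate the lazy and asynchronous variants, here is a minimal sketch using the standard library's multiprocessing module (the name processing eventually shipped under); the older processing API should look almost identical.

from multiprocessing import Pool, cpu_count

def do_something(x):
    return x ** (x * x)

if __name__ == "__main__":
    with Pool(cpu_count()) as pool:
        # Lazy variant: results are yielded one at a time, in order,
        # as the workers finish them.
        lazy_results = list(pool.imap(do_something, range(100)))

        # Asynchronous variant: map_async returns a handle immediately;
        # call .get() later, when the results are actually needed.
        handle = pool.map_async(do_something, range(100))
        async_results = handle.get()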

Gotchas

processing is based on fork(), which has to be emulated on Windows. Objects are transferred via pickle/unpickle, so you have to make sure that this works. Forking a process that has already acquired resources might not be what you want (think of database connections), but in general it works. It works so well that it has been added to Python 2.6 on the fast track (cf. PEP 371).
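
To make the Windows caveat concrete, here is a minimal multiprocessing sketch: the worker lives at module level so it can be pickled, and the pool is created under the __main__ guard because child processes re-import the module rather than forking.

from multiprocessing import Pool

# Workers must be defined at module level so they can be pickled and sent
# to the child processes; lambdas and nested functions will not work.
def square(x):
    return x * x

if __name__ == "__main__":
    # On Windows there is no fork(): each child re-imports this module,
    # so the Pool must only be created inside this guard, otherwise the
    # children would try to spawn pools of their own.
    with Pool() as pool:
        print(pool.map(square, range(10)))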

Intel's Threading Building Blocks for C++ looks very interesting to me. It offers a much higher level of abstraction than raw threads. O'Reilly has a very nice book if you like dead-tree documentation. See also: Any experiences with Intel’s Threading Building Blocks?

I would say:

  • Models: threads + shared state, actors + message passing, transactional memory, map/reduce?
  • Languages: Erlang, Io, Scala, Clojure, Reia
  • Libraries: Retlang, Jetlang, Kilim, Cilk++, fork/join, MPI, Kamaelia, Terracotta

I maintain a concurrency link blog about stuff like this (Erlang, Scala, Java threading, actor model, etc) and put up a couple links a day:

http://concurrency.tumblr.com

I've been doing concurrent programming in Ada for nearly 20 years now.

The language itself (not some tacked-on library) supports threading ("tasks"), multiple scheduling models, and multiple synchronization paradigms. You can even build your own synchronization schemes using the built-in primitives.

You can think of Ada's rendezvous as a procedure-oriented synchronization facility, while protected objects are more object-oriented. Rendezvous are similar to the old CS concept of monitors, but much more powerful. Protected objects are special types with synchronization primitives that allow you to build things exactly like OS locks, semaphores, events, etc. However, the mechanism is powerful enough that you can also invent and create your own kinds of synchronization objects, depending on your exact needs.

The question What parallel programming model do you recommend today to take advantage of the manycore processors of tomorrow? has already been asked. I gave the following answer there too.

Kamaelia is a Python framework for building applications with lots of communicating processes.

Kamaelia - Concurrency made useful, fun

In Kamaelia you build systems from simple components that talk to each other. This speeds development, massively aids maintenance and also means you build naturally concurrent software. It's intended to be accessible by any developer, including novices. It also makes it fun :)

What sort of systems? Network servers, clients, desktop applications, pygame based games, transcode systems and pipelines, digital TV systems, spam eradicators, teaching tools, and a fair amount more :)
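
To give a flavour of the component style, here is a rough sketch written from memory of the Kamaelia tutorials; the module paths (Axon.Component, Kamaelia.Chassis.Pipeline) and the inbox/outbox details are assumptions worth checking against the Kamaelia docs.

from Axon.Component import component
from Kamaelia.Chassis.Pipeline import Pipeline

class Counter(component):
    """Sends the numbers 0..4 out of its standard 'outbox'."""
    def main(self):
        for i in range(5):
            self.send(i, "outbox")
            yield 1  # hand control back to the scheduler

class Printer(component):
    """Prints whatever arrives on its standard 'inbox'."""
    def main(self):
        while True:
            while self.dataReady("inbox"):
                print(self.recv("inbox"))
            yield 1

# Pipeline wires Counter's outbox to Printer's inbox and runs both concurrently.
Pipeline(Counter(), Printer()).run()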

Here's a video from PyCon 2009. It starts by comparing Kamaelia to Twisted and Parallel Python and then gives a hands-on demonstration of Kamaelia.

Easy Concurrency with Kamaelia - Part 1 (59:08)
Easy Concurrency with Kamaelia - Part 2 (18:15)

I am keeping a close eye on Parallel Extensions for .NET and Parallel LINQ.

I know of Reia, a language that is based on Erlang but looks more like Python/Ruby.

OpenMP.

It handles the threads for you, so you only have to worry about which parts of your C++ application you want to run in parallel.

e.g.

#pragma omp parallel for
for (int i = 0; i < SIZE; i++)
{
    // do something with an element
}

The above code will run the for loop on as many threads as you've told the OpenMP runtime to use, so if SIZE is 100 and you have a quad-core box, each core will handle 25 iterations.

There are a few other parallel extensions for various languages, but the ones I'm most interested in are the ones that run on your graphics card. That's real parallel processing :) (examples: GPU++ and libSh)

C++0x will provide std::lock functions for locking more than one mutex together. This will help alleviate deadlock due to out-of-order locking. Also, the C++0x thread library will have promises, futures and packaged tasks, which allow a thread to wait for the result of an operation performed on another thread without any user-level locks.
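
For comparison, Python's standard library grew a similar future abstraction in concurrent.futures; here is a minimal sketch (my own illustration, not part of the original answer) of one thread waiting on another's result without any user-level locks.

from concurrent.futures import ThreadPoolExecutor

def expensive(x):
    return x * x

with ThreadPoolExecutor() as pool:
    future = pool.submit(expensive, 21)  # runs on a worker thread
    # ... do other work here ...
    print(future.result())  # blocks until the value is ready, no explicit locks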

multiprocessing is a Python library that simplifies multi-core programming, as mentioned in another answer.

Programs written with Python's multiprocessing can easily be modified to ship work off to the cloud instead of to local cores. piCloud takes advantage of that to provide large, on-demand processing power in the cloud: you just need to modify 2 lines of your code.

So, here is the take-away: when selecting a library for multi-core, one may want to ask whether a cloud approach would also make sense.

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow