Can you avoid locking by guaranteeing that multiple threads won't access the same memory?

StackOverflow https://stackoverflow.com/questions/20548528

Question

Say I have a large array and I want to process the contents with multiple threads. If I delegate each thread to a specific section, guaranteeing no overlap, does that eliminate any need for locking, assuming the threads don't access any other memory outside the array?

Something like this (pseudo-code, written out here as C++11):

#include <thread>

int array[9000000];

struct Chunk { int start; int end; };

void do_something(Chunk chunk) {
    for (int i = chunk.start; i < chunk.end; i++) {
        // do something with array[i]
        array[i] = i;  // placeholder operation
    }
}

int main() {
    Chunk chunk1 = {0, 5000000};
    Chunk chunk2 = {5000000, 9000000};

    std::thread thread1(do_something, chunk1);
    std::thread thread2(do_something, chunk2);

    thread1.join();
    thread2.join();
    // do something else with the altered array
    return 0;
}

Solution

In a conforming C++11 compiler this is safe [intro.memory] (§1.7):

A memory location is either an object of scalar type or a maximal sequence of adjacent bit-fields all having non-zero width. [...] Two threads of execution (1.10) can update and access separate memory locations without interfering with each other.

C11 gives the same guarantee (it even uses the same wording) in §3.14.

In a C++03 compiler this is not guaranteed to work by the standard, but it might still work if the compiler provides similar guarantees as an extension.
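To illustrate the quoted definition, here is a minimal compilable sketch (the Record struct and its members are invented for this example): a and b are separate memory locations and may be written by different threads without locking, while the two adjacent non-zero-width bit-fields form a single memory location, so unsynchronised concurrent writes to them would be a data race.

#include <thread>

struct Record {
    int a;            // its own memory location
    int b;            // its own memory location
    unsigned f1 : 4;  // f1 and f2 are adjacent bit-fields:
    unsigned f2 : 4;  // together they form ONE memory location
};

Record r;

int main() {
    // Fine: a and b are distinct memory locations.
    std::thread t1([] { r.a = 1; });
    std::thread t2([] { r.b = 2; });
    t1.join();
    t2.join();

    // NOT fine without synchronisation: f1 and f2 share a memory location,
    // so concurrent writes to them would be a data race.
    // std::thread t3([] { r.f1 = 3; });
    // std::thread t4([] { r.f2 = 4; });
    return 0;
}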

Other tips

Yes: if you can guarantee that no two threads will access the same element, then there's no need for any further synchronisation.

There is only a conflict (and therefore a potential data race) if two threads access the same memory location (with at least one of them modifying it) without synchronisation.

(NOTE: this answer is based on the C++11 memory model. I've just noticed that you're also asking about a second language; I believe that C11 specifies a very similar memory model, but can't say for sure that the answer is also valid for C. For older versions of both languages, thread-safety was implementation-dependent.)
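As a concrete illustration of that definition (the counters and loop bounds are made up): two threads incrementing a plain int modify the same memory location without synchronisation, which is a data race, so that variant is left commented out; making the counter a std::atomic synchronises the accesses and removes the race.

#include <atomic>
#include <thread>

int plain_counter = 0;                 // ordinary int: concurrent writes would race
std::atomic<int> atomic_counter{0};    // atomic int: concurrent writes are fine

void bump_plain()  { for (int i = 0; i < 100000; ++i) ++plain_counter; }
void bump_atomic() { for (int i = 0; i < 100000; ++i) ++atomic_counter; }

int main() {
    // Data race (undefined behaviour): both threads would modify the same
    // memory location without synchronisation.
    // std::thread a(bump_plain), b(bump_plain); a.join(); b.join();

    // No data race: accesses to atomic_counter are atomic operations.
    std::thread c(bump_atomic), d(bump_atomic);
    c.join();
    d.join();
    return 0;
}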

Yes, you can indeed.

TCMalloc is a good example: it gives each thread its own local cache, so most allocations and deallocations touch only thread-private memory and need no locking.

Yes.

You do not even need to guarantee that no two threads access the same memory location. All you need to guarantee is that no single thread modifies any location that another one accesses (regardless of whether that means reading or writing).

Given either no concurrent access at all or read-only concurrent access, you're good to go without locking.
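A minimal sketch of that situation, with invented names and sizes: every thread reads the same shared table, but each one writes only its own slot of the result array, so no location that one thread modifies is ever accessed by another, and no locking is needed.

#include <thread>
#include <vector>

const std::vector<int> table(1000, 7);   // shared data, read-only while threads run
int partial[4];                          // partial[t] is written only by thread t

void sum_all(int t) {
    int s = 0;
    for (int v : table)                  // concurrent reads: no conflict
        s += v;
    partial[t] = s;                      // a distinct memory location per thread
}

int main() {
    std::vector<std::thread> workers;
    for (int t = 0; t < 4; ++t)
        workers.emplace_back(sum_all, t);
    for (auto& w : workers)
        w.join();
    // combine partial[0..3] here, after all threads have joined
    return 0;
}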

Important performance issue!

Right, you don't need explicit locking since, as others have said, no memory location is shared.

But you may still trigger implicit hardware synchronization: arbitrarily chosen chunk boundaries are likely to make different threads write to the same cache line (not much of a risk with the figures in your example, though). This is known as false sharing.

A higher-level approach such as OpenMP resolves these kinds of issues (a minimal sketch follows the list):

  • chunk alignment (and the number of threads) is tuned to the underlying hardware;
  • it gives better control over the thread pool, e.g. amortizing the cost of thread creation;
  • it is easier to write, and actually less intrusive.
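For instance, a rough OpenMP equivalent of the original loop might look like this; the loop body is a placeholder and the directive takes effect only when OpenMP is enabled (e.g. -fopenmp with GCC/Clang):

int array[9000000];

int main() {
    // OpenMP picks the number of threads and splits the iteration range
    // to suit the hardware, reusing its thread pool across parallel regions.
    #pragma omp parallel for
    for (int i = 0; i < 9000000; i++)
        array[i] = i;   // placeholder: do something with array[i]

    // Implicit barrier here: every iteration has finished, so the code
    // below sees the fully updated array.
    // do something else with the altered array
    return 0;
}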
Licensed under: CC-BY-SA with attribution