Question

I would like to implement, in C, a producer/consumer communication mechanism based on shared memory. It replaces a stream-socket communication between a client and a remote server. Nodes in the network share a pool of memory to communicate with each other. The server writes data (produces) into a memory region and the client should read it (consumes). My software currently uses one thread for reading (client side) and one thread for writing (server side). The threads reside on different machines (distributed).

What is the best and fastest way to implement mutual exclusion for access to the shared memory region? (The memory is external to both machines and only referenced by them.) The server should atomically produce data (write) only if the client is not reading; the client should atomically consume data (read) only if the server is not writing.

It is clear I need a pthread-mutex-like mechanism. In that case, threads wait to be unlocked by the local kernel. Would a pthread implementation also work in this distributed scenario (lock variable placed in shared memory, with the PTHREAD_PROCESS_SHARED attribute set)?

Alternatively, how can I implement a fast and reliable mutex that makes the client thread and the server thread access the shared region in turn, ensuring data consistency?

Solution

The short answer is: you can use the pthread mutex mechanism as long as your pthreads implementation knows about your particular shared memory system. Otherwise you'll need to look to the specific hardware/operating system for help.

The long answer is going to be somewhat general, because the question does not provide many details about the exact implementation of distributed shared memory being used. I will try to explain what is possible, but how to do it will be implementation-dependent.

As @Rod suggests, a producer-consumer system can be implemented with one or more mutex locks, and the question is how to implement a mutex.

A mutex can be considered an object with two states {LOCKED, UNLOCKED} and two atomic operations:

  • Lock: if state is LOCKED, block until UNLOCKED. Set state to LOCKED and return.
  • Unlock: set state to UNLOCKED and return.
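For the producer/consumer pattern in the question, the usage looks roughly like the sketch below. This is a deliberately minimal, single-machine illustration using an ordinary pthread mutex and a made-up shared_region struct with a ready flag; in the distributed case the structure is the same, but the mutex itself has to live in the shared region and be implementable there, which is what the rest of this answer is about.

    #include <pthread.h>
    #include <string.h>

    /* Hypothetical shared region: one message slot plus a "ready" flag. */
    struct shared_region {
        pthread_mutex_t lock;
        int             ready;       /* 1 = data waiting to be consumed */
        char            data[128];
    };

    static struct shared_region region = {
        .lock = PTHREAD_MUTEX_INITIALIZER, .ready = 0
    };

    /* Server side: produce only if the previous message was consumed. */
    void produce(const char *msg)
    {
        pthread_mutex_lock(&region.lock);
        if (!region.ready) {
            strncpy(region.data, msg, sizeof(region.data) - 1);
            region.ready = 1;
        }
        pthread_mutex_unlock(&region.lock);
    }

    /* Client side: consume only if the server has produced something. */
    int consume(char *out, size_t len)
    {
        int got = 0;
        pthread_mutex_lock(&region.lock);
        if (region.ready) {
            strncpy(out, region.data, len - 1);
            out[len - 1] = '\0';
            region.ready = 0;
            got = 1;
        }
        pthread_mutex_unlock(&region.lock);
        return got;
    }

The server only overwrites the slot once the client has cleared the ready flag, so the two sides access the region strictly in turn.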

Often mutexes are provided by the operating system kernel by implementing these operations on an abstract mutex object. For example, some variants of Unix implement mutexes and semaphores as operations on file descriptors. On those systems, pthreads would make use of the kernel facilities.
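For processes on the same machine sharing ordinary POSIX shared memory, this is exactly what the PTHREAD_PROCESS_SHARED attribute mentioned in the question is for: the mutex object is placed inside the shared segment, and every process that maps the segment can lock it. A minimal sketch follows (the segment name /prodcons_shm is made up for illustration; whether a distributed shared memory system can support this depends entirely on that system and its pthreads port):

    #include <fcntl.h>
    #include <pthread.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <unistd.h>

    struct shared_region {
        pthread_mutex_t lock;
        char            data[128];
    };

    int main(void)
    {
        /* Create (or open) the shared segment; the name is illustrative only. */
        int fd = shm_open("/prodcons_shm", O_CREAT | O_RDWR, 0600);
        if (fd < 0 || ftruncate(fd, sizeof(struct shared_region)) < 0) {
            perror("shm_open/ftruncate");
            return 1;
        }

        struct shared_region *r = mmap(NULL, sizeof(struct shared_region),
                                       PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
        if (r == MAP_FAILED) {
            perror("mmap");
            return 1;
        }

        /* Initialise the mutex, marked as usable across processes. */
        pthread_mutexattr_t attr;
        pthread_mutexattr_init(&attr);
        pthread_mutexattr_setpshared(&attr, PTHREAD_PROCESS_SHARED);
        pthread_mutex_init(&r->lock, &attr);
        pthread_mutexattr_destroy(&attr);

        /* Any process that maps the same segment can now lock/unlock r->lock. */
        pthread_mutex_lock(&r->lock);
        /* ... read or write r->data ... */
        pthread_mutex_unlock(&r->lock);
        return 0;
    }

In a real program only one side should perform the initialisation; compile with -pthread (older glibc versions also need -lrt for shm_open).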

The advantage of this approach is that user-space programs don't have to care how it is implemented. The disadvantage is that each operation requires a call into the kernel and can therefore be relatively slow compared to the next option:

A mutex can also be implemented as a memory location (let's say 1 byte long) that stores either the value 0 or 1 to indicate UNLOCKED and LOCKED. It can be accessed with standard memory read/write instructions. We can use the following (hypothetical) atomic operations to implement Lock and Unlock:

  1. Compare-and-set: if the memory location has the value 0, set it to the value 1, otherwise fail.
  2. Conditional-wait: block until the memory location has the value 0.
  3. Atomic write: set the memory location to the value 0.

Generally speaking, #1 and #3 are implemented using special CPU instructions and #2 requires some kernel support. This is pretty much how pthread_mutex_lock is implemented.
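As a concrete illustration, here is a minimal user-space sketch of those three operations using C11 atomics, with busy-waiting standing in for the conditional-wait (a real implementation would ask the kernel to put the waiter to sleep instead of spinning):

    #include <stdatomic.h>

    /* 0 = UNLOCKED, 1 = LOCKED */
    typedef atomic_int mutex_t;

    void mutex_lock(mutex_t *m)
    {
        for (;;) {
            int expected = 0;
            /* #1 compare-and-set: 0 -> 1, or fail if already locked */
            if (atomic_compare_exchange_weak(m, &expected, 1))
                return;
            /* #2 conditional-wait: here we just spin until it looks unlocked */
            while (atomic_load(m) != 0)
                ;
        }
    }

    void mutex_unlock(mutex_t *m)
    {
        /* #3 atomic write: back to UNLOCKED */
        atomic_store(m, 0);
    }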

This approach provides a speed advantage because a kernel call is necessary only when the mutex is contended (someone else has the lock).
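On Linux, for example, this is what the futex system call provides: the lock word lives in ordinary (possibly shared) memory, the uncontended lock and unlock are single atomic instructions in user space, and the kernel is only asked to put a thread to sleep or wake one up under contention. Below is a rough, Linux-specific sketch of that idea, following the well-known three-state design (0 = unlocked, 1 = locked, 2 = locked with waiters); it is illustrative, not production code:

    #include <linux/futex.h>
    #include <stdatomic.h>
    #include <sys/syscall.h>
    #include <unistd.h>

    static void futex_wait(atomic_int *addr, int val)
    {
        syscall(SYS_futex, addr, FUTEX_WAIT, val, NULL, NULL, 0);
    }

    static void futex_wake_one(atomic_int *addr)
    {
        syscall(SYS_futex, addr, FUTEX_WAKE, 1, NULL, NULL, 0);
    }

    void mutex_lock(atomic_int *m)
    {
        int c = 0;
        /* Fast path: uncontended, no kernel call at all. */
        if (atomic_compare_exchange_strong(m, &c, 1))
            return;
        /* Slow path: mark the lock as contended and sleep in the kernel. */
        if (c != 2)
            c = atomic_exchange(m, 2);
        while (c != 0) {
            futex_wait(m, 2);
            c = atomic_exchange(m, 2);
        }
    }

    void mutex_unlock(atomic_int *m)
    {
        /* Fast path: nobody was waiting, again no kernel call. */
        if (atomic_exchange(m, 0) == 2)
            futex_wake_one(m);
    }

The third state exists so that unlock only issues the relatively expensive FUTEX_WAKE call when there is actually someone to wake.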

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow