Before answering your questions directly, here's a description of the typical way to do this:
You have a queue or list that you add your work data onto. Whenever you add a set of work data, you first lock a mutex, add the data, signal your condition variable, then unlock the mutex.
Your worker threads then lock the mutex, and wait on the condition in a loop while the queue is empty. When the signal is sent, one or more workers will wake up, but only one (at a time) will grab the mutex. With the mutex locked, the "winner" checks if something is in the queue, extracts it, unlocks the mutex, and does the necessary work. After the mutex is unlocked, other threads may also wake up (and will, if the condition was broadcast), and will either extract the next piece of work from the queue, or go back to waiting if the queue is empty.
In code, it looks a bit like this:
#include <pthread.h>
#include <unistd.h>
#include <stdio.h>
#include <stdint.h>

#define WORKER_COUNT 3

pthread_mutex_t mutex;
pthread_cond_t cond;
pthread_t workers[WORKER_COUNT];
static int queueSize = 0;

static void *workerFunc(void *arg)
{
    int workerNum = (int)(intptr_t)arg; /* cast via intptr_t to avoid pointer-truncation warnings */
    printf("Starting worker %d\n", workerNum);
    while(1) {
        pthread_mutex_lock(&mutex);
        while(queueSize < 1) {
            pthread_cond_wait(&cond, &mutex);
        }
        printf("Worker %d woke up, processing queue #%d\n", workerNum, queueSize);
        //Extract work from queue
        --queueSize;
        pthread_mutex_unlock(&mutex);
        //Do work
        sleep(1);
    }
    return NULL; /* never reached */
}

int main()
{
    int i;
    pthread_mutex_init(&mutex, 0);
    pthread_cond_init(&cond, 0);
    for(i=0; i<WORKER_COUNT; ++i) {
        pthread_create(&(workers[i]), 0, workerFunc, (void*)(intptr_t)(i+1));
    }
    sleep(1);
    pthread_mutex_lock(&mutex);
    //Add work to queue
    queueSize = 5;
    pthread_cond_broadcast(&cond);
    pthread_mutex_unlock(&mutex);
    sleep(10);
    return 0;
}
(I've left out cleaning up after the threads, and the passing of the worker number to the thread is quick and dirty, but works in this case).
Here, the workers will be woken up by the pthread_cond_broadcast(), and will run as long as there's something in the queue (until queueSize is back to 0 - imagine that there's an actual queue as well), then go back to waiting.
Back to the questions:
1: The mutex and the guard variable (here it's queueSize) take care of this. You also need the guard variable because your thread may be woken up for other reasons as well (so-called spurious wakeups, see http://linux.die.net/man/3/pthread_cond_wait).
2: The woken threads contend over the mutex just as any other threads would when they call pthread_mutex_lock().
3: I'm not sure why you'd need to signal the amount of available worker threads back to the producer?
4: The queue needs to be accessible from both your producer and consumer - but can still be encapsulated with functions (or classes if you're using C++) in various ways.
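To make point 4 concrete, here's a sketch of what such an encapsulated queue could look like in C. All the names here (work_queue, wq_init, wq_push, wq_pop) and the fixed-size ring buffer are my own choices for illustration, not anything from your code:

```c
#include <pthread.h>
#include <stddef.h>

/* Illustrative encapsulated queue: producer and consumer only see
   wq_push()/wq_pop(); the mutex and condition live inside the struct. */
typedef struct {
    int items[64];                /* fixed-size ring buffer, kept small for the sketch */
    size_t head, tail, count;
    pthread_mutex_t mutex;
    pthread_cond_t not_empty;
} work_queue;

void wq_init(work_queue *q) {
    q->head = q->tail = q->count = 0;
    pthread_mutex_init(&q->mutex, NULL);
    pthread_cond_init(&q->not_empty, NULL);
}

void wq_push(work_queue *q, int item) {
    pthread_mutex_lock(&q->mutex);
    q->items[q->tail] = item;
    q->tail = (q->tail + 1) % 64;
    q->count++;
    pthread_cond_signal(&q->not_empty); /* wake one waiting worker */
    pthread_mutex_unlock(&q->mutex);
}

int wq_pop(work_queue *q) {             /* blocks until an item is available */
    pthread_mutex_lock(&q->mutex);
    while (q->count == 0)               /* guard against spurious wakeups */
        pthread_cond_wait(&q->not_empty, &q->mutex);
    int item = q->items[q->head];
    q->head = (q->head + 1) % 64;
    q->count--;
    pthread_mutex_unlock(&q->mutex);
    return item;
}
```

With this, the producer just calls wq_push() and the workers call wq_pop(); neither touches the mutex or condition directly.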
5: I hope the above is enough?
6: The thing with pthread_cond_wait() is that it can have spurious wakeups. That is, it might wake up even though you did not signal the condition. You therefore need a guard variable (the while() loop around the pthread_cond_wait() in my code example), to make sure that there actually is a reason to wake up once pthread_cond_wait() returns. You then protect the guard variable (and whatever work data you need to extract) with the same mutex as the condition uses, and then you can be certain that only one thread will do each piece of work.
7: Instead of having the producer go to sleep, I'd just let it add whatever data it can extract to the workqueue. If the queue is full, then it should go to sleep, otherwise it should just keep on adding stuff.
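The usual way to make the producer sleep on a full queue is a second condition variable, so producers wait on "not full" and consumers wait on "not empty". A minimal sketch - the bound CAPACITY and all names here are illustrative, not from your code:

```c
#include <pthread.h>

#define CAPACITY 8  /* illustrative bound on the queue size */

static int buffer[CAPACITY];
static int count = 0, head = 0, tail = 0;
static pthread_mutex_t mutex     = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  not_full  = PTHREAD_COND_INITIALIZER;
static pthread_cond_t  not_empty = PTHREAD_COND_INITIALIZER;

void produce(int item) {
    pthread_mutex_lock(&mutex);
    while (count == CAPACITY)              /* queue full: sleep until a consumer makes room */
        pthread_cond_wait(&not_full, &mutex);
    buffer[tail] = item;
    tail = (tail + 1) % CAPACITY;
    count++;
    pthread_cond_signal(&not_empty);       /* wake one waiting worker */
    pthread_mutex_unlock(&mutex);
}

int consume(void) {
    pthread_mutex_lock(&mutex);
    while (count == 0)                     /* queue empty: sleep until a producer adds work */
        pthread_cond_wait(&not_empty, &mutex);
    int item = buffer[head];
    head = (head + 1) % CAPACITY;
    count--;
    pthread_cond_signal(&not_full);        /* wake a producer blocked on a full queue */
    pthread_mutex_unlock(&mutex);
    return item;
}
```

Note that both conditions share the one mutex; only the condition variables differ, so producers and consumers never wake each other by mistake.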
8: With your Listener thread, I can't really see why you even need your Producer thread. Why not let the Workers call extract_element() themselves?
9: You need to protect all accesses to the list variables. That is, in insertion(), lock the mutex just before you first access front, and unlock it after your last access of rear. Do the same in extract_element() - although you'll need to rewrite that function so it also has a valid return value when the queue is empty.
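Here's one way that rewrite could look. I don't know your actual node layout, so the structs below are my guess at a typical linked-list queue with front and rear pointers; the key points are that every front/rear access sits inside the lock, and that extract_element() reports emptiness through a bool status instead of an invalid value:

```c
#include <pthread.h>
#include <stdbool.h>
#include <stdlib.h>

/* Hypothetical node/queue layout; only insertion(), extract_element(),
   front and rear are names from your code. */
typedef struct node {
    int value;
    struct node *next;
} node;

typedef struct {
    node *front, *rear;
    pthread_mutex_t mutex;
} queue;

void insertion(queue *q, int value) {
    node *n = malloc(sizeof *n);
    n->value = value;
    n->next = NULL;
    pthread_mutex_lock(&q->mutex);   /* lock before the first access to front/rear */
    if (q->rear)
        q->rear->next = n;
    else
        q->front = n;
    q->rear = n;
    pthread_mutex_unlock(&q->mutex); /* unlock after the last access of rear */
}

/* Returns true and writes the value to *out, or returns false if the
   queue was empty - so an empty queue is no longer ambiguous. */
bool extract_element(queue *q, int *out) {
    bool ok = false;
    pthread_mutex_lock(&q->mutex);
    if (q->front != NULL) {
        node *n = q->front;
        *out = n->value;
        q->front = n->next;
        if (q->front == NULL)        /* queue became empty: rear must not dangle */
            q->rear = NULL;
        free(n);
        ok = true;
    }
    pthread_mutex_unlock(&q->mutex);
    return ok;
}
```

The caller then checks the return value instead of inspecting the extracted element, so "empty" never collides with a legitimate stored value.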