Question

I am reading the Thread Synchronization section of the book Advanced Programming in the UNIX Environment.

In this section, there is an example of using a mutex with a dynamically allocated object. I have some doubts about it.

Here is a timeline of events (top to bottom) to explain my doubts:

  1. Thread1 is created.
  2. Thread1 creates a mutex variable, initializes it, and puts it on a global list so that other threads can use it.
  3. Thread1 now acquires the lock to use a shared data structure, say ds. Thread1 needs to do a very large amount of work with ds, i.e. Thread1 is going to hold this lock for a long time.
  4. While Thread1 still holds the lock, Thread2 is created.
  5. Thread2 also wants to use ds.
  6. So Thread2 first has to increment the counter showing that the number of references to ds has increased. To do so (according to the book) it must first acquire the lock on the same mutex_t variable before incrementing the count (see the sketch after this list).
  7. But since Thread1 already holds the lock on this mutex_t variable, when Thread2 calls lock() before incrementing the count, it will have to wait until Thread1 unlocks it.
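
For reference, the object in the book's example looks roughly like this (a sketch, not a verbatim copy from the book, so field names may differ): each object carries its own mutex and reference count, and foo_hold() takes that mutex just to bump the count.

#include <stdlib.h>
#include <pthread.h>

struct foo {
    int             f_count;    /* reference count, protected by f_lock */
    pthread_mutex_t f_lock;     /* guards f_count */
    /* ... the shared data (ds) itself ... */
};

void foo_hold(struct foo *fp)   /* add a reference to the object */
{
    pthread_mutex_lock(&fp->f_lock);
    fp->f_count++;
    pthread_mutex_unlock(&fp->f_lock);
}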

Doubts:

  1. Which global list is he talking about (does he mean just creating any list and passing a reference to it to all the threads, or some specific list)?
  2. When Thread1 created the lock variable it set the count to 1. Then Thread2 is waiting to increment this count to 2. But suppose a situation in which, after finishing its current work, Thread1 no longer needs ds. Does it decrease the count before unlocking, or does it first unlock and then call foo_rele() to lock again and decrease the count? Is it then possible that Thread1 decrements the count before Thread2 gets to increment it? If yes, then (as I see it) my data structure will be destroyed. So I think there is a slight error in this example in the book. Would it be better to use a different mutex variable to protect the count?

Solution

A. I think that by the term "global list" the author means all variables that are shared between the threads.

Example:

#include <stdlib.h>     /* for malloc() */

struct foo* shared_foo; /* This pointer is shared between all threads */

struct foo* foo_alloc(void)
{
    /* This pointer is local to the thread which allocates the memory */
    struct foo *fp;

    if ((fp = malloc(sizeof(struct foo))) != NULL) {
        /* whatever */
    }
    /* local pointer value returned */
    return(fp);
}

/* probably somewhere in the code the shared pointer (on the 'global list') is initialized this way */
shared_foo = foo_alloc();
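
Once that shared pointer is set, any other thread can reach the object through it. A hypothetical second thread (thread2_start is an invented name, for illustration only) would just take its own reference before touching the object and drop it afterwards, using foo_hold()/foo_rele() from the book's pattern:

void *thread2_start(void *arg)          /* hypothetical thread routine */
{
    foo_hold(shared_foo);               /* bump the reference count first */
    /* ... work with the shared object ... */
    foo_rele(shared_foo);               /* drop the reference when done */
    return NULL;
}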

B. Hmm... I don't really understand what you are saying. Could you please write your scenario as a list? In my opinion f_count is set during initialization as a flag meaning 'this mutex is in use'. So when the mutex is free, the f_count value is 1. When Thread1 acquires the lock, its value is set to 2. When it releases the lock, the value is set back to 1. Valid f_count values are: 1 (initialized and free) and 2 (initialized and busy). In order to release the mutex you simply have to call foo_rele twice when it is taken (f_count = 2), or once when it is free (f_count = 1). Then the f_count value reaches 0 and the mutex is removed.
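
For reference, the release side of the book's pattern looks roughly like the sketch below (again not a verbatim copy): the count is decremented under the same per-object mutex, and the object is torn down only when the count reaches 0.

void foo_rele(struct foo *fp)           /* release a reference to the object */
{
    pthread_mutex_lock(&fp->f_lock);
    if (--fp->f_count == 0) {           /* last reference: destroy the object */
        pthread_mutex_unlock(&fp->f_lock);
        pthread_mutex_destroy(&fp->f_lock);
        free(fp);
    } else {
        pthread_mutex_unlock(&fp->f_lock);
    }
}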

OTHER TIPS

Right, so example 11.10 is all about the content WITHIN the structure foo. Each structure has a lock, so for thread 1 to operate on the object, thread 1 needs to hold the mutex within the object.

The example given is incomplete, and I can understand your confusion. foo_rele is not supposed to be called until the thread no longer wants that object. If another thread wants to use foo, it is supposed to call foo_hold() to increment the reference count (fp->count++). And yes, there is a race condition where thread 2 may WANT to get it, and thread 1 is releasing it.

This is definitely not unusual in multithreaded programming: another thread may well delete what you want to work on in your thread, unless the code is specifically written to avoid that. The avoidance would, for example, include a lock for the list of objects; if my thread holds the list lock, the other thread(s) can't add or remove things from the list (and shouldn't be searching the list either, as I may be in the middle of adding or removing something, and there is no guarantee that the list is consistent).
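
A hypothetical sketch of that avoidance (list_lock, foo_find, f_id and f_next are invented names, not from the book): the list itself has its own mutex, and a lookup takes a reference on the object it finds before releasing the list lock, so the object cannot disappear underneath it.

pthread_mutex_t list_lock = PTHREAD_MUTEX_INITIALIZER;
struct foo *foo_list;                   /* head of the global list of objects */

struct foo *foo_find(int id)            /* hypothetical lookup by id */
{
    struct foo *fp;

    pthread_mutex_lock(&list_lock);     /* nobody can add/remove while we search */
    for (fp = foo_list; fp != NULL; fp = fp->f_next) {
        if (fp->f_id == id) {
            foo_hold(fp);               /* take a reference before dropping list_lock */
            break;
        }
    }
    pthread_mutex_unlock(&list_lock);
    return fp;                          /* NULL if not found */
}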

I hope this helps.

  1. Threads share memory, so any global variable is visible to all the threads within the same process (passing the pointer to each thread is therefore unnecessary).

  2. When using mutexes, you must count on the fact that the threads may lock the mutex in any order (POSIX does not guarantee any specific order). So it is entirely possible that Thread 1 creates, uses and destroys the structure before any other thread acquires the mutex.

P.S. I understand your doubts. What I'm really missing in the code snippet is the other mutex that actually prevents simultaneous writes to the structure.
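
Such a mutex could be added as a hypothetical extension of the structure (f_data_lock and foo_set_value are invented names): every writer takes it around the actual data, while f_lock keeps guarding only the reference count.

struct foo {
    int             f_count;            /* reference count, guarded by f_lock */
    pthread_mutex_t f_lock;             /* guards f_count only */
    pthread_mutex_t f_data_lock;        /* hypothetical: guards the payload below */
    int             f_value;            /* the shared data itself */
};

void foo_set_value(struct foo *fp, int v)   /* hypothetical writer */
{
    pthread_mutex_lock(&fp->f_data_lock);
    fp->f_value = v;
    pthread_mutex_unlock(&fp->f_data_lock);
}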

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow