Question

We have a multithreaded application. In the current implementation, thread1 is created at startup and periodically (every second or so, configurable) wakes up to check the disk for newly saved files. These files are written by another thread, thread2. Keeping thread1 running and waking it up periodically may slow down the application.
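For illustration, the current design looks roughly like this (the function and variable names here are made up):

#include <atomic>
#include <chrono>
#include <thread>

std::atomic<bool> running{true};
const int pollIntervalSeconds = 1;  // configurable in the real application

void checkDiskForSavedFiles()
{
    // scan the directory that thread2 writes to and process any new files
}

// thread1 - wakes up periodically even when there is nothing to do
void thread1Polling()
{
    while (running)
    {
        checkDiskForSavedFiles();
        std::this_thread::sleep_for(std::chrono::seconds(pollIntervalSeconds));
    }
}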

Now we have the option of using a boost::condition variable to keep thread1 blocked until thread2 notifies it. To do so, a flag has to be introduced to avoid unnecessary notifications from thread2, and that flag must be synchronized and checked very frequently (hundreds of times within a few seconds) by thread2; otherwise thread1 will be notified on every write.

My questions here are the following:

  1. With boost::condition, doesn't thread1 still need to wake up frequently to check a flag, the only difference being that this polling is hidden inside the implementation? Am I right? Do the equivalent APIs in Windows and Java do the same thing?

  2. What happens if a thread is notified many times while it is not in a waiting state?

  3. In my case, will switching to a boost::condition implementation improve the overall performance? My opinion is no.


Solution

  1. On POSIX and Win32, boost::condition is implemented using event-based APIs. Technically, the thread does not wake up until it receives an event, so there is no hidden polling loop.
  2. If the thread goes into the wait after the signal has already been sent, the signal is lost. You should read about event-based patterns and strategies for implementing "producer/consumer"; your file write/read scenario is a classic producer/consumer instance. To avoid losing signals, implement it along the lines of the C++11 example on Wikipedia: http://en.wikipedia.org/wiki/Producer%E2%80%93consumer_problem#Example_in_C.2B.2B

The idea is that thread1 always holds the shared mutex whenever it is not waiting on the condition:

#include <boost/thread/mutex.hpp>
#include <boost/thread/locks.hpp>
#include <boost/thread/condition_variable.hpp>

boost::mutex sharedMutex;
boost::condition_variable sharedCond;

//thread1 - consumer
void thread1() {
    boost::unique_lock<boost::mutex> lock(sharedMutex);
    // shared mutex locked, no notifications can be sent now
    while(1) {
        // check for files written by thread2
        sharedCond.wait(lock); // unlocks the shared mutex while waiting, so notifications can be sent; relocks it on wakeup
    }
}

//thread2 - producer
void thread2() {
    boost::unique_lock<boost::mutex> lock(sharedMutex); // will block here until thread1 is waiting
    // write files
    sharedCond.notify_one();
}

3. Performance: this change is not really about performance but about switching from a polling model to an event model. If thread1 was only waking up once per second, moving to the event model will not noticeably reduce CPU or I/O load (it merely removes one file check per second), unless you are running on an embedded system where the CPU runs at a few kHz and an I/O operation blocks the entire process. What it does improve is thread1's reaction time: with polling, the maximum response time to a file change is one second; with the event model, the reaction is immediate. On the other hand, thread2's performance might degrade in the event model: before, it never waited for anything, whereas with a condition variable it has to lock the shared mutex, which may be held for as long as thread1 is reading the files.
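One way to limit that cost is to keep thread2's critical section as small as possible: do the slow file write outside the lock and take the mutex only long enough to record that a new file exists. The sketch below is illustrative, not part of the original answer; the names (m, cond, fileReady, producer) are made up:

#include <mutex>
#include <condition_variable>

std::mutex m;
std::condition_variable cond;
bool fileReady = false;

// thread2 - producer with a minimal critical section
void producer()
{
    // ... write the file here, outside the lock, so thread1 is never blocked by the slow write ...
    {
        std::lock_guard<std::mutex> lk(m); // lock held only long enough to update the flag
        fileReady = true;
    }
    cond.notify_one(); // notify after releasing the lock, so the woken thread can acquire it immediately
}

The consumer then waits on cond with a predicate that checks fileReady, holding the mutex only while it inspects and resets the flag, much like the code under OTHER TIPS below.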

OTHER TIPS

Checking a flag at high frequency is exactly what boost::condition (or std::condition_variable, used below) allows you to avoid. thread1() simply waits until the flag is set:

#include <mutex>
#include <condition_variable>
#include <thread>

std::mutex mut;
bool flag = false;  // set when a new file has been saved and not yet processed
std::condition_variable data_cond;

void thread2()
{
    //here, writing to a file
    std::lock_guard<std::mutex> lk(mut);
    flag = true;  //a new file is saved
    data_cond.notify_one();
}

void thread1()
{
    while(true)
    {
        std::unique_lock<std::mutex> lk(mut);
        data_cond.wait(lk,[]{return flag;});  // blocks until notified and flag is true; the predicate guards against spurious wakeups
        //here, processing the file
        flag = false;
        lk.unlock();
    }
}

This is C++11 code based on Listing 4.1 in C++ Concurrency in Action, Chapter 4: Synchronizing Concurrent Operations.
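For completeness, a minimal driver for the code above (this main() is illustrative and assumes the functions are in the same translation unit; since thread1 loops forever in this sketch, the program does not terminate on its own):

int main()
{
    std::thread consumer(thread1);  // waits for the flag and processes files
    std::thread producer(thread2);  // saves one file, sets the flag, and notifies

    producer.join();
    consumer.join();  // never returns in this sketch, because thread1 loops forever
    return 0;
}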

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow