Question

I recently learned that compilers will optimize your code by rearranging instructions, and that this can be controlled by using barriers.

IIRC, locking a mutex creates a barrier, and unlocking a mutex creates another one, to keep code inside the critical section from being moved out.

So pthread_mutex_lock and pthread_mutex_unlock must implicitly be these "barriers". What if I have a class like this, which wraps my mutex?

class IMutex {
public:
    virtual ~IMutex() = default;
    virtual void lock() = 0;
    virtual void unlock() = 0;
};

It seems to me that the compiler won't know I'm calling pthread_mutex_lock() inside lock() and pthread_mutex_unlock() inside unlock(), because it's all virtual'd away.
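For concreteness, something like this (the class name PthreadMutex is just illustrative):

#include <pthread.h>

// Illustrative concrete implementation of the IMutex interface above,
// forwarding to the pthread mutex functions.
class PthreadMutex : public IMutex {
public:
    PthreadMutex()  { pthread_mutex_init(&m_, nullptr); }
    ~PthreadMutex() { pthread_mutex_destroy(&m_); }

    void lock() override   { pthread_mutex_lock(&m_); }
    void unlock() override { pthread_mutex_unlock(&m_); }

private:
    pthread_mutex_t m_;
};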

Will this lead to bugs? Do I need to manually specify barriers somehow?


Solution

Reordering happens at several levels. The most obvious one is the compiler; a less obvious one is the CPU, which reorders instructions on the fly. However, synchronization functions almost always act as fences, which prevent instructions before the fence from being reordered with instructions after it.

So if your virtual lock() and unlock() call pthread_mutex_lock() and pthread_mutex_unlock(), your virtual functions contain a fence. The virtual dispatch does not change that: the compiler cannot see into the call, so it must assume the callee may read or write any shared memory and will not move your accesses across it, and the fence inside the pthread calls takes care of reordering by the CPU.

So the short answer is: No, it will not lead to bugs.
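For example (a minimal sketch, with made-up names):

long sharedCounter = 0;  // data protected by the mutex

void increment(IMutex& m) {
    m.lock();            // opaque call: the compiler won't hoist the access above it
    ++sharedCounter;     // stays inside the critical section
    m.unlock();          // opaque call, plus the fence inside pthread_mutex_unlock()
}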

There is also the volatile keyword, which, depending on the platform, can also generate a fence. However, using volatile makes those fences a lot harder to spot, since every use of a volatile function or variable introduces one. So the advice is to use the synchronization functions of the platform.

The only time you need to be aware of fences is when you are not using concurrency objects to perform synchronization (like using a bool instead of a mutex).
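For illustration, here is a sketch of that case using C++11 std::atomic (not mentioned in the original answer): a plain bool gives you neither a compiler nor a CPU fence, whereas an atomic with acquire/release ordering does.

#include <atomic>

int payload = 0;
std::atomic<bool> ready{false};  // a plain 'bool ready' here would be a data race

void producer() {
    payload = 42;
    ready.store(true, std::memory_order_release);      // release: publishes payload
}

void consumer() {
    while (!ready.load(std::memory_order_acquire)) {}  // acquire: pairs with the release
    // payload is guaranteed to be 42 here
}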

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow