How does the compiler know not to optimize statements from inside lock/unlock? (using boost::spinlock in C++)
-
10-12-2021
Question
A class with two overloaded operator() functions is called from separate threads; see the // comments in the code below. Does the optimizer know not to move entryadd = mSpread * ENTRY_MULTIPLIER above the lock()?
struct Algo1
{
    boost::detail::spinlock mSpreadLock;
    double mSpread;    // shared between the two operator() overloads

    Algo1() : mSpreadLock(), mSpread(0.0) {}

    // called from thread 1
    inline void operator()(const indata &signal)
    {
        if ( signal.action() == SEND )
        {
            double entryadd;
            mSpreadLock.lock();
            entryadd = mSpread * ENTRY_MULTIPLIER; // isn't it possible for the compiler to move this above the lock?
            mSpreadLock.unlock();
            FunctionCall(entryadd);
        }
    }

    // called from thread 2
    inline void operator()(const indata2 &bospread)
    {
        boost::detail::spinlock::scoped_lock mylock(mSpreadLock);
        mSpread = bospread.spread();
    }
};
What about this?
{
    mSpreadLock.lock();
    double entryadd = mSpread * ENTRY_MULTIPLIER;
    mSpreadLock.unlock();
}
Would the definition of entryadd be moved to the top of the function? Unless I'm missing something, it seems that lock() and unlock() within a code block will not work; I must use a scoped lock, boost::detail::spinlock::scoped_lock mylock(mSpreadLock), which holds the lock for the duration of the function call.
Of course I can just hack it like this (but it is less efficient):
inline void operator()(const indata &signal)
{
    if ( signal.action() == SEND )
    {
        double entryadd;
        {
            boost::detail::spinlock::scoped_lock mylock(mSpreadLock);
            entryadd = mSpread * ENTRY_MULTIPLIER;
        }
        FunctionCall(entryadd);
    }
}
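The inner-scope idiom in the question also exists in the standard library. A minimal sketch using std::mutex and std::lock_guard instead of boost::detail::spinlock::scoped_lock; the members mSpread, the read/write method names, and the ENTRY_MULTIPLIER value here are stand-ins for the question's code, not taken from it:

```cpp
#include <mutex>

// Stand-in for the question's constant (assumed value for illustration).
constexpr double ENTRY_MULTIPLIER = 2.0;

struct Algo1 {
    std::mutex mSpreadLock;
    double mSpread = 0.0;

    // Copy the shared value inside a short critical section, then use the
    // copy outside it -- same shape as the scoped_lock workaround above.
    double read_entry() {
        double entryadd;
        {
            std::lock_guard<std::mutex> guard(mSpreadLock);  // locks here
            entryadd = mSpread * ENTRY_MULTIPLIER;
        }                                                    // unlocks here
        return entryadd;
    }

    void write_spread(double s) {
        std::lock_guard<std::mutex> guard(mSpreadLock);
        mSpread = s;
    }
};
```

The extra braces cost nothing at runtime; the guard's constructor and destructor compile to the same lock/unlock calls, so this is no less efficient than calling lock() and unlock() by hand.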
Solution
Locking operations ultimately compile down to compiler built-in functions that perform some form of atomic operation. The compiler knows that those operations must not be reordered and will not optimize "past" them. Your code is fine as written.
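As background for why the optimizer cannot hoist the load above lock(): a lock acquire is an atomic read-modify-write with acquire semantics (later memory operations may not move above it), and unlock is a store with release semantics (earlier operations may not move below it). A minimal spinlock sketch using std::atomic_flag -- illustrative only, not boost's actual implementation:

```cpp
#include <atomic>
#include <thread>

// Minimal spinlock (sketch). test_and_set with memory_order_acquire forbids
// hoisting later loads/stores above the lock; clear with memory_order_release
// forbids sinking earlier loads/stores below the unlock.
struct Spinlock {
    std::atomic_flag flag = ATOMIC_FLAG_INIT;
    void lock()   { while (flag.test_and_set(std::memory_order_acquire)) { /* spin */ } }
    void unlock() { flag.clear(std::memory_order_release); }
};

long counter = 0;
Spinlock counter_lock;

void add_n(int n) {
    for (int i = 0; i < n; ++i) {
        counter_lock.lock();
        ++counter;                 // protected: stays between lock/unlock
        counter_lock.unlock();
    }
}

long run_two_threads(int n) {
    counter = 0;
    std::thread t1(add_n, n), t2(add_n, n);
    t1.join();
    t2.join();
    return counter;
}
```

If both threads' increments survive without loss, the lock is doing its job; the acquire/release ordering is also exactly what stops the compiler (and CPU) from moving the mSpread read outside the critical section in the question's code.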
Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow