Mike Ash Singleton: Placing @synchronized
19-09-2019
Question
I came across this in Mike Ash's "Care and Feeding of Singletons" and was a little puzzled by his comment:
This code is kind of slow, though. Taking a lock is somewhat expensive. Making it more painful is the fact that the vast majority of the time, the lock is pointless. The lock is only needed when foo is nil, which basically only happens once. After the singleton is initialized, the need for the lock is gone, but the lock itself remains.
+ (id)sharedFoo {
    static Foo *foo = nil;
    @synchronized([Foo class]) {
        if (!foo) foo = [[self alloc] init];
    }
    return foo;
}
My question is (and there is no doubt a good reason for this): why can't you write it as below, to limit the lock to the case where foo is nil?
+ (id)sharedFoo {
    static Foo *foo = nil;
    if (!foo) {
        @synchronized([Foo class]) {
            foo = [[self alloc] init];
        }
    }
    return foo;
}
Cheers, Gary
Solution
Because then the test is subject to a race condition. Two different threads might independently test that foo is nil, and then (sequentially) create separate instances. This can happen in your modified version when one thread performs the test while the other is still inside +[Foo alloc] or -[Foo init] but has not yet set foo.
By the way, I wouldn't do it that way at all. Check out the dispatch_once() function, which lets you guarantee that a block is only ever executed once during your app's lifetime (assuming you have GCD on the platform you're targeting).
OTHER TIPS
This is called the double-checked locking "optimization". As documented everywhere, it is not safe. Even if it's not defeated by a compiler optimization, it will be defeated by the way memory works on modern machines, unless you use some kind of fence/barrier.
Mike Ash also shows the correct solution using volatile and OSMemoryBarrier().
The issue is that when one thread executes foo = [[self alloc] init]; there is no guarantee that when another thread sees foo != 0, all the memory writes performed by init are visible too.
Also see "DCL and C++" and "DCL and Java" for more details.
In your version, the check for !foo could be occurring on multiple threads at the same time, allowing two threads to jump into the alloc block, one waiting for the other to finish before allocating another instance.
You can optimize by only taking the lock if foo==nil, but after that you need to test again (within the @synchronized) to guard against race conditions.
+ (id)sharedFoo {
    static Foo *foo = nil;
    if (!foo) {
        @synchronized([Foo class]) {
            if (!foo) // test again, in case two threads got here at once
                foo = [[self alloc] init];
        }
    }
    return foo;
}
Best way, if you have Grand Central Dispatch:
+ (MySingleton *)instance {
    static dispatch_once_t _singletonPredicate;
    static MySingleton *_singleton = nil;
    dispatch_once(&_singletonPredicate, ^{
        _singleton = [[super allocWithZone:nil] init];
    });
    return _singleton;
}

+ (id)allocWithZone:(NSZone *)zone {
    return [self instance];
}