Question

// my_serial_queue is a serial dispatch queue

dispatch_async(my_serial_queue, ^{
    // access a shared resource such as a bank account balance
    [self changeBankAccountBalance];
});

If I submit 100 tasks that each access and mutate a bank account balance, I understand that a serial queue will execute each task sequentially, but do these tasks finish sequentially as well when using dispatch_async?

What if task #23 that I submit asynchronously to the serial queue takes a really long time to finish? Would task #24 start only when task #23 is done, or would task #24 begin before task #23 is done? If the latter, couldn't task #24 see the wrong bank account balance when it starts its job, thereby screwing up data integrity?

Thanks!!


Solution 2

man dispatch_queue_create says: “All memory writes performed by a block dispatched to a serial queue are guaranteed to be visible to subsequent blocks dispatched to the same queue.” Thus, serial queues are a good way to serialize access to mutable state and avoid race conditions.

do these tasks finish sequentially as well when using dispatch_async?

Yes. The queue dictates the execution policy, not how you queue the block.

In other words, if the queue is serial, queueing with async or sync doesn't change that. The only difference is: do I wait for this block to complete before continuing execution of the rest of the program? dispatch_async=no, dispatch_sync=yes.

What if task #23 that I submit asynchronously to the serial queue takes a really long time to finish?

Nothing changes. A serial queue always waits for the previously dequeued block (#23) to complete before dequeuing and executing the next block (#24). If stalling the queue is a concern, you should implement a timeout inside your block code.

OTHER TIPS

Yes, a dedicated serial queue is a wonderful way to synchronize access to some resource shared amongst multiple threads. And, yes, with a serial queue, each task will wait for the prior one to complete.

Two observations:

  1. While this sounds like a highly inefficient process, this is implicitly at the heart of any synchronization technique (whether queue-based or lock-based approach) whose goal is to minimize concurrent updates of a shared resource.

    But in many scenarios the serial queue technique can yield significantly better performance than other common techniques, such as a simple mutex lock, NSLock, or the @synchronized directive. For a discussion of the alternative synchronization techniques, see the Synchronization section of the Threading Programming Guide. For a discussion of using queues in lieu of locks, see "Eliminating Lock-Based Code" in the Migrating Away from Threads section of the Concurrency Programming Guide.

  2. A variation of the serial-queue pattern is the "reader-writer" pattern, in which you create a GCD concurrent queue:

    queue = dispatch_queue_create("identifier", DISPATCH_QUEUE_CONCURRENT);
    

    You then perform reads using dispatch_sync, but you perform writes using dispatch_barrier_async. The net effect is to permit concurrent read operations while ensuring that writes are never performed concurrently.

    If your resource permits concurrent reads, then the reader-writer pattern can offer further performance gain over that of a serial queue.

So, in short, while it seems inefficient to have task #24 wait for task #23, that is inherent in any synchronization technique where you strive to minimize concurrent updates of the shared resource. And GCD serial queues are a surprisingly efficient mechanism, often better than many simple locking mechanisms. The reader-writer pattern can, in some circumstances, offer even further performance improvements.


My original answer, below, was in response to the original question, which was confusingly titled "how does a serial dispatch queue guarantee concurrency?" In retrospect, this was just an accidental use of the wrong terms.


This is an interesting choice of words, "how does a serial dispatch queue guarantee concurrency?"

There are three types of queues: serial, concurrent, and the main queue. A serial queue will, as the name suggests, not start the next dispatched block until the prior one has finished. (Using your example, this means that if task 23 takes a long time, task 24 won't start until it's done.) Sometimes this is critical (e.g. if task 24 is dependent upon the results of task 23, or if both tasks 23 and 24 are trying to access the same shared resource).

If you want these various dispatched tasks to run concurrently with respect to each other, you use a concurrent queue (either one of the global concurrent queues that you get via dispatch_get_global_queue, or you can create your own concurrent queue using dispatch_queue_create with the DISPATCH_QUEUE_CONCURRENT option). In a concurrent queue, many of your dispatched tasks may run concurrently. Using concurrent queues requires some care (notably the synchronization of shared resources), but can yield significant performance benefits when implemented properly.

And as a compromise between these two approaches, you can use operation queues, which can be concurrent, but which also let you constrain how many of the operations on the queue run concurrently at any given time by setting maxConcurrentOperationCount. A typical scenario where you'll use this is when doing background network tasks, where you do not want more than, say, five concurrent network requests.

For more information, see the Concurrency Programming Guide.

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow