Question

When selecting which queue to run dispatch_async on, dispatch_get_global_queue is mentioned a lot. Is this one special background queue that delegates tasks to a certain thread? Is it almost a singleton?

So if I use that queue always for my dispatch_async calls, will that queue get full and have to wait for things to finish before another one can start, or can it assign other tasks to different threads?

I guess I'm a little confused because when I'm choosing the queue for an NSOperation, I can choose the queue for the main thread with [NSOperationQueue mainQueue], which seems synonymous with dispatch_get_main_queue, but I was under the impression that background queues for NSOperation had to be individually created instances of NSOperationQueue, yet GCD has a singleton for a background queue (dispatch_get_global_queue)?

Furthermore - silly question but wanted to make sure - if I put a task in a queue, the queue is assigned to one thread, right? If the task is big enough it won't split it up over multiple threads, will it?

Solution

When selecting which queue to run dispatch_async on, dispatch_get_global_queue is mentioned a lot. Is this one special background queue that delegates tasks to a certain thread?

A certain thread? No. dispatch_get_global_queue retrieves for you a global queue of the requested relative priority. All queues returned by dispatch_get_global_queue are concurrent, and may, at the system's discretion, dispatch work to many different threads. The mechanics of this are an implementation detail that is opaque to you as a consumer of the API.

In practice, and at the risk of oversimplifying, there is one global queue for each priority level, and at the time of this writing, in my experience, each of those will at any given time be dispatching work to somewhere between 0 and 64 threads.
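For concreteness, here is a minimal Objective-C sketch (my own illustration, assuming a Foundation command-line target; the loop count and logging are arbitrary) of fetching the per-priority global queues and submitting blocks to them:

    #import <Foundation/Foundation.h>

    int main(void) {
        @autoreleasepool {
            // The global queues are fetched, never created, and are shared system-wide.
            dispatch_queue_t highQ    = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_HIGH, 0);
            dispatch_queue_t defaultQ = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);
            dispatch_queue_t lowQ     = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_LOW, 0);

            dispatch_group_t group = dispatch_group_create();

            // Blocks submitted to the same global queue may run concurrently,
            // on whatever threads the system decides to use.
            for (int i = 0; i < 8; i++) {
                dispatch_group_async(group, defaultQ, ^{
                    NSLog(@"block %d on thread %@", i, [NSThread currentThread]);
                });
            }

            // The higher- and lower-priority queues work the same way; only scheduling differs.
            dispatch_group_async(group, highQ, ^{ NSLog(@"high-priority block"); });
            dispatch_group_async(group, lowQ,  ^{ NSLog(@"low-priority block"); });

            dispatch_group_wait(group, DISPATCH_TIME_FOREVER);
        }
        return 0;
    }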

Is it almost a singleton?

Strictly no, but you can think of them as singletons where there is one singleton per priority level.
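As a rough illustration of that (comparing dispatch_queue_t pointers is nothing you should rely on in production code, but it shows the idea), repeated calls with the same priority hand back the same queue:

    dispatch_queue_t a = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);
    dispatch_queue_t b = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);
    dispatch_queue_t c = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_BACKGROUND, 0);

    NSLog(@"same default queue? %d", a == b);           // expected: 1
    NSLog(@"different queue per priority? %d", a != c); // expected: 1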

So if I use that queue always for my dispatch_async calls, will that queue get full and have to wait for things to finish before another one can start, or can it assign other tasks to different threads?

It can get full. Practically speaking, if you are saturating one of the global concurrent queues (i.e., more than 64 tasks of the same priority in flight at the same time), you probably have a bad design. (See this answer for more details on queue width limits.)
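If you do find yourself needing to bound how much work is in flight at once, one common pattern (my own illustration, not something prescribed above; the limit of 8 and the loop count are arbitrary) is to gate submissions with a counting semaphore:

    dispatch_queue_t queue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);
    dispatch_semaphore_t gate = dispatch_semaphore_create(8); // at most 8 blocks in flight

    for (int i = 0; i < 1000; i++) {
        // Blocks the submitting thread whenever 8 tasks are already running.
        dispatch_semaphore_wait(gate, DISPATCH_TIME_FOREVER);
        dispatch_async(queue, ^{
            // ... do the work for item i ...
            dispatch_semaphore_signal(gate);
        });
    }

Note that dispatch_semaphore_wait blocks the thread doing the submitting, so you would not want to run a loop like this on the main thread.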

I guess I'm a little confused because when I'm choosing the queue for an NSOperation, I can choose the queue for the main thread with [NSOperationQueue mainQueue], which seems synonymous with dispatch_get_main_queue

They are not strictly synonymous. Although NSOperationQueue uses GCD under the hood, there are some important differences. For instance, in a single pass of the main run loop, only one operation enqueued to +[NSOperationQueue mainQueue] will be executed, whereas more than one block submitted to dispatch_get_main_queue might be executed on a single run loop pass. This probably doesn't matter to you, but they are not, strictly speaking, the same thing.
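For comparison, here are the two ways of getting a block onto the main thread; from the caller's point of view they usually behave the same, run-loop details aside:

    dispatch_async(dispatch_get_main_queue(), ^{
        // GCD: the main queue drains this block on the main thread.
        NSLog(@"GCD main queue, main thread? %d", [NSThread isMainThread]);
    });

    [[NSOperationQueue mainQueue] addOperationWithBlock:^{
        // NSOperationQueue: executed as an operation on the main thread,
        // at most one operation per pass of the main run loop.
        NSLog(@"NSOperationQueue mainQueue, main thread? %d", [NSThread isMainThread]);
    }];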

but I was under the impression background queues for NSOperation had to be individually created instances of NSOperationQueue, yet GCD has a singleton for a background queue (dispatch_get_global_queue)?

In short, yes. It sounds like you're conflating GCD and NSOperationQueue. NSOperationQueue is not just a "trivial wrapper" around GCD; it's its own thing. The fact that it's implemented on top of GCD should not really matter to you. NSOperationQueue is a task queue, with an explicitly settable width, that you can create instances of at will. You can make as many of them as you like. Ultimately, every instance of NSOperationQueue, when executing NSOperations, draws from the same pool of system resources as the rest of your process, including GCD, so yes, there are some interactions there, but they are opaque to you.
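As an illustration (the queue name and the limits below are made up), creating your own NSOperationQueue instances with explicit widths looks like this:

    // A queue you create and own; at most 4 operations run concurrently.
    NSOperationQueue *downloadQueue = [[NSOperationQueue alloc] init];
    downloadQueue.name = @"com.example.downloads";   // purely for debugging/identification
    downloadQueue.maxConcurrentOperationCount = 4;

    for (NSUInteger i = 0; i < 20; i++) {
        [downloadQueue addOperationWithBlock:^{
            // ... perform one unit of work ...
        }];
    }

    // A second, independent queue; a width of 1 makes it effectively serial.
    NSOperationQueue *parseQueue = [[NSOperationQueue alloc] init];
    parseQueue.maxConcurrentOperationCount = 1;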

Furthermore - silly question but wanted to make sure - if I put a task in a queue, the queue is assigned to one thread, right? If the task is big enough it won't split it up over multiple threads, will it?

A single task can only ever be executed on a single thread. There's no magical way for the system to split a monolithic task into subtasks; that's your job. With regard to your specific wording, the queue isn't "assigned to one thread"; the task is. The next task taken from the queue might be executed on a completely different thread.
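If you do want a large job spread across multiple threads, you have to chunk it yourself. One GCD-native way to do that is dispatch_apply, sketched here with an arbitrary chunk count:

    dispatch_queue_t queue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);
    size_t chunkCount = 8; // how you slice the work is entirely up to you

    // Runs the block once per chunk, potentially in parallel on the concurrent
    // global queue, and returns only when every iteration has finished.
    dispatch_apply(chunkCount, queue, ^(size_t chunk) {
        NSLog(@"processing chunk %zu on thread %@", chunk, [NSThread currentThread]);
    });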

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow