There are times when you want to perform tasks that have nothing to do with the UI, or tasks that interact with the UI but also involve other, time-consuming work. For instance, you might want to download an image and display it to the user after it is downloaded. The downloading process itself has absolutely nothing to do with the UI.
For any task that doesn’t involve the UI, you can use the global concurrent queues in GCD. These allow either synchronous or asynchronous execution. Synchronous execution means that the code dispatching the task waits for the task to finish before continuing: the dispatch_sync function does not return until the block you hand it has been executed. Asynchronous execution is the opposite: when you put a block object on a queue asynchronously, your program continues right away without waiting for the queue to execute the code. Concurrent queues, as their name implies, can run several tasks at once and normally do so on threads other than the main thread. (There is one exception to this: when a task is submitted to a concurrent or a serial queue using the dispatch_sync function, iOS will, if possible, run the task on the current thread, which might be the main thread, depending on where the code path is at the moment. This is an optimization that has been built into GCD, as we shall soon see.)
If you synchronously submit a task to a concurrent queue and, at the same time, another thread synchronously submits a task to a different concurrent queue, the two tasks will run concurrently with respect to each other, because they are on two different queues and are dispatched from two different threads. It’s important to understand this because sometimes, as we’ll see, you want to make sure task A finishes before task B starts. To ensure that, submit them synchronously, one after the other, to the same queue from the same thread, as in the sketch below.
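Here is a minimal sketch of that ordering guarantee (the log messages are purely illustrative). Because each dispatch_sync call returns only when its block has finished, the second block cannot begin until the first one is done:

dispatch_queue_t concurrentQueue =
    dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);

/* Task A: dispatch_sync does not return until this block has finished. */
dispatch_sync(concurrentQueue, ^{
    NSLog(@"Task A finished");
});

/* Task B: guaranteed to start only after task A has completed, because
   the previous dispatch_sync call blocked the current thread until then. */
dispatch_sync(concurrentQueue, ^{
    NSLog(@"Task B started after A finished");
});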
You can perform synchronous tasks on a dispatch queue using the
dispatch_sync
function. All you have to
do is to provide it with the handle of the queue that has to run the task
and a block of code to execute on that queue.
Let’s look at an example. It prints the integers 1 to 1000 twice, one complete sequence after the other. Because we are submitting the work synchronously, the calling thread waits for each sequence to finish before moving on. We can create a block object that does the counting for us and synchronously call the same block object twice:
void (^printFrom1To1000)(void) = ^{
    NSUInteger counter = 0;
    for (counter = 1; counter <= 1000; counter++){
        NSLog(@"Counter = %lu - Thread = %@",
              (unsigned long)counter,
              [NSThread currentThread]);
    }
};
Now let’s go and invoke this block object using GCD:
dispatch_queue_t concurrentQueue =
    dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);
dispatch_sync(concurrentQueue, printFrom1To1000);
dispatch_sync(concurrentQueue, printFrom1To1000);
If you run this code, you will probably notice the counting taking place on the main thread, even though you’ve asked a concurrent queue to execute the task. This is the optimization mentioned earlier: whenever possible, the dispatch_sync function runs the task on the current thread, the very thread you are on when you dispatch the task.
Here is what Apple says about it:
As an optimization, this function invokes the block on the current thread when possible.
--Grand Central Dispatch (GCD) Reference
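If you want to observe this for yourself, a quick check such as the following sketch (the log message is illustrative) reports which thread GCD actually chose for a block submitted with dispatch_sync from the main thread:

dispatch_queue_t concurrentQueue =
    dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);
dispatch_sync(concurrentQueue, ^{
    /* GCD may reuse the dispatching thread; report where we ended up. */
    NSLog(@"Running on the main thread? %@",
          [NSThread isMainThread] ? @"YES" : @"NO");
});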
To execute a C function instead of a block object, synchronously, on
a dispatch queue, use the dispatch_sync_f
function. Let’s simply translate
the code we’ve written for the printFrom1To1000
block object to its equivalent
C function, like so:
void printFrom1To1000(void *paramContext){
    NSUInteger counter = 0;
    for (counter = 1; counter <= 1000; counter++){
        NSLog(@"Counter = %lu - Thread = %@",
              (unsigned long)counter,
              [NSThread currentThread]);
    }
}
And now we can use the dispatch_sync_f
function to execute the printFrom1To1000
function on a concurrent queue,
as demonstrated here:
dispatch_queue_t concurrentQueue =
    dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);
dispatch_sync_f(concurrentQueue, NULL, printFrom1To1000);
dispatch_sync_f(concurrentQueue, NULL, printFrom1To1000);
The first parameter of the dispatch_get_global_queue
function specifies the
priority of the concurrent queue that GCD has to retrieve for the
programmer. The higher the priority, the more CPU timeslices will be
provided to the code getting executed on that queue. You can use any of
these values for the first parameter to the dispatch_get_global_queue
function:
DISPATCH_QUEUE_PRIORITY_LOW
Fewer timeslices will be applied to your task than to normal tasks.
DISPATCH_QUEUE_PRIORITY_DEFAULT
The default system priority for code execution will be applied to your task.
DISPATCH_QUEUE_PRIORITY_HIGH
More timeslices will be applied to your task than to normal tasks.
The second parameter of the dispatch_get_global_queue
function is reserved
and you should always pass the value 0 to it.
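As a quick sketch (the log messages are illustrative), you can retrieve global queues at different priorities and dispatch work to them synchronously. The priority affects how much CPU time a queue’s tasks receive relative to other work running in the system; it does not change the order of these two calls, since each dispatch_sync returns before the next begins:

/* The second parameter is reserved; always pass 0. */
dispatch_queue_t lowPriorityQueue =
    dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_LOW, 0);
dispatch_queue_t highPriorityQueue =
    dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_HIGH, 0);

dispatch_sync(highPriorityQueue, ^{
    NSLog(@"Runs on the high-priority global concurrent queue");
});
dispatch_sync(lowPriorityQueue, ^{
    NSLog(@"Runs on the low-priority global concurrent queue");
});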
In this section you saw how you can dispatch tasks to concurrent queues for synchronous execution. The next section shows asynchronous execution on concurrent queues, while Constructing Your Own Dispatch Queues will show how to execute tasks synchronously and asynchronously on serial queues that you create for your applications.