Mutual exclusion

To avoid race conditions when two or more threads access the same resource, mutual exclusion serializes access to shared resources. When one thread is using a resource, no other thread is allowed to access it; all other threads are blocked until the resource is free again.

A mutex is essentially a lock associated with a shared resource. To read or modify the shared resource, a thread must first acquire the lock for that resource. Once a thread acquires the lock (or mutex), it can go ahead and work on the resource; any other thread that wishes to work on it has to wait until the resource is unlocked. When the thread finishes its processing on the shared resource, it unlocks the mutex, allowing one of the waiting threads to acquire it. Aside from mutexes, semaphores are also used in process synchronization.
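
As a minimal sketch of how a mutex is used in practice (assuming Python's threading module, since the surrounding text names no particular language), a threading.Lock can protect a shared counter from concurrent updates:

```python
import threading

# A shared resource protected by a mutex (threading.Lock).
counter = 0
counter_lock = threading.Lock()

def increment(times):
    global counter
    for _ in range(times):
        # Acquire the lock before touching the shared resource;
        # it is released automatically when the with block ends.
        with counter_lock:
            counter += 1

threads = [threading.Thread(target=increment, args=(100_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # 400000 -- without the lock, some updates could be lost
```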

A semaphore is a mechanism used to prevent two or more processes from accessing a common resource concurrently. It is essentially a variable that is manipulated to control access to the common resource and implement process synchronization. A semaphore uses a signaling mechanism: it invokes the wait and signal operations to indicate that the common resource has been acquired or released, respectively. A mutex, on the other hand, uses a locking mechanism; a process has to acquire the lock on the mutex object before working on the common resource.
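
A comparable sketch with a binary semaphore, again assuming Python's threading module: the acquire and release calls of threading.Semaphore play the role of the wait and signal operations described above.

```python
import threading
import time

# A binary semaphore (initial value 1): acquire() acts as wait(),
# release() acts as signal().
sem = threading.Semaphore(1)

def use_resource(name):
    sem.acquire()          # wait: block until the resource is available
    try:
        print(f"{name} is using the resource")
        time.sleep(0.1)    # simulate some work on the common resource
    finally:
        sem.release()      # signal: the resource is free again

threads = [threading.Thread(target=use_resource, args=(f"thread-{i}",)) for i in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```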

Although mutexes help to manage shared resources among threads, there is a pitfall: acquiring mutexes in the wrong order can lead to a deadlock. A deadlock occurs when a thread holding lock X tries to acquire lock Y to complete its processing, while another thread holding lock Y tries to acquire lock X to finish its execution. Both threads wait indefinitely for the other to release its lock; since neither thread can finish its execution, neither can release the lock it holds. One solution to avoid a deadlock is to make all threads acquire locks in the same, fixed order.
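
The lock-ordering rule can be sketched as follows (the worker function and lock names are illustrative, not from the original text): because every thread takes lock X before lock Y, the circular wait described above cannot arise.

```python
import threading

lock_x = threading.Lock()
lock_y = threading.Lock()

def worker(name):
    # Both threads acquire the locks in the same order (X before Y),
    # so neither can end up holding one lock while waiting on the other.
    with lock_x:
        with lock_y:
            print(f"{name} acquired both locks")

t1 = threading.Thread(target=worker, args=("thread-1",))
t2 = threading.Thread(target=worker, args=("thread-2",))
t1.start(); t2.start()
t1.join(); t2.join()
```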


Depending on the operating system, a lock may be a spinlock. If a thread tries to acquire a lock that is not free, a spinlock makes the thread wait in a loop until the lock becomes free. Such locks keep the thread busy while it waits for the lock to be released. They are efficient when the wait is short, as they avoid the time and resources spent on process rescheduling or context switching.
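
As a toy illustration of the spinning behaviour (Python's standard library does not provide a spinlock class, and a pure-Python spinlock would rarely pay off in practice, so treat this purely as a sketch of the idea):

```python
import threading

class SpinLock:
    """A minimal spinlock sketch: the waiting thread busy-waits in a loop
    instead of being put to sleep by the scheduler."""

    def __init__(self):
        self._flag = threading.Lock()  # used here only as an atomic flag

    def acquire(self):
        # Spin until the flag can be taken without blocking.
        while not self._flag.acquire(blocking=False):
            pass  # keep looping; no rescheduling is requested

    def release(self):
        self._flag.release()
```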

That is enough theory. Now, let's start with some practical examples!