Write starvation

When designing an application, this kind of mutex is not always the obvious choice: in a scenario with many read locks and only a few write ones, the mutex will keep accepting incoming read locks after the first one, making the write operation wait for a moment when no read locks are active. This phenomenon is referred to as write starvation.

To check this out, we can define a type that has both a write and a read operation, each of which takes some time, as shown in the following code:

type counter struct {
    m     sync.RWMutex
    value int
}

// Write takes the write lock, simulates a slow update, and stores the value.
func (c *counter) Write(i int) {
    c.m.Lock()
    time.Sleep(time.Millisecond * 100)
    c.value = i
    c.m.Unlock()
}

// Value takes the read lock, simulates a slow read, and returns the value.
func (c *counter) Value() int {
    c.m.RLock()
    time.Sleep(time.Millisecond * 100)
    a := c.value
    c.m.RUnlock()
    return a
}

We can try to execute both write and read operations at the same cadence in separate goroutines, using a tick interval that is shorter than the execution time of the methods (50 ms versus 100 ms). We will also measure how much time each call takes, which includes the time spent waiting for the lock:

var c counter
t1 := time.NewTicker(time.Millisecond * 50)
time.AfterFunc(time.Second*2, t1.Stop) // stop the ticker after 2 seconds
for {
    select {
    case <-t1.C:
        // on every tick, launch one reader and one writer,
        // each measuring how long the whole call takes
        go func() {
            t := time.Now()
            c.Value()
            fmt.Println("val", time.Since(t))
        }()
        go func() {
            t := time.Now()
            c.Write(0)
            fmt.Println("inc", time.Since(t))
        }()
    case <-time.After(time.Millisecond * 200):
        // no tick arrived for 200 ms: the ticker has stopped, so we exit
        return
    }
}
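
The preceding listings are fragments of a single program. One way to run them is sketched below, with the counter type and its methods from the first listing placed in the same file; the package clause, the import list, and the main wrapper are scaffolding added here, not part of the original example:

package main

import (
    "fmt"
    "sync"
    "time"
)

// The counter type and its Write/Value methods shown earlier go here,
// so that the sync and time imports are used.

func main() {
    var c counter
    t1 := time.NewTicker(time.Millisecond * 50)
    time.AfterFunc(time.Second*2, t1.Stop)
    for {
        select {
        case <-t1.C:
            go func() {
                t := time.Now()
                c.Value()
                fmt.Println("val", time.Since(t))
            }()
            go func() {
                t := time.Now()
                c.Write(0)
                fmt.Println("inc", time.Since(t))
            }()
        case <-time.After(time.Millisecond * 200):
            return
        }
    }
}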

If we execute the application, we see that for each write operation more than one read is executed, and that each subsequent write spends more time than the previous one waiting for the lock. This is not true for the read operations, which can run concurrently: as soon as one reader manages to lock the resource, all the other waiting readers acquire it as well. Replacing RWMutex with Mutex gives both operations the same priority, as in the previous example.
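
As a comparison, here is a minimal sketch of that change; counterMutex is a name chosen here for illustration, it is meant as a drop-in addition to the same file as the earlier listings (sharing their imports), and only the lock calls differ from the counter type above:

// counterMutex is a variant of counter that uses a plain sync.Mutex,
// so readers and writers queue for the same exclusive lock.
type counterMutex struct {
    m     sync.Mutex
    value int
}

func (c *counterMutex) Write(i int) {
    c.m.Lock()
    time.Sleep(time.Millisecond * 100)
    c.value = i
    c.m.Unlock()
}

func (c *counterMutex) Value() int {
    c.m.Lock() // a plain Mutex has no RLock; reads also take the exclusive lock
    time.Sleep(time.Millisecond * 100)
    a := c.value
    c.m.Unlock()
    return a
}

Because a plain sync.Mutex offers no shared read mode, every Value call now queues for the same exclusive lock as Write, which is why both operations end up with the same priority.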