What You Learned

Now when our servers discover other servers, they replicate each other's data. But there's a problem with our replication implementation: when two servers discover each other, they replicate each other's data in a cycle! You can verify it by adding this code at the bottom of your test:

 // Consuming past the single record we produced should fail with an
 // out-of-range error.
 consumeResponse, err = leaderClient.Consume(
     context.Background(),
     &api.ConsumeRequest{
         Offset: produceResponse.Offset + 1,
     },
 )
 require.Nil(t, consumeResponse)
 require.Error(t, err)
 got := grpc.Code(err)
 want := grpc.Code(api.ErrOffsetOutOfRange{}.GRPCStatus().Err())
 require.Equal(t, got, want)

We only produced one record to our service, and yet we're able to consume multiple records from the original server, because it replicated data from another server that had in turn replicated that data from the original server. So the Consume call at the next offset succeeds instead of returning an out-of-range error, and the new assertions fail. No, Leo, we do not need to go deeper.
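To see the cycle in isolation, here's a toy sketch of leaderless, mutual replication. It isn't our project's Replicator; the toyLog type, its produce method, and the replicate helper are made up for illustration, with two slices standing in for the servers' logs and each round copying records in both directions, the way our discovery-triggered replication does:

 package main

 import "fmt"

 // toyLog is a stand-in for a server's log: just a slice of records.
 type toyLog struct {
     records []string
 }

 func (l *toyLog) produce(rec string) { l.records = append(l.records, rec) }

 // replicate copies any records from src that dst hasn't copied yet,
 // tracking progress with an offset, roughly mirroring a
 // consume-then-produce replication loop.
 func replicate(src, dst *toyLog, offset *int) {
     for *offset < len(src.records) {
         dst.produce(src.records[*offset])
         *offset++
     }
 }

 func main() {
     a, b := &toyLog{}, &toyLog{}
     var aFromB, bFromA int // each server's read offset into the other's log

     a.produce("hello") // the single record we produce to server A

     // Each round, B replicates A and then A replicates B, like the
     // replicators both servers start after discovering each other.
     for round := 1; round <= 4; round++ {
         replicate(a, b, &bFromA)
         replicate(b, a, &aFromB)
         fmt.Printf("round %d: a=%d records, b=%d records\n",
             round, len(a.records), len(b.records))
     }
     // Both logs grow every round even though we produced only once:
     // the record keeps bouncing between them because neither server
     // knows which of them is the source of truth.
 }

Break the symmetry, so only one of the two replicate calls runs, and the record is copied exactly once.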

I mentioned that in the next chapter we'll work on coordinating the servers to give them a defined leader-follower relationship, so that only the followers replicate from the leader. We also want to control the number of replicas. Typically in a production deployment, three replicas is ideal: you could lose two and still not lose data, and with only three you won't be storing more data than necessary.

So let’s work on building consensus with Raft and coordinating the nodes in our cluster.
