In anticipation of the Links 'R' Us project becoming an overnight success and attracting a lot of traffic, especially after a link to it gets posted on sites such as Hacker News and Slashdot, we need to come up with a reasonable plan for scaling. Even though we are currently dealing with a monolithic application, we can always scale horizontally by spinning up additional instances. Moreover, as the size of our link graph grows, we will undoubtedly need additional compute resources for both our web crawlers and our PageRank calculator.
One of the key benefits of using a container orchestration platform such as Kubernetes is that we can effortlessly scale any deployed application up (or down). As we saw at the beginning of this chapter, a Service resource connected to an Ingress can act as a load balancer and distribute incoming traffic across the pods backing our application. This transparently takes care of our frontend scaling needs with no additional development effort on our end.
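For example, assuming the monolith is deployed as a Deployment called linksrus-monolith (the name here is purely illustrative), bumping the replica count is a one-line operation, and the Service automatically spreads traffic across whichever pods are currently healthy:

```shell
# Scale the (hypothetical) monolith Deployment to five replicas.
kubectl scale deployment/linksrus-monolith --replicas=5
```

Scaling back down is the same command with a smaller replica count.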
On the other hand, making sure that each application instance crawls a specific subset of the graph isn't as straightforward, as it requires the application instances to coordinate with each other. This implies that we need to establish a communication channel between the individual instances. Or does it?
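Before answering that question, it helps to make the partitioning problem concrete. The following is a minimal sketch (the github.com/google/uuid dependency and the belongsTo helper are illustrative assumptions, not part of the Links 'R' Us code base) of how an instance could deterministically decide which links it owns, provided it knows its own index and the total number of workers:

```go
package main

import (
	"encoding/binary"
	"fmt"

	"github.com/google/uuid"
)

// belongsTo reports whether the link identified by id should be crawled by
// the worker with the given index, out of numWorkers instances in total.
// The first 8 bytes of the UUID are mapped onto a partition via modulo, so
// every instance reaches the same assignment without exchanging messages.
func belongsTo(id uuid.UUID, index, numWorkers int) bool {
	key := binary.BigEndian.Uint64(id[:8])
	return int(key%uint64(numWorkers)) == index
}

func main() {
	id := uuid.New()
	for worker := 0; worker < 4; worker++ {
		fmt.Printf("worker %d owns link %s: %v\n", worker, id, belongsTo(id, worker, 4))
	}
}
```

Note that the assignment itself involves no message passing at all; the only pieces of information each instance is missing are its own index and the total worker count, and obtaining those two values is the real coordination question we need to answer.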