How it works...

Our deployed system is shown in the following diagram:

Note that the configuration is very similar to our local setup, except that we have now deployed our system to the AWS cloud. Kubernetes is running on three AWS machine instances: one master and two worker nodes. Our containers are distributed across these nodes; in fact, we need not care much about which instances they run on, as Kubernetes manages workload placement and distribution for us.

The kops tool is powerful. It allows us to manage multiple Kubernetes clusters and takes a lot of the grunt work out of doing so. The kops tool stores its configuration (known as the kops state store) in an S3 bucket so that it's centrally available.
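As a sketch of how the state store ties into cluster creation, the commands below show the general shape of a kops workflow; note that the bucket name, cluster name, and availability zone are illustrative placeholders, not values from this recipe:

```shell
# Create an S3 bucket to hold the kops state store
# (bucket name and region are illustrative placeholders).
aws s3 mb s3://example-kops-state --region us-east-1

# Point kops at the state store so every subsequent command uses it.
export KOPS_STATE_STORE=s3://example-kops-state

# Generate the cluster configuration: one master, two workers.
# A name ending in .k8s.local uses gossip-based discovery,
# avoiding the need for a hosted DNS zone.
kops create cluster \
  --name=example.cluster.k8s.local \
  --zones=us-east-1a \
  --node-count=2

# Apply the configuration, creating the AWS resources.
kops update cluster --name=example.cluster.k8s.local --yes
```

The state store in S3 is what makes the `export` step important: any machine with access to the bucket and the right credentials can manage the same cluster.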

Both kops and Kubernetes use the information in the state store to configure and update the cluster and underlying infrastructure. To edit the contents of the state store, we should always use the kops interface, rather than editing the files on S3 directly.

For example, to edit the cluster information we could run this command:

$ kops edit cluster 

The kops tool uses a set of rules to specify sensible default values for the Kubernetes cluster, hiding a lot of the detailed configuration.
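To see the full configuration that kops generated, including the defaults it filled in for us, we can dump the cluster spec; the cluster name here is a placeholder for your own cluster's name:

```shell
# Print the complete cluster spec as YAML, including defaulted
# values (replace the name with your own cluster's name).
kops get cluster example.cluster.k8s.local -o yaml
```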

While this is useful for spinning up a staging cluster, it is advisable to understand the configuration at a more detailed level when implementing production-grade clusters, particularly with regard to system security.

To view a list of all of the resources that kops created for us, run the delete command without the --yes flag:

$ kops delete cluster <cluster name> 

Omitting the --yes flag runs the command in preview mode. This provides a list of all the AWS resources. We can inspect these resources in the AWS control panel. For example, in the EC2 panel we can observe that three instances were started, one cluster master and two workers.

In our previous recipes, we used NodePort as the service type because minikube does not support the LoadBalancer service type. In our full Kubernetes cluster, however, the LoadBalancer type maps onto an AWS Elastic Load Balancer (ELB), which we can again inspect in the EC2 control panel.
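To illustrate the difference, a Service manifest of type LoadBalancer has the following shape; the service name, ports, and selector here are illustrative, not taken from this recipe's codebase. On AWS, Kubernetes provisions an ELB for such a service automatically; under minikube we would use `type: NodePort` instead:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: webapp          # illustrative service name
spec:
  type: LoadBalancer    # on AWS this provisions an ELB
  selector:
    app: webapp         # must match the target pods' labels
  ports:
    - port: 80          # load balancer listener port
      targetPort: 3000  # container port (illustrative)
```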

Throughout this chapter and Chapter 10, Building Microservice Systems, we have been working with the same codebase. It has been deployed in development mode using the fuge tool, to a local minikube cluster, and now to a full-blown Kubernetes cluster on AWS - without change. In other words, we have not needed to maintain separate environment configurations for development, test, staging, and so forth.

This is because the code was developed to use the same service discovery mechanism in all environments. We also harnessed the power of containers not only for deployment, but also in development to provide our MongoDB and Redis databases.
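One reason the code works unchanged is that every Kubernetes Service gets a stable DNS name, so application code can resolve its dependencies by name in any environment. A hedged sketch of verifying this from the command line follows; the service name `mongo` and the pod name are assumptions for illustration, not necessarily the names used in this codebase:

```shell
# List the services and their cluster-internal addresses.
kubectl get services

# Resolve a service's DNS name from inside a running pod
# ("some-pod" and "mongo" are illustrative names).
kubectl exec some-pod -- getent hosts mongo
```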

The kops CLI is just one tool that helps us to automate Kubernetes cluster deployment. Others include the following:

Finally, the long-form approach is to install Kubernetes from scratch: https://kubernetes.io/docs/getting-started-guides/scratch/.