While entire books are devoted to answering this question, even they can’t cover everything Kubernetes can do. For our purposes in this book, I will touch upon the information you need to have a working knowledge of Kubernetes—enough to continue our journey and deploy and operate our service. Why Kubernetes? Kubernetes is ubiquitous: it’s available on all major cloud platforms, and it’s as close to a standard as we have for deploying distributed services.
Kubernetes[53] is an open source orchestration system for automating deployment, scaling, and operating services running in containers. You tell Kubernetes what to do by using its REST API to create, update, and delete resources that Kubernetes knows how to handle. Kubernetes is a declarative system: you describe the end-goal state you want, and Kubernetes makes the changes needed to take your system from its current state to that end-goal state.
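To give a feel for the declarative model, here’s a minimal sketch of a resource description. The resource name is hypothetical; the point is that you describe *what* you want to exist, not the steps to create it, and Kubernetes reconciles the cluster toward that description:

```yaml
# A hypothetical Namespace resource. Submitting this to the API
# declares "a namespace called example-system should exist";
# Kubernetes creates it if it's missing and leaves it alone if not.
apiVersion: v1
kind: Namespace
metadata:
  name: example-system
```

Applying the same description twice is harmless, because Kubernetes compares the declared state with the current state and only acts on the difference.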
The Kubernetes resources that people most commonly work with are pods, the smallest deployable units in Kubernetes. Think of containers as processes and pods as hosts—all containers running in a pod share the same network namespace, the same IP address, and the same interprocess communication (IPC) namespace, and they can share the same volumes. These are logical hosts because a physical host (what Kubernetes calls a node) may run multiple pods. The other resources you’ll work with either configure pods (ConfigMaps, Secrets) or manage a set of pods (Deployments, StatefulSets, DaemonSets). You can extend Kubernetes by creating your own custom resources and controllers.
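The pods-as-hosts idea is easiest to see in a manifest. This is an illustrative sketch—the pod name and images are made up—showing two containers in one pod; because they share the pod’s network namespace, the second container can reach the first at localhost:

```yaml
# A hypothetical pod running two containers. Both share the pod's
# IP address and network namespace, so "sidecar" can reach "app"
# at localhost:8080 without any service discovery.
apiVersion: v1
kind: Pod
metadata:
  name: example-pod
spec:
  containers:
    - name: app
      image: example/app:1.0    # hypothetical image
      ports:
        - containerPort: 8080
    - name: sidecar
      image: example/sidecar:1.0  # hypothetical image
```

In practice you rarely create bare pods like this; you let a resource such as a Deployment manage them for you, as described next.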
Controllers are control loops that watch the state of your resources and make changes where needed. Kubernetes itself is made up of many controllers. For example, the Deployment controller watches your Deployment resources; if you increase the replicas on a Deployment, the controller will schedule more pods.
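As a concrete sketch of that control loop (the names and image here are hypothetical), a Deployment declares how many replicas of a pod template should run, and the Deployment controller keeps reality in line with that number:

```yaml
# A hypothetical Deployment. Change replicas from 3 to 5 and
# reapply: the Deployment controller notices the difference
# between declared and actual state and schedules two more pods.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: example
  template:
    metadata:
      labels:
        app: example
    spec:
      containers:
        - name: app
          image: example/app:1.0  # hypothetical image
```

The same reconciliation happens if a pod dies: the controller sees fewer pods than declared and schedules a replacement, with no action on your part.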
To interact with Kubernetes, you’ll need its command-line tool, kubectl, which we’ll look at next.