Kickstarting Kubernetes

According to the Kubernetes website: 

Kubernetes is an open-source system for automating deployment, scaling, and management of containerized applications.

It is a simple yet powerful tool for automatically deploying, scaling, and managing containerized applications. It provides zero downtime when you roll out a new application or update an existing one. You can configure it to scale in and out based on certain factors. It also provides self-healing: Kubernetes automatically detects a failing application and spins up a new instance. We can also define secrets and configuration that can be shared across instances.

Kubernetes primarily focuses on zero-downtime upgrades of production applications, and on scaling them as required.

A single deployable component is called a pod in Kubernetes. This can be as simple as a process running in a container. A group of pods can be combined together to form a deployment.
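As a sketch, a minimal pod definition written as a Kubernetes YAML manifest (the pod name and image tag here are illustrative) can be as small as:

```yaml
# A minimal Pod running a single Nginx container
apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod        # illustrative name
spec:
  containers:
  - name: nginx
    image: nginx:1.25    # illustrative image tag
```

In practice, pods are rarely created directly like this; a deployment manages a set of such pods for you.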

Similar to docker-compose, we can define the applications and their required services in a single YAML file or in multiple files, whichever is convenient.

As with docker-compose, a Kubernetes deployment file starts with an apiVersion.

The following code is a sample Kubernetes file that defines a Service for an Nginx server:

apiVersion: v1
kind: Service
metadata:
  name: nginxsvc
  labels:
    app: nginx
spec:
  type: NodePort
  ports:
  - port: 80
    protocol: TCP
    name: http
  - port: 443
    protocol: TCP
    name: https
  selector:
    app: nginx

The apiVersion is followed by the kind, which can be a Pod, Service, Deployment, Namespace, Ingress (for load balancing the pods), Role, and many more.
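For example, a Deployment that keeps two replicas of the same Nginx image running might look like the following (the deployment name, replica count, and image tag are illustrative):

```yaml
# An illustrative Deployment that keeps two Nginx pods running
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 2              # Kubernetes keeps this many pods alive
  selector:
    matchLabels:
      app: nginx           # must match the pod template's labels
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.25
        ports:
        - containerPort: 80
```

Note that the `selector` in the Service shown earlier matches pods by the same `app: nginx` label, which is how the Service finds the pods this Deployment creates.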

Ingress forms a layer between the services and the internet, so that all inbound connections are controlled or configured by the ingress controller before being sent to the Kubernetes services in the cluster. Conversely, egress rules control or configure traffic going out of the Kubernetes cluster.
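To illustrate, an Ingress that routes HTTP traffic to the nginxsvc Service defined earlier might look like this (the Ingress name and hostname are placeholders):

```yaml
# Illustrative Ingress routing HTTP traffic to the nginxsvc Service
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx-ingress      # placeholder name
spec:
  rules:
  - host: example.local    # placeholder hostname
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: nginxsvc # the Service defined earlier
            port:
              number: 80
```

An ingress controller (such as the NGINX ingress controller) must be running in the cluster for this resource to take effect.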

This is followed by the metadata information, such as the application name (nginxsvc) and labels. Kubernetes uses this metadata to identify particular pods or groups of pods, and we can manage instances through it. This is one of the key differences from docker-compose, which doesn't have the same flexibility for defining metadata about containers.

This is followed by the spec, where we define the specification of our images or application. We can define the pull strategy for our images, environment variables, and the exposed ports. We can also set resource limits on the machine (or VM) for a particular service. Kubernetes provides health checks: each service is monitored, and when a service fails, it is immediately replaced by a new one. It provides service discovery out of the box by assigning each pod an IP address, which makes it easier for services to identify and interact with each other. It also provides a dashboard to visualize your architecture and the status of the application. You can do most of the management via this dashboard, such as checking the status, viewing logs, and scaling services up or down.
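The container spec options mentioned above can be sketched as follows; the environment variable, resource values, and probe paths are illustrative, not prescriptive:

```yaml
# Illustrative container spec showing pull strategy, env vars,
# resource limits, and health checks
spec:
  containers:
  - name: nginx
    image: nginx:1.25
    imagePullPolicy: IfNotPresent  # pull strategy for the image
    ports:
    - containerPort: 80
    env:
    - name: NGINX_PORT             # example environment variable
      value: "80"
    resources:
      limits:                      # resource limits for this container
        cpu: "500m"
        memory: "256Mi"
    livenessProbe:                 # restart the container if this fails
      httpGet:
        path: /
        port: 80
    readinessProbe:                # route traffic only once this succeeds
      httpGet:
        path: /
        port: 80
```

The liveness probe drives the self-healing behavior described earlier, while the readiness probe tells the Service when a pod can receive traffic.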

Since Kubernetes provides complete orchestration of our services with many configurable options, it is quite hard to set up initially, which means it is not ideal for a development environment. We also need the kubectl CLI tool for management. Even though Docker images are used inside, the Docker CLI can't be used to manage the cluster.

There is also Minikube (a minimal, single-node Kubernetes), which is used for developing and testing applications locally.

Kubernetes not only runs your containerized application, it also helps to scale, manage, and deploy it. It orchestrates your entire application deployment, and additionally provides service discovery and automated health checks.

We will focus more on the Kubernetes sub-generator in the following chapter.