Inspecting pods in Pending status

When you deploy applications on Kubernetes, it is inevitable that sooner or later you will need to get more information about them. In this recipe, we will learn how to inspect the common problem of pods stuck in the Pending status:

  1. In the /src/chapter8 folder, inspect the content of the mongo-sc.yaml file and deploy it by running the following commands. The manifest includes a MongoDB StatefulSet with three replicas and a Service. The pods will get stuck in the Pending state due to a mistake in one of the parameters, and we will inspect them to find the source of the problem:
$ cat debug/mongo-sc.yaml
$ kubectl apply -f debug/mongo-sc.yaml
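The full content of mongo-sc.yaml is not reproduced here; the part that matters for this recipe is the volumeClaimTemplates section of the StatefulSet. A minimal sketch of what it likely looks like is shown below, assuming the claim template is named mongo-pvc (consistent with the PVC names seen in step 4) and requests 10Gi of storage (the size is an assumption):
volumeClaimTemplates:
- metadata:
    name: mongo-pvc                  # produces PVCs named mongo-pvc-mongo-0, -1, -2
  spec:
    accessModes: ["ReadWriteOnce"]
    storageClassName: storageclass   # this class does not exist in the cluster, so the PVCs stay Pending
    resources:
      requests:
        storage: 10Gi                # assumed size; not shown in the recipe output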
  2. List the pods by running the following command. You will notice that the status of the mongo-0 pod is Pending:
$ kubectl get pods
NAME      READY   STATUS    RESTARTS   AGE
mongo-0   0/2     Pending   0          3m
  3. Get additional information on the pod using the kubectl describe pod command and look at the Events section. In this case, the Warning points to an unbound PersistentVolumeClaim:
$ kubectl describe pod mongo-0
...
Events:
Type     Reason            Age                   From               Message
----     ------            ----                  ----               -------
Warning  FailedScheduling  2m34s (x34 over 48m)  default-scheduler  pod has unbound immediate PersistentVolumeClaims (repeated 3 times)
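If you prefer to query the events directly rather than reading them from the kubectl describe output, a roughly equivalent command (the pod name mongo-0 comes from step 2; --field-selector and --sort-by are standard kubectl get options) is the following:
$ kubectl get events --field-selector involvedObject.name=mongo-0 --sort-by='.lastTimestamp'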

  4. Now that we know from the previous step that we need to look at the PVC status, let's list the PVCs to inspect the issue. You will see that the PVCs are also stuck in the Pending state:
$ kubectl get pvc
NAME                STATUS    VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
mongo-pvc-mongo-0   Pending                                      storageclass   53m
  5. Get additional information on the PVC using the kubectl describe pvc command and look at the Events section. In this case, the Warning points to a missing storage class named storageclass:
$ kubectl describe pvc mongo-pvc-mongo-0
...
Events:
Type     Reason              Age                 From                         Message
----     ------              ----                ----                         -------
Warning  ProvisioningFailed  70s (x33 over 58m)  persistentvolume-controller  storageclass.storage.k8s.io "storageclass" not found
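As a quick cross-check, you can print only the storage class requested by the PVC using standard JSONPath output; it should return the same class name that the provisioning warning complains about:
$ kubectl get pvc mongo-pvc-mongo-0 -o jsonpath='{.spec.storageClassName}'
storageclass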

  6. List the storage classes. You will notice that there is no storage class named storageclass:
$ kubectl get sc
NAME                              PROVISIONER                                                 AGE
default                           kubernetes.io/aws-ebs                                       16d
gp2                               kubernetes.io/aws-ebs                                       16d
openebs-cstor-default (default)   openebs.io/provisioner-iscsi                                8d
openebs-device                    openebs.io/local                                            15d
openebs-hostpath                  openebs.io/local                                            15d
openebs-jiva-default              openebs.io/provisioner-iscsi                                15d
openebs-snapshot-promoter         volumesnapshot.external-storage.k8s.io/snapshot-promoter   15d
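You can confirm the missing class explicitly; requesting it by name should return a NotFound error similar to the message seen in the PVC events:
$ kubectl get sc storageclass
Error from server (NotFound): storageclasses.storage.k8s.io "storageclass" not found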
  7. Now we know that the manifest file we applied in step 1 requested a storage class that does not exist. To fix the issue, you can either create the missing storage class or edit the manifest to use an existing storage class.
    Let's create the missing storage class based on an existing default storage class such as gp2; a sketch of a possible sc-gp2.yaml follows the command below:
$ kubectl create -f sc-gp2.yaml
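The content of sc-gp2.yaml is not shown in the recipe. A minimal sketch of what it might contain, assuming an AWS EBS-backed cluster like the one suggested by the existing gp2 class, is a StorageClass whose name matches the class requested by the PVCs:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: storageclass               # must match the storageClassName requested by the PVCs
provisioner: kubernetes.io/aws-ebs # same provisioner as the existing gp2 class
parameters:
  type: gp2                        # EBS volume type; assumed to mirror the gp2 class
Once the class exists, the persistentvolume-controller can provision volumes for the pending PVCs and the scheduler can place the pods.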

  8. List the pods by running the following command. You will notice that the status is now Running for all the pods that were Pending in step 2:
$ kubectl get pods
NAME      READY   STATUS    RESTARTS   AGE
mongo-0   2/2     Running   0          2m18s
mongo-1   2/2     Running   0          88s
mongo-2   2/2     Running   0          50s
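You can also verify that the previously pending PVCs are now bound to volumes; the STATUS column of the following command should show Bound for all three claims (the volume names and capacity depend on your cluster):
$ kubectl get pvc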

You have successfully learned how to inspect why a pod is stuck in the Pending state and how to fix the underlying issue.