It is possible for Pods and the applications they’re running to crash or fail. Kubernetes can attempt to self-heal a situation like this by starting a new Pod to replace the failed one.
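The desired state comes from the Deployment object, which declares how many Pod replicas should exist. A minimal sketch of such a manifest is shown below; the qsk-deploy name matches the Pod names used in this chapter, but the labels, container name, and image are illustrative assumptions, not the exact manifest from earlier:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: qsk-deploy
spec:
  replicas: 5              # desired state: always keep 5 Pods running
  selector:
    matchLabels:
      app: qsk             # hypothetical label for illustration
  template:
    metadata:
      labels:
        app: qsk
    spec:
      containers:
      - name: qsk-ctr               # hypothetical container name
        image: example/qsk:1.0      # hypothetical image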
Use kubectl delete pod to manually delete one of the Pods (refer to the previous kubectl get pods output for a list of Pod names).
$ kubectl delete pod qsk-deploy-69996c4549-r59nl
pod "qsk-deploy-69996c4549-r59nl" deleted
As soon as the Pod is deleted, the number of Pods on the cluster will drop to 4 and no longer match the desired state of 5. The Deployment controller will notice this and automatically start a new Pod to take the observed number of Pods back to 5.
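If you want to watch the reconciliation happen in real time, you can stream Pod changes from a second terminal before deleting the Pod. The --watch flag keeps the command running and prints a new line every time a Pod's status changes:

$ kubectl get pods --watch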
List the Pods again to see if a new Pod has been started.
$ kubectl get pods
NAME                          READY   STATUS    RESTARTS   AGE
qsk-deploy-69996c4549-mwl7f   1/1     Running   0          20m
qsk-deploy-69996c4549-9xwv8   1/1     Running   0          20m
qsk-deploy-69996c4549-ksg8t   1/1     Running   0          20m
qsk-deploy-69996c4549-qmxp7   1/1     Running   0          20m
qsk-deploy-69996c4549-hd5pn   1/1     Running   0          5s
Congratulations! There are 5 Pods running, and Kubernetes performed the self-healing without needing help from you.
Notice how the last Pod in the list has only been running for 5 seconds. This is the replacement Pod that Kubernetes started to reconcile the desired state.
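If you want evidence of what happened behind the scenes, the ReplicaSet that actually created the replacement Pod (its name is the qsk-deploy-69996c4549 prefix shared by all the Pod names) records the activity in its Events section. Describing it should show a SuccessfulCreate event for the new Pod:

$ kubectl describe rs qsk-deploy-69996c4549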