So that’s nice. We see that as soon as the deployment notices our pod died, it starts a new one. That goes back to the reconciliation loop we talked about; Kubernetes is always trying to ensure our desired state matches our current state. If it doesn’t, like when the actual number of pods we had running went from 1 to 0 because we killed it, it will take the necessary actions to return to a stable state. In this case, that means starting a new pod.
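To see that loop in action yourself, one option is to keep a watch on the pod list in one terminal while deleting a pod from another. A minimal sketch, assuming a Deployment is managing the pod (the pod name is an illustrative placeholder):

```bash
# Terminal 1: watch the pod list; --watch keeps streaming changes.
kubectl get pods --watch

# Terminal 2: delete the Deployment-managed pod (name is a placeholder;
# Deployment pods get generated names like nginx-<hash>-<hash>).
kubectl delete pod <pod-name>

# Back in terminal 1 you should see the old pod terminate and a
# replacement pod get created almost immediately.
```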
Lastly, we can kill our pod either because we don’t need it anymore or to simulate a failure. We can do that in two different ways.
First, killing it by name:
```bash
kubectl delete pod nginx
# pod "nginx" deleted
```
Kubernetes command to delete the `nginx` pod
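Since one of our stated goals is to simulate a failure, it's worth knowing that kubectl can also skip the graceful termination period. A quick sketch (use it sparingly, since it doesn't wait for the container to shut down cleanly):

```bash
# Delete the pod immediately, without waiting for graceful termination.
# --grace-period=0 must be combined with --force.
kubectl delete pod nginx --grace-period=0 --force
```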
Or by passing the same manifest file we used to create the pod:
```bash
kubectl delete -f nginx.yaml
# pod "nginx" deleted
```
Another way to delete the `nginx` pod
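For reference, `nginx.yaml` here is the same manifest we used earlier to create the pod. A minimal version would look roughly like this (the image tag and port are assumptions; use whatever the original manifest declared):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx             # kubectl delete -f matches the resource by kind and name
spec:
  containers:
    - name: nginx
      image: nginx:1.25   # assumed tag; any nginx image works for this example
      ports:
        - containerPort: 80
```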
Now, if you try to list your pods, `nginx` should be gone:
```bash
kubectl get pods
# No resources found.
```
Getting all the pods
Although that's what we expect from this example, it also exposes a problem. If our application crashes for some reason (and it will crash, eventually), its pod is not automatically rescheduled.
For that reason, we will not usually create pods directly, like we did here. Instead, we will use a higher-level object called a Deployment to create and manage our pods, as we will see in the next chapter.
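As a small preview of the next chapter, a Deployment wrapping the same container could look roughly like this (a sketch only; the replica count, labels, and image tag are assumptions):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 1                 # desired pod count; the reconciliation loop keeps it there
  selector:
    matchLabels:
      app: nginx              # must match the labels in the pod template below
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:1.25   # assumed tag, as in the pod example
```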