Ungracefully terminating pods

To test that the high-availability mode does what it should, I need to kill the instances during testing, and I need to kill them in such a way that they cannot report that they are disconnecting (or anything similar).

For practical reasons, I need to do the testing on a smaller environment that does not have a separate node for each pod, so I cannot test by turning off the machines; I need to kill the processes instead. I can kill them with docker kill, but that requires logging into the node and finding the Docker ID of the container. Is there some way to achieve a similar effect more directly? Using kubectl exec with kill does not work, because sending SIGKILL to process ID 1 is not allowed.
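For reference, the manual route I want to avoid looks roughly like this. This is only a sketch: SSH access to the node and the pod name my-app-0 are assumptions, and it relies on the kubelet embedding the pod name in the Docker container name.

```shell
# Placeholder node and pod names; docker kill sends SIGKILL by default.
ssh node-1 'docker ps --filter "name=my-app-0" --format "{{.ID}}" | head -1 | xargs docker kill'
```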

I also see delete being used for this, but in my case there is a difference: when the pod is deleted, the container is recreated with a clean state, while merely restarting it keeps the old state. The deployment I am testing actually inspects that state during start-up and has problems starting, so I need to test the case where the pod is not deleted.

Can I forcibly terminate pods, without giving them any chance to clean up, via the Kubernetes API or kubectl?


You can try the following command:

kubectl delete pods <pod> --grace-period=0 --force

The --grace-period=<seconds> flag sets how long Kubernetes waits for the graceful shutdown of a pod. If it is 0, SIGKILL is sent immediately to the processes in the pod. The --force flag must also be specified for this kind of operation in Kubernetes 1.5 and later.
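The effect of skipping the grace period can be illustrated outside Kubernetes: a process can trap SIGTERM and run cleanup code, but SIGKILL bypasses any handler, which is exactly the "no chance to report it is disconnecting" behaviour you want. A minimal sketch with plain local shell processes (not containers):

```shell
# SIGTERM can be trapped for cleanup; SIGKILL cannot. Same distinction
# as graceful vs. forced pod termination, shown with local processes.
run_and_kill() {
    # Start a process with a TERM trap, then send it the given signal.
    sh -c 'trap "echo cleaned up; exit 0" TERM; sleep 60 >/dev/null & wait' &
    pid=$!
    sleep 0.2            # give the trap time to be installed
    kill -s "$1" "$pid"
    wait "$pid" 2>/dev/null
}

term_out=$(run_and_kill TERM)   # trap runs: prints "cleaned up"
kill_out=$(run_and_kill KILL)   # no trap runs: prints nothing
echo "TERM: $term_out"
echo "KILL: $kill_out"
```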

For more information, see the official documentation.