How to bring K3s server & pods up again after k3s-killall.sh

I have a K3s cluster with system pods (in the kube-system namespace) and my application pods (in the xyz-system namespace) running.

I wanted to stop all of the K3s pods and reset the containerd state, so I ran the /usr/local/bin/k3s-killall.sh script, and all pods were stopped (at least nothing shows up in watch kubectl get all -A anymore). K3s and kubectl are still installed, as I can still see the k3s -v output.
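For reference, these are the commands involved (the script path is the default for a standard K3s install):

    # stop all pods and reset the containerd state
    sudo /usr/local/bin/k3s-killall.sh

    # nothing is listed anymore
    watch kubectl get all -A

    # the binary is still installed
    k3s -v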

Can someone tell me how to start the K3s server again? Now, whenever I run kubectl get all -A, I get the message The connection to the server 127.0.0.1:6443 was refused - did you specify the right host or port?

PS: When I run the k3s server command, for a fraction of a second I can see the same pods (with the same pod IDs) that I mentioned above, while the command is running. After a few seconds the command exits, and the same The connection to the... message is displayed again. Does this mean that k3s-killall.sh has not deleted my pods, since it shows the same pods with the same IDs (like pod/some-app-xxx)?
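For clarity, this is what I am doing (running the server in the foreground in one terminal and checking pods from another):

    # terminal 1: start the server in the foreground
    sudo k3s server

    # terminal 2: for a few seconds the old pods (same IDs) show up,
    # then the server exits and the connection is refused again
    kubectl get all -A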


I have tested this on K3s 1.21. The killall script cleans up containers, K3s directories, and networking components, and also removes the iptables chains with all the associated rules. The cluster data is not deleted.
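A quick way to verify this on the node (paths assume a default install; the SQLite datastore location under /var/lib/rancher/k3s/server/db is my assumption for a single-node setup):

    # cluster data survives the killall script
    sudo ls /var/lib/rancher/k3s/server/db

    # networking has been torn down: CNI bridge and kube iptables chains are gone
    ip link show cni0 2>/dev/null || echo "cni0 removed"
    sudo iptables-save | grep KUBE- || echo "KUBE- chains removed"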

Pod IDs will remain the same. You can also check this SO question on how to restart K3s after the killall script.
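One way to confirm that, as a sketch assuming kubectl access before and after the restart: record the pod names and UIDs, restart, and diff.

    # before k3s-killall.sh: record pod names and UIDs
    kubectl get pods -A -o custom-columns=NAME:.metadata.name,UID:.metadata.uid > pods-before.txt

    # after restarting K3s: capture again and compare
    kubectl get pods -A -o custom-columns=NAME:.metadata.name,UID:.metadata.uid > pods-after.txt
    diff pods-before.txt pods-after.txt && echo "same pods, same IDs"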

Summarizing:

  1. Run sudo systemctl restart k3s.
  2. Run k3s -v, then check the pods' restart counts.
  3. Review the pod status (see the snippet below).
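Put together, a minimal restart-and-verify sequence (assuming K3s was installed with the standard install script, which creates the k3s systemd unit):

    # 1. restart the server via systemd
    sudo systemctl restart k3s

    # 2. confirm the binary responds and check the RESTARTS column
    k3s -v
    kubectl get pods -A

    # 3. review overall pod status
    kubectl get all -A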