EKS ARM Node stuck in NotReady status - runtime network not ready cni config uninitialized

Solution 1:

OK, as @thomas suggested, the issue was related to the EKS add-ons.

For context, and as I said in my comment, the cluster was initially created at version 1.14 and was later upgraded to 1.16.

However, the aws-node, kube-proxy, and coredns add-ons were never upgraded. I followed the instructions here, but the issue remained.
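For anyone hitting the same thing, the add-on update essentially comes down to pointing the kube-system workloads at newer images. A minimal sketch, assuming a 1.16 cluster in eu-west-1; the image tags and the regional ECR registry below are assumptions on my part, so take the correct values for your cluster version and region from the AWS docs:

    # Update kube-proxy to an image that matches the cluster version
    # (tag and registry below are assumptions; use the documented ones)
    kubectl set image daemonset.apps/kube-proxy -n kube-system \
        kube-proxy=602401143452.dkr.ecr.eu-west-1.amazonaws.com/eks/kube-proxy:v1.16.15-eksbuild.1

    # Update coredns the same way (again, the tag is an assumption)
    kubectl set image deployment.apps/coredns -n kube-system \
        coredns=602401143452.dkr.ecr.eu-west-1.amazonaws.com/eks/coredns:v1.6.6-eksbuild.1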

What I did notice, though, was that aws-node was still using the same CNI image (v1.6.3):

    kubectl describe daemonset aws-node --namespace kube-system | grep Image | cut -d "/" -f 2
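On my cluster that printed something like the line below (the cut strips the registry prefix, so only the image name and tag remain):

    amazon-k8s-cni:v1.6.3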

After further investigation, I had to manually upgrade the CNI version by following the instructions here.
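The manual upgrade essentially means applying the CNI manifest for the newer release from the amazon-vpc-cni-k8s repository. A rough sketch, assuming a v1.7.x target; the version and manifest path are assumptions, so take the exact URL from the upgrade instructions:

    # Download the VPC CNI manifest for the target release
    # (version and path below are assumptions; the manifest also defaults to a
    # us-west-2 image registry, which the instructions have you adjust if needed)
    curl -o aws-k8s-cni.yaml \
        https://raw.githubusercontent.com/aws/amazon-vpc-cni-k8s/v1.7.5/config/v1.7/aws-k8s-cni.yaml

    # Apply it and wait for the aws-node pods to roll
    kubectl apply -f aws-k8s-cni.yaml
    kubectl rollout status daemonset aws-node -n kube-system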

Lastly, I noticed that an aws-node pod was now being created for my arm64 node, which hadn't happened before. However, the pod's liveness probe was failing and the node was still stuck in NotReady status, so I also had to edit the configuration of the kube-proxy DaemonSet as described in step (3) of this guide.
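For completeness, that change boils down to editing the kube-proxy DaemonSet so it runs an image that also ships an arm64 build. A hedged sketch; the exact image/tag is whatever step (3) of the guide specifies:

    # Edit the kube-proxy DaemonSet and point its container image at an
    # arm64-capable build, per step (3) of the guide (the guide's image/tag
    # value is authoritative, so it is not shown here)
    kubectl edit daemonset kube-proxy -n kube-system

    # Then check that kube-proxy and aws-node are running on the arm64 node
    # and that the node eventually reports Ready
    kubectl get pods -n kube-system -o wide
    kubectl get nodes

After that edit, the node should move out of NotReady once the probes start passing.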