Kubernetes pod /etc/resolv.conf has the wrong nameserver

I have a 4-node cluster setup at home that I am playing with, and ran into a problem when I started trying to do pod-to-pod communications. I used Kubespray to install the nodes (1 "server/controller" and 3 "nodes").

The issue is that I can't resolve services by name, only by IP. For instance, I used Helm to spin up Jenkins in the default namespace with the service name "jenkins", but if I try to ping "jenkins" or "jenkins.default" it doesn't resolve. Running dig jenkins or dig jenkins.default in a dnsutils pod produces:

/ # dig jenkins.default

; <<>> DiG 9.11.6-P1 <<>> jenkins.default
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: REFUSED, id: 8927
;; flags: qr rd; QUERY: 1, ANSWER: 0, AUTHORITY: 0, ADDITIONAL: 1
;; WARNING: recursion requested but not available

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
; COOKIE: 229fbf94bb25564ae66dc8f45e7f646e5abb5f9dd4ede1d7 (good)
;; QUESTION SECTION:
;jenkins.default.       IN  A

;; Query time: 0 msec
;; SERVER: 169.254.25.10#53(169.254.25.10)
;; WHEN: Sat Mar 28 14:51:26 UTC 2020
;; MSG SIZE  rcvd: 72
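(For context, this is roughly how I created the test pod and ran the query; the dnsutils image is the one from the Kubernetes DNS debugging docs, so adjust if you use something else.)

kubectl run dnsutils --image=gcr.io/kubernetes-e2e-test-images/dnsutils:1.3 --restart=Never --command -- sleep 3600
kubectl exec -it dnsutils -- dig jenkins.default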

Checking the /etc/resolv.conf file in the dnsutils pod, I noticed it had a strange IP address set as the nameserver: 169.254.25.10. After looking at all the pods, it seems they all have that same config, but the coredns service is set to 10.233.0.3. In fact, all of the cluster IPs are 10.something. Manually changing /etc/resolv.conf in the dnsutils pod to use 10.233.0.3 as the nameserver seemed to correct it for that pod, but how do I fix it for ALL pods? And where did that 169.254.25.10 IP come from anyhow? My actual network DNS server is 10.0.0.5, and I have no 169.254 IPs in my internal network, as far as I know.
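For reference, these are roughly the checks I ran to compare the pod's resolver with the cluster DNS service (the exact service name may differ depending on how Kubespray deployed CoreDNS):

kubectl exec -it dnsutils -- cat /etc/resolv.conf
# nameserver 169.254.25.10

kubectl get svc -n kube-system | grep dns
# the CoreDNS service shows CLUSTER-IP 10.233.0.3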


As we can read in the Kubernetes docs, Customizing DNS Service:

If a Pod’s dnsPolicy is set to “default”, it inherits the name resolution configuration from the node that the Pod runs on. The Pod’s DNS resolution should behave the same as the node.

If you don’t want this, or if you want a different DNS config for pods, you can use the kubelet’s --resolv-conf flag. Set this flag to “” to prevent Pods from inheriting DNS. Set it to a valid file path to specify a file other than /etc/resolv.conf for DNS inheritance.
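A minimal sketch of the equivalent KubeletConfiguration field (the resolv.conf path below is illustrative, assuming systemd-resolved; on a Kubespray cluster the kubelet configuration is templated for you):

apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# File that Pods with dnsPolicy "Default" (and the upstream part of
# "ClusterFirst") inherit instead of the node's /etc/resolv.conf.
# Set it to "" to prevent Pods from inheriting DNS from the node.
resolvConf: /run/systemd/resolve/resolv.conf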

As for the Pod’s DNS Policy, the options are as follows:

DNS policies can be set on a per-pod basis. Currently Kubernetes supports the following pod-specific DNS policies. These policies are specified in the dnsPolicy field of a Pod Spec.

  • “Default”: The Pod inherits the name resolution configuration from the node that the Pod runs on. See the related discussion for more details.
  • “ClusterFirst”: Any DNS query that does not match the configured cluster domain suffix, such as “www.kubernetes.io”, is forwarded to the upstream nameserver inherited from the node. Cluster administrators may have extra stub-domain and upstream DNS servers configured. See the related discussion for details on how DNS queries are handled in those cases.
  • “ClusterFirstWithHostNet”: For Pods running with hostNetwork, you should explicitly set their DNS policy to “ClusterFirstWithHostNet”.
  • “None”: It allows a Pod to ignore DNS settings from the Kubernetes environment. All DNS settings are supposed to be provided via the dnsConfig field in the Pod spec. See the Pod’s DNS config subsection of the docs, and the example right after this list.
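For example, a minimal Pod spec using “None” plus an explicit dnsConfig could look like this (the nameserver and search domains are just examples matching the cluster in the question; adjust them to your environment):

apiVersion: v1
kind: Pod
metadata:
  name: dnsutils
spec:
  dnsPolicy: "None"              # ignore node/cluster DNS settings entirely
  dnsConfig:
    nameservers:
      - 10.233.0.3               # the CoreDNS service ClusterIP from the question
    searches:
      - default.svc.cluster.local
      - svc.cluster.local
      - cluster.local
    options:
      - name: ndots
        value: "5"
  containers:
    - name: dnsutils
      image: gcr.io/kubernetes-e2e-test-images/dnsutils:1.3
      command: ["sleep", "3600"]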

169.254.0.0/16 is the link-local address range. I have mostly seen addresses from this range get assigned when the link goes down, i.e. when an interface cannot obtain a proper address. More info: https://en.wikipedia.org/wiki/Link-local_address.