Why does kube-proxy authenticate to the kube-api-server using a service account instead of a TLS certificate
Solution 1:
To better understand this question, it is worth recalling the basic Kubernetes components.
Broadly, we can divide a cluster into:
- Master node (Control Plane) - responsible for making global decisions about the cluster
- Worker node(s) - responsible for running Pods by providing the Kubernetes runtime environment.
Let's take a look at the worker node components:
kubelet
An agent that runs on each node in the cluster. It makes sure that containers are running in a Pod. The kubelet takes a set of PodSpecs that are provided through various mechanisms and ensures that the containers described in those PodSpecs are running and healthy. The kubelet doesn't manage containers which were not created by Kubernetes.
kube-proxy
kube-proxy is a network proxy that runs on each node in your cluster, implementing part of the Kubernetes Service concept. kube-proxy maintains network rules on nodes. These network rules allow network communication to your Pods from network sessions inside or outside of your cluster. kube-proxy uses the operating system packet filtering layer if there is one and it's available. Otherwise, kube-proxy forwards the traffic itself.
Container runtime
The container runtime is the software that is responsible for running containers. Kubernetes supports several container runtimes: Docker, containerd, CRI-O, and any implementation of the Kubernetes CRI (Container Runtime Interface).
As the kubelet and the container runtime are the two main components responsible for running Pods and establishing the connection to the Control Plane, they must be installed directly on the node's OS. This also means the kubelet must have the TLS certificates you mentioned in your question to ensure a secure connection to the Control Plane. What about kube-proxy?
It can be installed in two ways during cluster provisioning: directly on the node's OS (this approach is used in Kubernetes The Hard Way) or as a DaemonSet (kubeadm).
When kube-proxy is installed directly, it also gets its own separately generated TLS certificates, just like the kubelet.
The second way is the "DaemonSet" mentioned in your question. It means that instead of running as an OS daemon directly on the node, kube-proxy is configured via a DaemonSet and runs as a pod on every node. Advantages over running directly on the OS:
- thanks to the DaemonSet controller, if the pod fails it is automatically re-created on the node
- less interference with the node's OS - instead of generating a new pair of TLS certificates, we just use a ServiceAccount (a quick check follows below)
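A quick way to see which of the two approaches a cluster uses is to check whether the kube-proxy DaemonSet references a ServiceAccount. This is a minimal sketch, assuming the kubeadm defaults shown later in this answer:

# On a kubeadm cluster this prints "kube-proxy" - the ServiceAccount used
# instead of a dedicated TLS client certificate.
kubectl get daemonset kube-proxy -n kube-system \
  -o jsonpath='{.spec.template.spec.serviceAccountName}'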
Answering your question:
What is special here about Daemonsets regarding authentication?
To better understand it, we can take a deeper look at kube-proxy configured via a DaemonSet in a cluster provisioned with kubeadm. Based on the Kubernetes docs:
A ServiceAccount for kube-proxy is created in the kube-system namespace; then kube-proxy is deployed as a DaemonSet:
- The credentials (ca.crt and token) to the control plane come from the ServiceAccount
- The location (URL) of the API server comes from a ConfigMap
- The kube-proxy ServiceAccount is bound to the privileges in the system:node-proxier ClusterRole
There are three points. Let's check the first one:
The credentials - secret
Get the ServiceAccount name from the DaemonSet definition:
kubectl get daemonset kube-proxy -n kube-system -o yaml
...
      serviceAccount: kube-proxy
      serviceAccountName: kube-proxy
...
As we can see, it has a ServiceAccount called kube-proxy assigned. Let's check it:
kubectl get sa kube-proxy -n kube-system -o yaml
Output:
apiVersion: v1
kind: ServiceAccount
metadata:
  creationTimestamp: "2021-08-16T14:14:56Z"
  name: kube-proxy
  namespace: kube-system
  resourceVersion: "259"
  uid: (UID)
secrets:
- name: kube-proxy-token-2qhph
As we can see, it refers to a Secret named kube-proxy-token-2qhph:
kubectl get secret kube-proxy-token-2qhph -n kube-system -o yaml
Output:
apiVersion: v1
data:
  ca.crt: (APISERVER'S CA BASE64 ENCODED)
  namespace: (NAMESPACE BASE64 ENCODED)
  token: (BEARER TOKEN BASE64 ENCODED)
kind: Secret
metadata:
  annotations:
    kubernetes.io/service-account.name: kube-proxy
    kubernetes.io/service-account.uid: ...
  creationTimestamp: "2021-08-16T14:14:56Z"
  name: kube-proxy-token-2qhph
  namespace: kube-system
  resourceVersion: "256"
  uid: (UID)
type: kubernetes.io/service-account-token
This secret contains:
The created secret holds the public CA of the API server and a signed JSON Web Token (JWT).
We use this JSON Web Token as a bearer token to verify requests:
A service account is an automatically enabled authenticator that uses signed bearer tokens to verify requests.
The signed JWT can be used as a bearer token to authenticate as the given service account. See above for how the token is included in a request. Normally these secrets are mounted into pods for in-cluster access to the API server, but can be used from outside the cluster as well.
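To make this concrete, here is a minimal sketch of how any pod (kube-proxy included) can use such a mounted token to authenticate to the API server. It assumes the default mount path and the in-cluster kubernetes.default.svc address; kube-proxy itself reads the API server URL from the ConfigMap shown in the next section:

# Run from inside a pod that has the ServiceAccount token mounted (the default).
SA_DIR=/var/run/secrets/kubernetes.io/serviceaccount
TOKEN=$(cat ${SA_DIR}/token)

# ca.crt verifies the API server's certificate; the JWT is sent as a bearer token.
curl --cacert ${SA_DIR}/ca.crt \
     -H "Authorization: Bearer ${TOKEN}" \
     https://kubernetes.default.svc/api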
For more information about bootstrap tokens, I'd recommend reading the following Kubernetes docs: Authenticating with Bootstrap Tokens, kubeadm token and Kubernetes RBAC 101: Authentication.
ConfigMap
By following similar steps as for the ServiceAccount name, we can get the name of the ConfigMap mounted into the kube-proxy pod:
kubectl get daemonset kube-proxy -n kube-system -o yaml
...
      volumes:
      - configMap:
          defaultMode: 420
          name: kube-proxy
...
Now, let's get the ConfigMap definition:
kubectl get cm kube-proxy -n kube-system -o yaml
kubeconfig.conf: |-
  apiVersion: v1
  kind: Config
  clusters:
  - cluster:
      certificate-authority: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
      server: https://10.230.0.12:6443
    name: default
  contexts:
  - context:
      cluster: default
      namespace: default
      user: default
    name: default
  current-context: default
  users:
  - name: default
    user:
      tokenFile: /var/run/secrets/kubernetes.io/serviceaccount/token
The IP address under server: is the API server address, so kube-proxy knows where to reach it. There are also references to ca.crt and token, which are mounted from the kube-proxy-token-2qhph secret.
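To confirm this, we can check that the ServiceAccount credentials are indeed mounted into the running kube-proxy pod at the path the kubeconfig points to. A minimal sketch, assuming kubeadm's default pod label k8s-app=kube-proxy:

# List the volume mounts of one kube-proxy pod; among them you should see
# mountPath: /var/run/secrets/kubernetes.io/serviceaccount
kubectl get pod -n kube-system -l k8s-app=kube-proxy \
  -o jsonpath='{.items[0].spec.containers[0].volumeMounts}'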
ClusterRole
Let's check the earlier mentioned ClusterRole - system:node-proxier:
kubectl describe clusterrole system:node-proxier
Name:         system:node-proxier
Labels:       kubernetes.io/bootstrapping=rbac-defaults
Annotations:  rbac.authorization.kubernetes.io/autoupdate: true
PolicyRule:
  Resources                        Non-Resource URLs  Resource Names  Verbs
  ---------                        -----------------  --------------  -----
  events                           []                 []              [create patch update]
  events.events.k8s.io             []                 []              [create patch update]
  nodes                            []                 []              [get list watch]
  endpoints                        []                 []              [list watch]
  services                         []                 []              [list watch]
  endpointslices.discovery.k8s.io  []                 []              [list watch]
We can see that this role can get, list and watch nodes, and list and watch endpoints, services, endpointslices, etc.
By describing the ClusterRoleBinding kubeadm:node-proxier, we can confirm that the system:node-proxier role is bound to the kube-proxy ServiceAccount:
kubectl describe clusterrolebinding kubeadm:node-proxier
Name:         kubeadm:node-proxier
Labels:       <none>
Annotations:  <none>
Role:
  Kind:  ClusterRole
  Name:  system:node-proxier
Subjects:
  Kind            Name        Namespace
  ----            ----        ---------
  ServiceAccount  kube-proxy  kube-system
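We can also verify the whole chain (ServiceAccount -> ClusterRoleBinding -> ClusterRole) by impersonating the kube-proxy ServiceAccount. A small sketch, assuming your own credentials allow impersonation (cluster-admin does):

# Allowed by system:node-proxier:
kubectl auth can-i watch endpoints --as=system:serviceaccount:kube-system:kube-proxy
# yes

# Not granted by the role:
kubectl auth can-i create deployments --as=system:serviceaccount:kube-system:kube-proxy
# no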
For more details I'd recommend reading the kubeadm implementation details.
Answering your second question:
What is the point behind "Since the kubelet itself is loaded on each node, and is sufficient to start base services" ?
It just means that the node has established a connection with the Control Plane (as the kubelet is the component responsible for that), so the Control Plane can start scheduling the kube-proxy pod on the node using the predefined container runtime.
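You can see the result of that scheduling - one kube-proxy pod per node - with a command like this (again assuming kubeadm's default label):

# One kube-proxy pod per node, created by the DaemonSet controller once the
# kubelet on that node has registered with the Control Plane.
kubectl get pods -n kube-system -l k8s-app=kube-proxy -o wide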