kubectl cannot authenticate with AWS EKS
Solution 1:
I needed to add my IAM user to the mapUsers section of the ConfigMap configmap/aws-auth, per these AWS docs.
You can edit the ConfigMap using the same AWS user that initially created the cluster.
$ kubectl edit -n kube-system configmap/aws-auth
apiVersion: v1
data:
  mapRoles: |
    - rolearn: arn:aws:iam::555555555555:role/devel-worker-nodes-NodeInstanceRole-74RF4UBDUKL6
      username: system:node:{{EC2PrivateDNSName}}
      groups:
        - system:bootstrappers
        - system:nodes
  mapUsers: |
    - userarn: arn:aws:iam::555555555555:user/admin
      username: admin
      groups:
        - system:masters
    - userarn: arn:aws:iam::111122223333:user/ops-user
      username: ops-user
      groups:
        - system:masters
  mapAccounts: |
    - "111122223333"
Solution 2:
Unfortunately, AWS doesn't yet have a command like GKE's "gcloud container clusters get-credentials", which creates the kubectl config for you. So you need to create the kubectl config file manually.
As mentioned in the Creating a kubeconfig for Amazon EKS document, you should get two things from the cluster:
- Retrieve the endpoint for your cluster. Use this for the <endpoint-url> in your kubeconfig file:
  aws eks describe-cluster --cluster-name <cluster-name> --query cluster.endpoint
- Retrieve the certificateAuthority.data for your cluster. Use this for the <base64-encoded-ca-cert> in your kubeconfig file:
  aws eks describe-cluster --cluster-name <cluster-name> --query cluster.certificateAuthority.data
  (Both lookups are combined in the sketch after this list.)
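As referenced above, you can capture both values into shell variables in one pass. This is only a sketch: CLUSTER_NAME, ENDPOINT and CA_DATA are hypothetical variable names, and recent AWS CLI releases spell the flag --name rather than --cluster-name:
# --output text strips the JSON quoting from the returned values.
CLUSTER_NAME=devel
ENDPOINT=$(aws eks describe-cluster --name "$CLUSTER_NAME" --query cluster.endpoint --output text)
CA_DATA=$(aws eks describe-cluster --name "$CLUSTER_NAME" --query cluster.certificateAuthority.data --output text)
echo "$ENDPOINT"  # quick check that the lookup worked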
Create the default kubectl folder if it does not already exist.
mkdir -p ~/.kube
Open your favorite text editor and paste the following kubeconfig code block into it.
apiVersion: v1
clusters:
- cluster:
    server: <endpoint-url>
    certificate-authority-data: <base64-encoded-ca-cert>
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: aws
  name: aws
current-context: aws
kind: Config
preferences: {}
users:
- name: aws
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      command: heptio-authenticator-aws
      args:
        - "token"
        - "-i"
        - "<cluster-name>"
        # - "-r"
        # - "<role-arn>"
      # env:
        # - name: AWS_PROFILE
        #   value: "<aws-profile>"
Replace the <endpoint-url> with the endpoint URL that was created for your cluster.
Replace the <base64-encoded-ca-cert> with the certificateAuthority.data that was created for your cluster.
Replace the <cluster-name> with your cluster name.
Save the file to the default kubectl folder, with your cluster name in the file name. For example, if your cluster name is devel, save the file to ~/.kube/config-devel.
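If you would rather script those substitutions than edit the file by hand, a rough sketch follows. It assumes the template was saved with its placeholders intact to ~/.kube/config-devel and that CLUSTER_NAME, ENDPOINT and CA_DATA were set as in the earlier sketch (GNU sed; on macOS use sed -i '' instead of sed -i):
# '|' as the sed delimiter avoids clashes with '/' in the URL and base64 data.
sed -i \
  -e "s|<endpoint-url>|${ENDPOINT}|" \
  -e "s|<base64-encoded-ca-cert>|${CA_DATA}|" \
  -e "s|<cluster-name>|${CLUSTER_NAME}|" \
  ~/.kube/config-devel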
Add that file path to your KUBECONFIG environment variable so that kubectl knows where to look for your cluster configuration.
export KUBECONFIG=$KUBECONFIG:~/.kube/config-devel
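To confirm that kubectl actually picked up the new file, listing the available contexts is a quick check (both are standard kubectl subcommands):
# The aws context defined in config-devel should show up here...
kubectl config get-contexts
# ...and this prints whichever context is currently active.
kubectl config current-context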
(Optional) Add the configuration to your shell initialization file so that it is configured when you open a shell.
For Bash shells on macOS:
echo 'export KUBECONFIG=$KUBECONFIG:~/.kube/config-devel' >> ~/.bash_profile
For Bash shells on Linux:
echo 'export KUBECONFIG=$KUBECONFIG:~/.kube/config-devel' >> ~/.bashrc
Test your configuration.
kubectl get svc
Output:
NAME             TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
svc/kubernetes   ClusterIP   10.100.0.1   <none>        443/TCP   1m
Note: If you receive the error "heptio-authenticator-aws": executable file not found in $PATH, then your kubectl is not configured for Amazon EKS. For more information, see Configure kubectl for Amazon EKS.
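A quick way to check whether the authenticator binary is reachable at all (note that heptio-authenticator-aws was later renamed aws-iam-authenticator, which is what Solution 3 uses):
# Prints the full path if the binary is on $PATH, otherwise reports it missing.
command -v heptio-authenticator-aws || echo "heptio-authenticator-aws not found in PATH"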
Solution 3:
Things have gotten a bit simpler over time. To get started on Linux (or indeed WSL) you will need to:
- Install the AWS CLI and configure valid AWS CLI credentials (aws configure, or e.g. use AWS SSO to generate time-limited credentials on the fly)
- Install eksctl and kubectl
- Install aws-iam-authenticator (a quick verification sketch follows this list)
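As referenced in the list above, you can confirm that all four tools are installed and on your PATH with their usual version commands:
# Each of these should print a version string if the install succeeded.
aws --version
eksctl version
kubectl version --client
aws-iam-authenticator version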
At this point, assuming you already have a running Kubernetes cluster in your AWS account, you can generate/update the kube configuration in $HOME/.kube/config with this one command:
aws eks update-kubeconfig --name test
Where test is your cluster name according to the AWS Console (or aws eks list-clusters).
You can now run, for instance, kubectl get svc without getting an error.
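If you need more control, update-kubeconfig also accepts a few optional flags; the region, profile, role ARN, and alias values below are placeholders:
# --region/--profile are the usual global AWS CLI options; --role-arn makes the
# generated kubeconfig assume that IAM role; --alias names the kubectl context.
aws eks update-kubeconfig \
  --name test \
  --region eu-west-1 \
  --profile my-profile \
  --role-arn arn:aws:iam::111122223333:role/eks-admin \
  --alias test-cluster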