AWS EKS update-kubeconfig does not respect --role-arn flag

A couple of suggestions that may or may not help:

  • You can add --verbose to your command to get more detail about where it fails. Could it be the case that the user you are authenticated as is not able to assume the specified role?

  • In the aws-cli manual, --role-arn is documented as a string, so try wrapping it in double quotes:

aws eks update-kubeconfig --name eks-cluster --role-arn "arn:aws:iam::999999999999:role/eksServiceRole"

  • Try to assume the role manually through aws-cli:

    1. Verify your current authenticated session: aws sts get-caller-identity

    2. Attempt to assume the role: aws sts assume-role --role-arn "arn:aws:iam::999999999999:role/eksServiceRole" --role-session-name test-eks-role
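
If the assume-role call in step 2 succeeds, you can export the temporary credentials it returns and re-check who you are, which confirms the role is actually usable from your session. A minimal sketch, reusing the placeholder role ARN and session name from above:

creds=$(aws sts assume-role \
  --role-arn "arn:aws:iam::999999999999:role/eksServiceRole" \
  --role-session-name test-eks-role \
  --query 'Credentials.[AccessKeyId,SecretAccessKey,SessionToken]' \
  --output text)

# Export the temporary credentials into the current shell
read -r AWS_ACCESS_KEY_ID AWS_SECRET_ACCESS_KEY AWS_SESSION_TOKEN <<< "$creds"
export AWS_ACCESS_KEY_ID AWS_SECRET_ACCESS_KEY AWS_SESSION_TOKEN

# Should now report the assumed role instead of your original user
aws sts get-caller-identity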


--role-arn is the role that aws-iam-authenticator will use when kubectl requests a token; it is only injected into the generated kubeconfig, and it is not used by the update-kubeconfig command itself to fetch any EKS resources.
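
For reference, the user entry that update-kubeconfig writes looks roughly like the following. This is a sketch: the exact apiVersion, and whether it invokes aws-iam-authenticator or aws eks get-token, depends on your aws-cli version, and the region is a placeholder.

users:
- name: arn:aws:eks:<region>:999999999999:cluster/eks-cluster
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      command: aws-iam-authenticator
      args:
        - "token"
        - "-i"
        - "eks-cluster"
        - "-r"
        - "arn:aws:iam::999999999999:role/eksServiceRole"

kubectl runs that exec command every time it needs a token, and that is the point where the role from --role-arn is actually assumed.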

The error you are hitting is because the AWS credentials you're using to run the update-kubeconfig command don't have permission to describe that cluster (the eks:DescribeCluster action).
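
Under the hood, update-kubeconfig calls eks:DescribeCluster to look up the cluster endpoint and certificate data. A minimal policy sketch for the identity shown by aws sts get-caller-identity, assuming the account ID and cluster name from your command and a placeholder region:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "eks:DescribeCluster",
      "Resource": "arn:aws:eks:<region>:999999999999:cluster/eks-cluster"
    }
  ]
}

Note that this only fixes the update-kubeconfig call itself; to actually use kubectl with the role from --role-arn, that role still has to be mapped in the cluster's aws-auth ConfigMap.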