How to Add Users to Kubernetes (kubectl)?
Solution 1:
For a full overview of authentication, refer to the official Kubernetes docs on Authentication and Authorization.
For users, ideally you use an identity provider for Kubernetes (OpenID Connect).
If you are on GKE / ACS, you integrate with the respective Identity and Access Management framework.
If you self-host Kubernetes (which is the case when you use kops), you may use coreos/dex to integrate with LDAP / OAuth2 identity providers - a good reference is this detailed 2-part SSO for Kubernetes article.
kops (1.10+) now has built-in authentication support, which eases the integration with AWS IAM as identity provider if you're on AWS.
For Dex there are a few open-source CLI clients:
- Nordstrom/kubelogin
- pusher/k8s-auth-example
If you are looking for a quick and easy way to get started (not the most secure, nor the easiest to manage in the long run), you may abuse service accounts - with 2 options for specialised policies to control access (see below).
NOTE: since 1.6, Role-Based Access Control is strongly recommended! This answer does not cover RBAC setup.
EDIT: A great, but outdated (2017-2018), guide by Bitnami on user setup with RBAC is also available.
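While this answer does not cover RBAC setup, a hedged one-liner for the common case: on an RBAC cluster, a service account (such as the `alice` account created in the steps below) can be given read-only access via the built-in `view` ClusterRole. The binding name here is illustrative:

```shell
# Bind the built-in "view" ClusterRole to the alice service account,
# scoped to the default namespace (binding name is illustrative)
kubectl create rolebinding alice-view \
  --clusterrole=view \
  --serviceaccount=default:alice \
  --namespace=default
```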
Steps to enable service account access (depending on whether your cluster configuration includes RBAC or ABAC policies, these accounts may have full admin rights!):
EDIT: Here is a bash script to automate service account creation - see the steps below.
- Create service account for user Alice

      kubectl create sa alice

- Get related secret

      secret=$(kubectl get sa alice -o json | jq -r .secrets[].name)

- Get ca.crt from secret (using macOS base64 with the -D flag for decode; on Linux use -d)

      kubectl get secret $secret -o json | jq -r '.data["ca.crt"]' | base64 -D > ca.crt

- Get service account token from secret

      user_token=$(kubectl get secret $secret -o json | jq -r '.data["token"]' | base64 -D)

- Get information from your kubectl config (current-context, server, ...)

      # get current context
      c=$(kubectl config current-context)
      # get cluster name of context
      name=$(kubectl config get-contexts $c | awk '{print $3}' | tail -n 1)
      # get endpoint of current context
      endpoint=$(kubectl config view -o jsonpath="{.clusters[?(@.name == \"$name\")].cluster.server}")
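The extraction steps above can be collected into a single script - a minimal sketch, assuming jq is installed and macOS base64 (swap -D for -d on Linux); the account name is illustrative:

```shell
#!/usr/bin/env bash
set -euo pipefail

SA_NAME="alice"  # illustrative service account name

# 1. create the service account
kubectl create sa "$SA_NAME"

# 2. find the secret backing the service account
secret=$(kubectl get sa "$SA_NAME" -o json | jq -r .secrets[].name)

# 3. extract the cluster CA certificate (base64 -D on macOS, -d on Linux)
kubectl get secret "$secret" -o json | jq -r '.data["ca.crt"]' | base64 -D > ca.crt

# 4. extract the bearer token
user_token=$(kubectl get secret "$secret" -o json | jq -r '.data["token"]' | base64 -D)

# 5. read the cluster endpoint from the current kubectl context
c=$(kubectl config current-context)
name=$(kubectl config get-contexts "$c" | awk '{print $3}' | tail -n 1)
endpoint=$(kubectl config view -o jsonpath="{.clusters[?(@.name == \"$name\")].cluster.server}")

echo "token:    $user_token"
echo "endpoint: $endpoint"
```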
On a fresh machine, follow these steps (given the ca.crt and $endpoint information retrieved above):

- Install kubectl

      brew install kubectl

- Set cluster (run in the directory where ca.crt is stored)

      kubectl config set-cluster cluster-staging \
        --embed-certs=true \
        --server=$endpoint \
        --certificate-authority=./ca.crt

- Set user credentials

      kubectl config set-credentials alice-staging --token=$user_token

- Define the combination of the alice user with the staging cluster

      kubectl config set-context alice-staging \
        --cluster=cluster-staging \
        --user=alice-staging \
        --namespace=alice

- Switch current-context to alice-staging for the user

      kubectl config use-context alice-staging
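At this point you can sanity-check that the new context is active and that the token works (the namespace follows the example above):

```shell
# should print alice-staging
kubectl config current-context

# list pods in the alice namespace using the service account token
kubectl get pods --namespace=alice
```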
To control user access with policies (using ABAC), you need to create a policy file, for example:

    {
      "apiVersion": "abac.authorization.kubernetes.io/v1beta1",
      "kind": "Policy",
      "spec": {
        "user": "system:serviceaccount:default:alice",
        "namespace": "default",
        "resource": "*",
        "readonly": true
      }
    }

Provision this policy.json on every master node and add the --authorization-mode=ABAC and --authorization-policy-file=/path/to/policy.json flags to the API servers.

This would allow Alice (through her service account) read-only rights to all resources in the default namespace only.
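To check what the policy actually allows, kubectl auth can-i asks the API server to evaluate its authorization rules for the current user - this sketch assumes the alice-staging context from the steps above is active:

```shell
# expected to be allowed by the read-only policy
kubectl auth can-i list pods --namespace=default

# expected to be denied ("readonly": true blocks writes)
kubectl auth can-i create deployments --namespace=default
```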
Solution 2:
You say:

> I need to enable other users to also administer.

But according to the documentation:

> Normal users are assumed to be managed by an outside, independent service. An admin distributing private keys, a user store like Keystone or Google Accounts, even a file with a list of usernames and passwords. In this regard, Kubernetes does not have objects which represent normal user accounts. Regular users cannot be added to a cluster through an API call.
You have to use a third party tool for this.
== Edit ==
One solution could be to manually create a user entry in the kubeconfig file. From the documentation:
    # create kubeconfig entry
    $ kubectl config set-cluster $CLUSTER_NICK \
        --server=https://1.1.1.1 \
        --certificate-authority=/path/to/apiserver/ca_file \
        --embed-certs=true \
        --kubeconfig=/path/to/standalone/.kube/config
    # Or, if TLS is not needed, replace --certificate-authority and
    # --embed-certs with --insecure-skip-tls-verify=true

    # create user entry; use either a bearer token (generated on the
    # kube master) or username/password, not both
    $ kubectl config set-credentials $USER_NICK \
        --token=$token \
        --client-certificate=/path/to/crt_file \
        --client-key=/path/to/key_file \
        --embed-certs=true \
        --kubeconfig=/path/to/standalone/.kube/config
    # or, with basic auth instead of a token:
    #   --username=$username --password=$password

    # create context entry
    $ kubectl config set-context $CONTEXT_NAME \
        --cluster=$CLUSTER_NICK \
        --user=$USER_NICK \
        --kubeconfig=/path/to/standalone/.kube/config
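Once the entries exist, the standalone kubeconfig can be handed to the other user, who points kubectl at it - for example:

```shell
# activate the new context in the standalone config
kubectl config use-context $CONTEXT_NAME \
  --kubeconfig=/path/to/standalone/.kube/config

# use that config for all subsequent kubectl commands in this shell
export KUBECONFIG=/path/to/standalone/.kube/config
kubectl get nodes
```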