How to run kubectl commands inside a container?
I would use the Kubernetes API; you just need to install curl instead of kubectl, and the rest is RESTful.
curl http://localhost:8080/api/v1/namespaces/default/pods
I'm running the above command on one of my API servers. Change localhost to the API server's IP address or DNS name.
Depending on your configuration, you may need to use SSL or provide a client certificate.
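For example, a sketch of a certificate-authenticated call; the certificate file names, the <apiserver> host, and port 6443 are placeholders for your cluster's own values:

curl --cacert ca.crt \
  --cert client.crt \
  --key client.key \
  https://<apiserver>:6443/api/v1/namespaces/default/pods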
To find API endpoints, you can use --v=8 with kubectl. Example:

kubectl get pods --v=8
Resources:
Kubernetes API documentation
Update for RBAC:
I assume you have already configured RBAC, created a service account for your pod, and run the pod with that account. The service account should have list permission on pods in the required namespace; to grant that, you need to create a role and a role binding for the service account, as sketched below.
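A minimal sketch of such a role and binding; the names pod-reader, read-pods, and my-service-account are placeholders, as is the default namespace:

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader            # placeholder name
  namespace: default
rules:
  - apiGroups: [""]           # "" is the core API group, where pods live
    resources: ["pods"]
    verbs: ["get", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods             # placeholder name
  namespace: default
subjects:
  - kind: ServiceAccount
    name: my-service-account  # placeholder; use your pod's service account
    namespace: default
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io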
Every container in a cluster is populated with a token that can be used to authenticate to the API server. To verify, run inside the container:
cat /var/run/secrets/kubernetes.io/serviceaccount/token
To make a request to the API server, run inside the container:
curl -ik \
-H "Authorization: Bearer $(cat /var/run/secrets/kubernetes.io/serviceaccount/token)" \
https://kubernetes.default.svc.cluster.local/api/v1/namespaces/default/pods
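Note that -k disables TLS verification. The same mounted directory also contains the cluster CA certificate, so a stricter variant of the call can verify the API server's certificate instead:

curl -i \
  --cacert /var/run/secrets/kubernetes.io/serviceaccount/ca.crt \
  -H "Authorization: Bearer $(cat /var/run/secrets/kubernetes.io/serviceaccount/token)" \
  https://kubernetes.default.svc.cluster.local/api/v1/namespaces/default/pods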
Bit late to the party here, but this is my two cents: I've found using kubectl within a container much easier than calling the cluster's API. (Why? Auto authentication!)

Say you're deploying a Node.js project that needs kubectl usage.
- Download & build kubectl inside the container
- Build your application, copying kubectl to your container (a Dockerfile sketch of these two steps follows below)
- Voila! kubectl provides a rich CLI for managing your Kubernetes cluster
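A minimal Dockerfile sketch of those steps; the Node.js base image, the app layout, and the pinned kubectl version v1.28.0 are placeholder assumptions, not part of the original setup:

FROM node:18-slim
# Fetch a pinned kubectl release from the official download host
ADD https://dl.k8s.io/release/v1.28.0/bin/linux/amd64/kubectl /usr/local/bin/kubectl
RUN chmod +x /usr/local/bin/kubectl
WORKDIR /app
COPY . .
RUN npm ci
CMD ["node", "index.js"]

Pinning the version keeps builds reproducible; inside the pod, kubectl then falls back to the mounted service account for authentication automatically.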
Helpful documentation
--- EDITS ---
After working with kubectl in my cluster pods, I found a more effective way to authenticate pods to make k8s API calls. This method provides stricter authentication.
- Create a ServiceAccount for your pod, and configure your pod to use said account. (k8s Service Account docs)
- Configure a RoleBinding or ClusterRoleBinding to give the service account authorization to communicate with the k8s API. (k8s Role Binding docs; see the verification sketch after this list)
- Call the API directly, or use the k8s-client to manage API calls for you. I HIGHLY recommend using the client: it has automatic in-cluster configuration for pods, which removes the authentication-token step required with plain requests.
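As a quick check that the binding took effect, a sketch using kubectl impersonation (run it with credentials that are allowed to impersonate); the service account name k8s-101-role and the default namespace match the all-in-one example below:

kubectl auth can-i list pods --as=system:serviceaccount:default:k8s-101-role

If this prints yes, the service account is authorized to list pods.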
When you're done, you will have the following: a ServiceAccount, a ClusterRoleBinding, and a Deployment (your pods).
Feel free to comment if you need some clearer direction; I'll try to help out as much as I can :)
All-in-one example:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: k8s-101
spec:
  replicas: 3
  selector:
    matchLabels:
      app: k8s-101
  template:
    metadata:
      labels:
        app: k8s-101
    spec:
      serviceAccountName: k8s-101-role
      containers:
        - name: k8s-101
          imagePullPolicy: Always
          image: salathielgenese/k8s-101
          ports:
            - name: app
              containerPort: 3000
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: k8s-101-role
subjects:
  - kind: ServiceAccount
    name: k8s-101-role
    namespace: default
roleRef:
  kind: ClusterRole
  name: cluster-admin  # cluster-admin grants full access to the whole cluster
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: k8s-101-role
The salathielgenese/k8s-101 image contains kubectl. So one can just log into a pod container & execute kubectl as if they were running it on the k8s host:

kubectl exec -it <pod-name> -- kubectl get pods