kubeadm token create fails on self-signed CA cert

I am trying to deploy a k8s cluster with kubespray on top of an OpenStack cluster of Ubuntu servers. The install fails when kubeadm tries to initialize the cloud provider by submitting a POST request to the Keystone endpoint xxx:5000/v3/ to create the bootstrap token. The kubelet.service fails to start because the Keystone endpoint uses a self-signed cert; see the stack trace below.

I saved the CA cert from the Keystone endpoint and placed it on the master node in /etc/kubernetes/ssl/, where kubelet and kubeadm look for certificates, and I have also updated /etc/kubernetes/kubeadm-config.yaml. Based on the documentation here and here, I have updated the kubeadm join-defaults configuration to include 'unsafeSkipCAVerification: true', but kubelet.service still fails on the self-signed cert. kubeadm should be authenticating via the username/password stored in the /etc/kubernetes/cloud_config file, and I have verified that those values are correct. I am not sure where else to look to change this behavior. Any guidance would be greatly appreciated.
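
For reference, my understanding is that the in-tree OpenStack provider's [Global] section accepts a ca-file option for pointing it at a custom CA. A minimal sketch of my /etc/kubernetes/cloud_config as it stands, with placeholder credential values (keystone-ca.pem is a hypothetical filename for the cert I saved from the endpoint):

# /etc/kubernetes/cloud_config -- sketch, placeholder values
[Global]
auth-url=https://XXX.XXX.XXX.132:5000/v3
username=placeholder-user
password=placeholder-password
tenant-name=placeholder-project
domain-name=Default
region=RegionOne
# hypothetical path to the CA cert saved from the Keystone endpoint
ca-file=/etc/kubernetes/ssl/keystone-ca.pem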

ubuntu:/etc/kubernetes# kubeadm config print join-defaults
apiVersion: kubeadm.k8s.io/v1beta3
caCertPath: /etc/kubernetes/pki/ca.crt
discovery:
  bootstrapToken:
    apiServerEndpoint: kube-apiserver:6443
    token: abcdef.0123456789abcdef
    unsafeSkipCAVerification: true
  timeout: 5m0s
  tlsBootstrapToken: abcdef.0123456789abcdef
kind: JoinConfiguration
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  imagePullPolicy: IfNotPresent
  name: mdap-node-01
  taints: null

kubelet stack trace:

 Dec 15 22:19:51 ubuntu kubelet[388780]: E1215 22:19:51.760564  388780 server.go:294] "Failed to run kubelet" err="failed to run Kubelet: could not init cloud provider \"openstack\": Post \"https://XXX.XXX.XXX.132:5000/v3/auth/tokens\": x509: certificate signed by unknown authority"
 Dec 15 22:19:51 ubuntu systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
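
While debugging the x509 error I also inspected the cert and added it to the OS trust store by hand; a sketch of what I ran (keystone-ca.crt is just my name for the saved cert, and I am not certain kubelet consults the system bundle here):

# Show the certificate chain the Keystone endpoint presents
openssl s_client -connect XXX.XXX.XXX.132:5000 -showcerts </dev/null

# Add the saved self-signed CA to the Ubuntu system trust store
sudo cp keystone-ca.crt /usr/local/share/ca-certificates/keystone-ca.crt
sudo update-ca-certificates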


FAILED - RETRYING: Create kubeadm token for joining nodes with 24h expiration (default) (4 retries left). Result was: {
"attempts": 2,
"changed": false,
"cmd": [
    "/usr/local/bin/kubeadm",
    "--kubeconfig",
    "/etc/kubernetes/admin.conf",
    "token",
    "create"
],
"delta": "0:01:15.035670",
"end": "2021-12-16 15:03:22.901080",
"invocation": {
    "module_args": {
        "_raw_params": "/usr/local/bin/kubeadm --kubeconfig /etc/kubernetes/admin.conf token create",
        "_uses_shell": false,
        "argv": null,
        "chdir": null,
        "creates": null,
        "executable": null,
        "removes": null,
        "stdin": null
        "stdin_add_newline": true,
        "strip_empty_ends": true,
        "warn": true
    }
},
"msg": "non-zero return code",
"rc": 1,
"retries": 6,
"start": "2021-12-16 15:02:07.865410",
"stderr": "timed out waiting for the condition\nTo see the stack trace of this error execute with --v=5 or higher",
"stderr_lines": [
    "timed out waiting for the condition",
    "To see the stack trace of this error execute with --v=5 or higher"
],
"stdout": "",
"stdout_lines": []

Solution 1:

To clarify, I am posting a community wiki answer.

To solve this issue, you removed the OpenStack cloud provider settings. After that, you were able to install the k8s cluster successfully with kubespray.
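
In kubespray this typically means leaving the cloud_provider variable unset in the inventory group vars; a sketch, assuming the standard sample inventory layout (the exact path depends on how your inventory was generated):

# inventory/mycluster/group_vars/all/all.yml (sketch)
# Leaving cloud_provider commented out means kubelet is started without
# --cloud-provider=openstack, so it never contacts the Keystone endpoint.
# cloud_provider: openstack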

To read about the cert: as I mentioned before, the documentation about Certificate Management is under this link. To check whether a certificate is externally managed, you can use the following command:

kubeadm certs check-expiration
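
Illustrative output (the values here are made up; what matters is the EXTERNALLY MANAGED column):

CERTIFICATE                EXPIRES                  RESIDUAL TIME   CERTIFICATE AUTHORITY   EXTERNALLY MANAGED
admin.conf                 Dec 15, 2022 22:19 UTC   364d            ca                      no
apiserver                  Dec 15, 2022 22:19 UTC   364d            ca                      no
apiserver-kubelet-client   Dec 15, 2022 22:19 UTC   364d            ca                      no
...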