Kubernetes Credential Recovery
The credentials for accessing a Kubernetes cluster with kubectl are contained in the kubeconfig file, typically located at ~/.kube/config.
What if this file were lost due to a crashed hard drive, accidental deletion, or other cause? How do you recover your kubeconfig file?
Usually there will be at least two ways to recover the file:
– by manually reconstructing it piece by piece
– with a cloud-provider shortcut that generates the entire file
Table of Contents:
EKS
GKE
AKS
kops
microk8s
minikube
kubeadm
kubespray
kube-aws
openshift
rancher
EKS
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: <CA-CERT>
    server: <API-SERVER-ENDPOINT>
  name: <CLUSTERNAME>
contexts:
- context:
    cluster: <CLUSTERNAME>
    user: <USERNAME>
  name: <USERNAME>
current-context: <USERNAME>
kind: Config
preferences: {}
users:
- name: <USERNAME>
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      args:
      - eks
      - get-token
      - --cluster-name
      - <CLUSTERNAME>
      - --region
      - <REGION>
      command: aws
      env: null
Manual Procedure:
Copy the above text to ~/.kube/config and substitute the variables:
<CA-CERT> from the AWS console (or the AWS CLI, as shown below)
<API-SERVER-ENDPOINT> from the AWS console (or the AWS CLI, as shown below)
<CLUSTERNAME> from the AWS console
<REGION> from the AWS console
<USERNAME> arbitrary value
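If console access is inconvenient, the same values can be read with the AWS CLI. A minimal sketch, assuming the cluster name and region are known (the query paths are standard fields of the aws eks describe-cluster output):
# Endpoint and base64-encoded CA for the cluster (values print to stdout)
aws eks describe-cluster --name <CLUSTERNAME> --region <REGION> \
  --query "cluster.endpoint" --output text
aws eks describe-cluster --name <CLUSTERNAME> --region <REGION> \
  --query "cluster.certificateAuthority.data" --output text
# List cluster names if <CLUSTERNAME> itself has been forgotten
aws eks list-clusters --region <REGION>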
Recommended Solution:
Either of these commands will produce the kubeconfig file:
aws eks --region <REGION> update-kubeconfig --name <CLUSTERNAME>
# or
eksctl utils write-kubeconfig --cluster <CLUSTERNAME>
GKE
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: <CA-CERT>
    server: https://<CLUSTER-IP>
  name: <CLUSTERNAME>
contexts:
- context:
    cluster: <CLUSTERNAME>
    user: <USERNAME>
  name: <CLUSTERNAME>
current-context: <CLUSTERNAME>
kind: Config
preferences: {}
users:
- name: <USERNAME>
  user:
    auth-provider:
      config:
        access-token: <ACCESS-TOKEN>
        cmd-args: config config-helper --format=json
        cmd-path: /google/google-cloud-sdk/bin/gcloud
        expiry: <EXPIRY>
        expiry-key: '{.credential.token_expiry}'
        token-key: '{.credential.access_token}'
      name: gcp
Manual Procedure:
Copy the above text to ~/.kube/config and substitute the variables:
<CA-CERT> This can be found in the GCP Console: Cluster->Details->Endpoint->"Show cluster certificate". The certificate should be base64-encoded before placing it in the config, e.g. with "base64 -w 0 <ca-file>". (It can also be read via gcloud, as shown below.)
<CLUSTER-IP> This can be found in the GCP Console: Cluster->Details->Endpoint->IP (or via gcloud, as shown below).
<CLUSTERNAME> arbitrary. May be named anything.
<USERNAME> arbitrary. May be named anything.
<EXPIRY> in the format "2020-01-16T16:23:48Z"
<ACCESS-TOKEN> Can be found as a field in the output of the command "/google/google-cloud-sdk/bin/gcloud config config-helper --format=json". However, you may omit the entire line "access-token: <ACCESS-TOKEN>" and it will be regenerated automatically.
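As with EKS, the endpoint and CA can be pulled with the CLI instead of the console. A rough sketch, assuming gcloud is already authenticated (masterAuth.clusterCaCertificate is returned already base64-encoded):
# Cluster endpoint (the <CLUSTER-IP> value)
gcloud container clusters describe <CLUSTERNAME> --zone <ZONE> \
  --format="value(endpoint)"
# Base64-encoded cluster CA (the <CA-CERT> value)
gcloud container clusters describe <CLUSTERNAME> --zone <ZONE> \
  --format="value(masterAuth.clusterCaCertificate)"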
Recommended Solution:
There's no need to compose the kubeconfig from scratch; it can be generated with the following command:
gcloud container clusters get-credentials <CLUSTERNAME> --zone <ZONE> --project <PROJECT>
That command itself can be found in the GCP Console next to the cluster name. Click on “connect”.
More Information:
GKE uses an in-tree authentication plugin, with the auth-provider format shown above.
AKS
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: <CA-CERT>
    server: https://<API-SERVER>:443
  name: <CLUSTERNAME>
contexts:
- context:
    cluster: <CLUSTERNAME>
    user: <USERNAME>
  name: <CLUSTERNAME>
current-context: <CLUSTERNAME>
kind: Config
preferences: {}
users:
- name: <USERNAME>
  user:
    client-certificate-data: <CLIENT-CERT>
    client-key-data: <CLIENT-KEY>
    token: <CLIENT-TOKEN>
Manual Procedure:
Copy the above text to ~/.kube/config and substitute the variables:
<CA-CERT>
<API-SERVER> In the Azure Console, Cluster->Properties->API Server Address
<USERNAME> clusterUser_<RESOURCEGROUP>_<CLUSTERNAME>
<CLUSTERNAME> from Azure Console
<CLIENT-CERT>
<CLIENT-KEY>
<CLIENT-TOKEN>
Because the Azure-managed master nodes are not accessible, it isn't possible to retrieve the components needed to piece a kubeconfig together by hand. The Azure CLI is required (see the recommended solution below).
Recommended Solution:
az aks get-credentials --resource-group <RESOURCEGROUP> --name <CLUSTERNAME>
More Info:
Authentication is done with <CLIENT-CERT> and <CLIENT-KEY>; removing <CLIENT-TOKEN> did not prevent connecting to the cluster. A quick way to inspect the client certificate that az writes into the kubeconfig is sketched below.
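A small sketch, assuming the AKS user is the first (or only) user entry in the generated kubeconfig:
# Dump the embedded client certificate and show its subject and expiry
kubectl config view --raw \
  -o jsonpath='{.users[0].user.client-certificate-data}' \
  | base64 -d | openssl x509 -noout -subject -enddate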
kops
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: <CA-CERT>
    server: <API-SERVER>
  name: <CLUSTERNAME>
contexts:
- context:
    cluster: <CLUSTERNAME>
    user: <USERNAME>
  name: <CLUSTERNAME>
current-context: <CLUSTERNAME>
kind: Config
preferences: {}
users:
- name: <USERNAME>
  user:
    client-certificate-data: <CLIENT-CERT>
    client-key-data: <CLIENT-CERT-KEY>
    password: <PASSWORD>
    username: <USERNAME>
Manual Procedure:
Copy the above text to ~/.kube/config and substitute the variables.
The source of the information being plugged in is the API server, not the local workstation, so you should SSH into the API server for most of these steps.
<CA-CERT>
base64 -w 0 /srv/kubernetes/ca.crt > ca.crt.base64
The contents of ca.crt.base64 are <CA-CERT>.
<API-SERVER>
Prefix the cluster name with "api.", e.g. https://api.cluster.example.com
<USERNAME>
Run cat /srv/kubernetes/basic_auth.csv and take the second field. It is most likely "admin", and may simply be set to "admin" if setting things up from scratch.
<CLIENT-CERT> and <CLIENT-CERT-KEY>
Run this script:
KUBECERTDIR=/srv/kubernetes
CLIENTDIR=client1
mkdir -p $KUBECERTDIR/$CLIENTDIR
cd $KUBECERTDIR/$CLIENTDIR
openssl req -out CSR.csr -new -newkey rsa:2048 -nodes -keyout privateKey.key
cd ..
openssl x509 -req -days 3600 -in client1/CSR.csr -CA ca.crt -CAkey ca.key -CAcreateserial -out client1/client.crt -sha256
cd client1
base64 -w 0 client.crt > base64client.txt
base64 -w 0 privateKey.key > base64key.txt
echo "The files $KUBECERTDIR/$CLIENTDIR/base64client.txt and $KUBECERTDIR/$CLIENTDIR/base64key.txt contain <CLIENT-CERT> and <CLIENT-CERT-KEY>"
<PASSWORD> and <USERNAME>
These values must correspond to fields one and two of /srv/kubernetes/basic_auth.csv. They can be set on both ends, as long as they match.
Either client certificates or username/password will work; only one of the two is needed. A quick way to test the new client certificate is sketched below.
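A quick sketch for exercising the freshly signed certificate directly against the API server, assuming the files produced by the script above and the api.<clustername> endpoint:
# Test the new client cert without touching ~/.kube/config
kubectl --server=https://api.cluster.example.com \
  --certificate-authority=/srv/kubernetes/ca.crt \
  --client-certificate=/srv/kubernetes/client1/client.crt \
  --client-key=/srv/kubernetes/client1/privateKey.key \
  get nodes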
Recommended Solution:
Don’t create the kubeconfig from scratch. Instead, use these commands.
kops get cluster
kops export kubecfg --name <CLUSTERNAME>
microk8s
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: <CA-CERT>
    server: https://127.0.0.1:16443
  name: microk8s-cluster
contexts:
- context:
    cluster: microk8s-cluster
    user: <USERNAME>
  name: microk8s
current-context: microk8s
kind: Config
preferences: {}
users:
- name: <USERNAME>
  user:
    password: <PASSWORD>
    username: <USERNAME>
Manual Procedure:
Copy the above text to ~/.kube/config and substitute the variables.
<USERNAME> from the --basic-auth-file, /var/snap/microk8s/current/credentials/basic_auth.csv, typically "admin"
<PASSWORD> from the --basic-auth-file, /var/snap/microk8s/current/credentials/basic_auth.csv
<CA-CERT> from /var/snap/microk8s/current/certs/ca.crt, base64-encoded (a sketch for pulling all three values follows below)
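A minimal sketch for reading those values on the microk8s host, assuming the default snap paths and the standard password,user,uid layout of the basic-auth file:
# Field 1 of basic_auth.csv is the password, field 2 is the username
BASIC_AUTH=/var/snap/microk8s/current/credentials/basic_auth.csv
PASSWORD=$(cut -d, -f1 "$BASIC_AUTH")
USERNAME=$(cut -d, -f2 "$BASIC_AUTH")
# certificate-authority-data must be base64-encoded on a single line
CA_CERT=$(base64 -w 0 /var/snap/microk8s/current/certs/ca.crt)
echo "user=$USERNAME pass=$PASSWORD"
echo "ca=$CA_CERT"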
Recommended Solution:
The command microk8s.kubectl is already preconfigured, and doesn’t need to be set up. Run “microk8s.kubectl”.
To automatically dump the config so that standard “kubectl” will find it:
sudo microk8s.kubectl config view --raw > $HOME/.kube/config
minikube
apiVersion: v1
clusters:
- cluster:
    certificate-authority: <CA-CERT>
    server: https://<SERVER-IP>:8443
  name: <CLUSTERNAME>
contexts:
- context:
    cluster: <CLUSTERNAME>
    user: <USERNAME>
  name: <CLUSTERNAME>
current-context: <CLUSTERNAME>
kind: Config
preferences: {}
users:
- name: <USERNAME>
  user:
    client-certificate: <CLIENTCERT>
    client-key: <CLIENTKEY>
Manual Procedure:
Copy the above text to ~/.kube/config and substitute the variables:
<CA-CERT> found in ~/.minikube/ca.crt locally, or /var/lib/minikube/certs/ca.crt on the VM. The field should look like this: “certificate-authority: /home/myuser/.minikube/ca.crt”
<SERVER-IP> run the command "virsh net-dhcp-leases minikube-net" (when using the KVM/libvirt driver)
<CLUSTERNAME> arbitrary, usually use “minikube”
<USERNAME> arbitrary, usually use “minikube”
<CLIENTCERT> found in ~/.minikube/client.crt locally. The field should look like this: “client-certificate: /home/myuser/.minikube/client.crt”
<CLIENTKEY> found in ~/.minikube/client.key locally. The field should look like this: “client-key: /home/myuser/.minikube/client.key”
You could use a technique similar to the kubespray section to generate new client certs and keys. SSH into the node with "minikube ssh"; the certs are in /var/lib/minikube/certs/. A couple of quick sanity checks are sketched below.
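A short sketch, assuming a running minikube VM (minikube ip is an alternative to the virsh query above for finding <SERVER-IP>):
# Print the VM's IP address (the <SERVER-IP> value)
minikube ip
# Confirm the referenced cert and key files are still present locally
ls -l ~/.minikube/ca.crt ~/.minikube/client.crt ~/.minikube/client.key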
Recommended Solution:
It’s not necessary to piece together the kubeconfig. Just run “minikube start” and it will regenerate the file.
kubeadm
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: <CA-CERT>
    server: https://<CLUSTERIP>:6443
  name: <CLUSTERNAME>
contexts:
- context:
    cluster: <CLUSTERNAME>
    user: <USERNAME>
  name: <USERNAME>@<CLUSTERNAME>
current-context: <USERNAME>@<CLUSTERNAME>
kind: Config
preferences: {}
users:
- name: <USERNAME>
  user:
    client-certificate-data: <CLIENT-CERT>
    client-key-data: <CLIENT-KEY>
Manual Procedure:
Copy the above text to ~/.kube/config and substitute the variables. The source files can be found on the master node of the cluster.
<CA-CERT>
Found in /etc/kubernetes/pki/ca.crt. It needs to be base64-encoded.
Run this script:
#!/bin/bash
CAPATH=/etc/kubernetes/pki/ca.crt
base64 -w 0 $CAPATH > ca.crt.base64
# The file ca.crt.base64 now contains the CA-CERT
<CLUSTERIP> IP address of any master node
<CLUSTERNAME> “kubernetes” (or any other name)
<USERNAME> “kubernetes-admin” (or any other name)
<CLIENT-CERT> and <CLIENT-KEY>
Run this script:
#!/bin/bash
# Set these answers during the cert creation:
#   Organization Name (eg, company) [Internet Widgits Pty Ltd]: system:masters
#   Common Name (e.g. server FQDN or YOUR name) []: admin
KUBECERTDIR=/etc/kubernetes/pki
CLIENTDIR=client1
mkdir -p $KUBECERTDIR/$CLIENTDIR
cd $KUBECERTDIR/$CLIENTDIR
openssl req -out CSR.csr -new -newkey rsa:2048 -nodes -keyout privateKey.key
cd ..
openssl x509 -req -days 3600 -in $CLIENTDIR/CSR.csr -CA ca.crt -CAkey ca.key -CAcreateserial -out $CLIENTDIR/client.crt -sha256
cd $CLIENTDIR
base64 -w 0 client.crt > base64client.txt
base64 -w 0 privateKey.key > base64key.txt
echo "The files $KUBECERTDIR/$CLIENTDIR/base64client.txt and $KUBECERTDIR/$CLIENTDIR/base64key.txt contain <CLIENT-CERT> and <CLIENT-KEY>"
Recommended Solution:
On the master node where kubeadm was installed, the file /etc/kubernetes/admin.conf is the kubeconfig file. Copy that to ~/.kube/config.
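A minimal sketch of that copy, run on the control-plane node (this matches the standard kubeadm post-install steps):
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
# Make the copy readable by the non-root user
sudo chown $(id -u):$(id -g) $HOME/.kube/config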
Or, a new config could be generated, as follows:
kubeadm alpha kubeconfig user --client-name=admin2 --org=system:masters > admin2.conf
cp admin2.conf ~/.kube/config
kubespray
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: <CA-CERT>
    server: https://<API-SERVER>:6443
  name: <CLUSTERNAME>
contexts:
- context:
    cluster: <CLUSTERNAME>
    user: <USERNAME>
  name: <CONTEXTNAME>
current-context: <CONTEXTNAME>
kind: Config
preferences: {}
users:
- name: <USERNAME>
  user:
    client-certificate-data: <CLIENT-CERT>
    client-key-data: <CLIENT-KEY>
Manual Procedure:
Copy the above text to ~/.kube/config and substitute the variables. The source files can be found on any master node of the cluster.
<CA-CERT>
Run this script.
#!/bin/bash
CAPATH=/etc/kubernetes/ssl/ca.crt
base64 -w 0 $CAPATH > ca.crt.base64
# The file ca.crt.base64 contains the CA-CERT
<CLUSTERNAME> arbitrary
<USERNAME> arbitrary
<CONTEXTNAME> arbitrary
<API-SERVER> IP address of master node
<CLIENT-CERT> and <CLIENT-KEY>
Run this script.
#!/bin/bash
# Set these answers during the cert creation:
#   Organization Name (eg, company) [Internet Widgits Pty Ltd]: system:masters
#   Common Name (e.g. server FQDN or YOUR name) []: admin
KUBECERTDIR=/etc/kubernetes/ssl
CLIENTDIR=client1
mkdir -p $KUBECERTDIR/$CLIENTDIR
cd $KUBECERTDIR/$CLIENTDIR
openssl req -out CSR.csr -new -newkey rsa:2048 -nodes -keyout privateKey.key
cd ..
openssl x509 -req -days 3600 -in $CLIENTDIR/CSR.csr -CA ca.crt -CAkey ca.key -CAcreateserial -out $CLIENTDIR/client.crt -sha256
cd $CLIENTDIR
base64 -w 0 client.crt > base64client.txt
base64 -w 0 privateKey.key > base64key.txt
echo "The files $KUBECERTDIR/$CLIENTDIR/base64client.txt and $KUBECERTDIR/$CLIENTDIR/base64key.txt contain <CLIENT-CERT> and <CLIENT-KEY>"
Recommended Solution:
– kubespray creates the kubeconfig file on every master node, at /root/.kube/config
– if the kubeconfig is missing, kubespray recreates it; re-run kubespray. (A sketch for copying the file down to a workstation follows below.)
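If re-running kubespray is overkill, the existing file can simply be copied down from a master node. A rough sketch, assuming root SSH access (the hostname is a placeholder); the server field may need to be re-pointed if it references a local address:
# Copy the kubeconfig from a master node to the workstation
scp root@master1:/root/.kube/config ~/.kube/config
# If the server field points at 127.0.0.1, aim it at a reachable master
kubectl config set-cluster <CLUSTERNAME> --server=https://<API-SERVER>:6443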
kube-aws
apiVersion: v1
kind: Config
clusters:
- cluster:
    certificate-authority: <CA-CERT>
    server: <APISERVER>
  name: <CLUSTERNAME>
contexts:
- context:
    cluster: <CLUSTERNAME>
    namespace: default
    user: <USERNAME>
  name: <CLUSTERNAME>
users:
- name: <USERNAME>
  user:
    client-certificate: <CLIENTCERT>
    client-key: <CLIENTKEY>
current-context: <CLUSTERNAME>
Manual Procedure:
Copy the above text to ~/.kube/config and substitute the variables.
<CA-CERT> The text “credentials/ca.pem” (not the contents of that file)
<APISERVER> The api server that you have configured such as “https://api.example.com”. Check route53 DNS for CNAME pointing to ELB, or even use the ELB directly.
<CLUSTERNAME> arbitrary
<USERNAME> arbitrary
<CLIENTCERT> The text “credentials/admin.pem” (not the contents of that file)
<CLIENTKEY> The text “credentials/admin-key.pem” (not the contents of that file)
When generating a cluster with kube-aws, the credentials <CA-CERT>, <CLIENTCERT> and <CLIENTKEY> are all present locally on the workstation that originally ran the “kube-aws render” command, in a “credentials” directory. If this isn’t available, the control-plane-kube-aws-controller server has a copy of the certs in the /etc/kubernetes/ssl directory.
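A rough sketch for recovering the PEM files from the controller node when the local credentials directory is gone. The SSH user and the exact filenames under /etc/kubernetes/ssl are assumptions; list the directory first and adjust:
mkdir -p credentials
# Inspect what the controller actually has before copying
ssh core@<CONTROLLER-IP> 'ls /etc/kubernetes/ssl'
# Pull the CA and admin client cert/key into the local credentials/ directory
scp core@<CONTROLLER-IP>:/etc/kubernetes/ssl/ca.pem credentials/
scp core@<CONTROLLER-IP>:/etc/kubernetes/ssl/admin.pem credentials/
scp core@<CONTROLLER-IP>:/etc/kubernetes/ssl/admin-key.pem credentials/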
Recommended Solution:
If you have access to the original directory where the cluster was created, the command "kube-aws render stack" will regenerate the kubeconfig if that file is missing. Copy it (along with the credentials directory) to your home directory for easier use:
cp kubeconfig ~/.kube/config
cp -rp credentials ~/.kube/
openshift
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: <CA-CERT>
    server: https://<CLUSTERURL>:6443
  name: <CLUSTERNAME>
contexts:
- context:
    cluster: <CLUSTERNAME>
    user: <USERNAME>
  name: <USERNAME>
current-context: <USERNAME>
kind: Config
preferences: {}
users:
- name: <USERNAME>
  user:
    client-certificate-data: <CLIENT-CERT>
    client-key-data: <CLIENT-KEY>
Manual Procedure:
Copy the above text to ~/.kube/config and substitute the variables:
<CA-CERT>
On the master server, search for the file “ca-bundle.crt” in /etc/kubernetes. For example /etc/kubernetes/static-pod-resources/kube-controller-manager-pod-4/configmaps/serviceaccount-ca/ca-bundle.crt
It should be base64 encoded:
#!/bin/bash
CAPATH=/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-4/configmaps/serviceaccount-ca/ca-bundle.crt
base64 -w 0 $CAPATH > ca.crt.base64
# The file ca.crt.base64 now contains the CA-CERT
<CLUSTERURL> in the format api.<domainname>, such as “api.cluster1.example.com”, found in Route53 where a new zone would have been created.
<CLUSTERNAME> arbitrary. Might be “cluster5”, etc.
<USERNAME> arbitrary. Could be “admin”.
<CLIENT-CERT> and <CLIENT-KEY>
In the directory where ./openshift_install ran, there is a file .openshift_install_state.json. Search in this file for admin-kubeconfig-signer.crt and admin-kubeconfig-signer.key. Copy them out to their own files. Base64 decode them. Place the decoded files into the tls subdirectory, as tls/admin-kubeconfig-signer.crt and tls/admin-kubeconfig-signer.key. Then run this script.
CACERT=admin-kubeconfig-signer.crt
CAKEY=admin-kubeconfig-signer.key
KUBECERTDIR=/home/user/go/src/github.com/openshift/installer-4.2/cluster1/tls
CLIENTDIR=client1
mkdir -p $KUBECERTDIR/$CLIENTDIR
cd $KUBECERTDIR/$CLIENTDIR
openssl req -out CSR.csr -new -newkey rsa:2048 -nodes -keyout privateKey.key
cd ..
openssl x509 -req -days 3600 -in client1/CSR.csr -CA $CACERT -CAkey $CAKEY -CAcreateserial -out client1/client.crt -sha256
cd client1
base64 -w 0 client.crt > base64client.txt
base64 -w 0 privateKey.key > base64key.txt
echo "The files $KUBECERTDIR/$CLIENTDIR/base64client.txt and $KUBECERTDIR/$CLIENTDIR/base64key.txt contain <CLIENT-CERT> and <CLIENT-KEY>"
Change KUBECERTDIR in the above script to be the location where the installation took place.
Alternative kubeconfig file (token-based):
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: <CA-CERT>
    server: https://<CLUSTERURL>:6443
  name: <CLUSTERNAME>
contexts:
- context:
    cluster: <CLUSTERNAME>
    namespace: default
    user: kube:admin/<CLUSTERNAME>
  name: default/<CLUSTERNAME>/<USERNAME>
current-context: default/<CLUSTERNAME>/<USERNAME>
kind: Config
preferences: {}
users:
- name: <USERNAME>/<CLUSTERNAME>
  user:
    token: <TOKEN>
Manual Procedure:
Copy the above text to ~/.kube/config and substitute the variables:
<CA-CERT>
On the master server, search for the file “ca-bundle.crt” in /etc/kubernetes. For example /etc/kubernetes/static-pod-resources/kube-controller-manager-pod-4/configmaps/serviceaccount-ca/ca-bundle.crt . It should be base64 encoded:
#!/bin/bash
CAPATH=/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-4/configmaps/serviceaccount-ca/ca-bundle.crt
base64 -w 0 $CAPATH > ca.crt.base64
# The file ca.crt.base64 now contains the CA-CERT
<CLUSTERURL> in the format api.<domainname>, such as “api.cluster1.example.com”, found in Route53 where a new zone would have been created.
<CLUSTERNAME> arbitrary. Could be “api-cluster1-example-com:6443”
<USERNAME> arbitrary. Could be “kube:admin”.
<TOKEN> If you already have access to oc or kubectl, then "kubectl get OAuthAccessToken" will show the value as the "name" of the token referring to "kube:admin" and "openshift-browser-client". Or, dump the etcd database: "etcdctl get / --prefix".
Recommended Solution:
Log into the OpenShift Console. In the upper right corner, click kube:admin -> Copy Login Command, then click "Display Token". Instructions will appear, such as:
oc login --token=vvo2EevU2UdKk-osVyZpuYAgGQPhPO69Yw4ICboXYZA --server=https://api.example.logchart.com:6443
Run that command, and it will generate the kubeconfig file in ~/.kube/config.
If errors appear about a missing CA certificate, grab a copy of the CA cert as described above and place it in the kubeconfig file; a sketch follows below.
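A small sketch for embedding that CA into the kubeconfig generated by oc login, assuming the ca-bundle.crt copied from the master as described earlier (the cluster entry name must match the one oc login created):
# Embed the CA into the existing cluster entry instead of editing the file by hand
kubectl config set-cluster <CLUSTERNAME> \
  --certificate-authority=ca-bundle.crt --embed-certs=true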
rancher
apiVersion: v1
kind: Config
clusters:
- name: "<CLUSTERNAME>"
  cluster:
    server: "https://<RANCHER>/k8s/clusters/<CLUSTERID>"
    certificate-authority-data: <CA-CERT-RANCHER>
- name: "<CLUSTERNAME>-<NODEPOOLNAME>"
  cluster:
    server: "https://<CLUSTERIP>:6443"
    certificate-authority-data: <CA-CERT-NODE>
users:
- name: "<USERNAME>"
  user:
    token: "<TOKEN>"
contexts:
- name: "<CLUSTERNAME>"
  context:
    user: "<USERNAME>"
    cluster: "<CLUSTERNAME>"
- name: "<CLUSTERNAME>-<NODEPOOLNAME>"
  context:
    user: "<USERNAME>"
    cluster: "<CLUSTERNAME>-<NODEPOOLNAME>"
current-context: "<CLUSTERNAME>"
Manual Procedure:
Copy the above text to ~/.kube/config and substitute the variables.
<CLUSTERNAME> This is the human readable text name of the cluster, like “cluster1”. Arbitrary value.
<RANCHER> the domain name where rancher was installed, such as “rancher2.example.com”
<CLUSTERID> In the rancher web console, the value can be found in the URL string, such as "c-zh2lh". If you connect directly to the rancher container and run "kubectl get ns", the cluster ID also appears as a namespace.
<CLUSTERIP> The public IP address of a control node in the member cluster.
<CA-CERT-RANCHER> On an installed node in a cluster, this can be found in /etc/kubernetes/ssl/certs/serverca. It should be base64-encoded for the kubeconfig:
base64 -w 0 /etc/kubernetes/ssl/certs/serverca
On the rancher server itself, this can be found with “kubectl get settings cacerts -o yaml”
<CA-CERT-NODE> On an installed node in a cluster, this can be found in /etc/kubernetes/ssl/kube-ca.pem. It should be base64-encoded for the kubeconfig:
base64 -w 0 /etc/kubernetes/ssl/kube-ca.pem
<USERNAME> Arbitrary. The cluster name is generally used.
<NODEPOOLNAME> The node pools can be found in Rancher->clusters->nodes. In the kubeconfig this may be set to anything.
<TOKEN> This can be found with the following steps. First, inside the rancher container, run "kubectl get token" to see all tokens. Then run "kubectl get token kubeconfig-user-<USERID>.<CLUSTERID> -o yaml" to get the specific token info. Finally, the <TOKEN> needed in the kubeconfig is formatted as "<metadata.name>:<token>", where <metadata.name> and <token> come from that kubectl query. (A sketch follows below.)
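A short sketch of those token steps, run inside the rancher server container (the jsonpath field name assumes the Token object layout shown by the -o yaml query above; the user and cluster IDs are placeholders):
# List available tokens, then pull the secret for the kubeconfig user token
kubectl get token
kubectl get token kubeconfig-user-<USERID>.<CLUSTERID> -o jsonpath='{.token}'
# The value used in the kubeconfig is "<metadata.name>:<token>"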
Recommended Solution:
In the Rancher web console, go to the cluster and click "Kubeconfig File" (a button on the main Dashboard page). This allows you to download the kubeconfig file.