Kubernetes cluster troubleshooting: renewing expired certificates

I ran into an issue with my Kubernetes cluster recently: kubectl commands failed with a TLS certificate verification error, which meant the cluster certificates had expired and needed to be renewed.

Issue symptom

Every kubectl command against the cluster failed with the following error:

Unable to connect to the server: tls: failed to verify certificate: x509: certificate has expired or is not yet valid
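To confirm that an expired certificate really is the cause (rather than, say, clock skew), you can inspect the API server's serving certificate directly on the control-plane node. The path below is the default kubeadm location:

```shell
# Show the validity window of the API server serving certificate;
# an expired cert has a notAfter timestamp in the past
sudo openssl x509 -in /etc/kubernetes/pki/apiserver.crt -noout -dates
```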

Solution

To renew the certificates manually, start by running the command below

sudo kubeadm certs renew

This lists the available subcommands:

Available Commands:
  admin.conf               Renew the certificate embedded in the kubeconfig file for the admin to use and for kubeadm itself
  all                      Renew all available certificates
  apiserver                Renew the certificate for serving the Kubernetes API
  apiserver-etcd-client    Renew the certificate the apiserver uses to access etcd
  apiserver-kubelet-client Renew the certificate for the API server to connect to kubelet
  controller-manager.conf  Renew the certificate embedded in the kubeconfig file for the controller manager to use
  etcd-healthcheck-client  Renew the certificate for liveness probes to healthcheck etcd
  etcd-peer                Renew the certificate for etcd nodes to communicate with each other
  etcd-server              Renew the certificate for serving etcd
  front-proxy-client       Renew the certificate for the front proxy client
  scheduler.conf           Renew the certificate embedded in the kubeconfig file for the scheduler manager to use
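Before renewing anything, it can help to see which certificates have actually expired. kubeadm ships a check-expiration helper for this:

```shell
# List every kubeadm-managed certificate with its expiry date and
# remaining lifetime; expired entries are easy to spot here
sudo kubeadm certs check-expiration
```

Running the same command again after the renewal is a quick way to confirm the new expiry dates.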

I want to renew all available certificates, so I ran

sudo kubeadm certs renew all

Example output

[renew] Reading configuration from the cluster...
[renew] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[renew] Error reading configuration from the Cluster. Falling back to default configuration

certificate embedded in the kubeconfig file for the admin to use and for kubeadm itself renewed
certificate for serving the Kubernetes API renewed
certificate the apiserver uses to access etcd renewed
certificate for the API server to connect to kubelet renewed
certificate embedded in the kubeconfig file for the controller manager to use renewed
certificate for liveness probes to healthcheck etcd renewed
certificate for etcd nodes to communicate with each other renewed
certificate for serving etcd renewed
certificate for the front proxy client renewed
certificate embedded in the kubeconfig file for the scheduler manager to use renewed

Done renewing certificates. You must restart the kube-apiserver, kube-controller-manager, kube-scheduler and etcd, so that they can use the new certificates.

The output above says to restart several control-plane components. On a kubeadm cluster these run as static pods managed by the kubelet, so I restarted the kubelet on the master node:

sudo systemctl restart kubelet
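kubectl itself may not be working at this point, so to confirm that the control-plane containers actually restarted, you can ask the container runtime directly. This sketch assumes a containerd-based node with crictl installed:

```shell
# Show the running control-plane containers and their ages;
# recently created containers indicate the restart took effect
sudo crictl ps | grep -E 'kube-apiserver|kube-controller-manager|kube-scheduler|etcd'
```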

When I then tried to run `kubectl get pods`, I got the errors below

E0608 17:19:43.165987 1596311 memcache.go:265] couldn't get current server API group list: the server has asked for the client to provide credentials
E0608 17:19:43.167567 1596311 memcache.go:265] couldn't get current server API group list: the server has asked for the client to provide credentials
E0608 17:19:43.168820 1596311 memcache.go:265] couldn't get current server API group list: the server has asked for the client to provide credentials
E0608 17:19:43.170025 1596311 memcache.go:265] couldn't get current server API group list: the server has asked for the client to provide credentials
E0608 17:19:43.171261 1596311 memcache.go:265] couldn't get current server API group list: the server has asked for the client to provide credentials
error: You must be logged in to the server (the server has asked for the client to provide credentials)

The official Kubernetes documentation has the following note:

Clusters built with kubeadm often copy the admin.conf certificate into $HOME/.kube/config, as instructed in Creating a cluster with kubeadm. On such a system, to update the contents of $HOME/.kube/config after renewing the admin.conf, you must run the following commands:

https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-certs/#manual-certificate-renewal

So I ran the following commands

sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
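To double-check that the copied kubeconfig now carries a fresh client certificate, you can decode its base64-encoded client-certificate-data field and read the validity window with openssl. The helper below is a small sketch (the function name is mine, not a standard tool):

```shell
# Print the validity window of the client certificate embedded in a
# kubeconfig file (the base64-encoded client-certificate-data field)
kubeconfig_cert_dates() {
  grep 'client-certificate-data' "$1" | awk '{print $2}' \
    | base64 -d | openssl x509 -noout -dates
}
```

For example, `kubeconfig_cert_dates $HOME/.kube/config` should now print a notAfter date well in the future (typically about a year out for kubeadm-issued certificates).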

Then I ran `kubectl get pods` again and, voilà, my pods were visible again

NAME                                      READY   STATUS    RESTARTS   AGE
go-platform-5667f475c5-xdxz4              1/1     Running   0          3d20h

