Delete Kubernetes from an Ubuntu Server

Important Considerations Before You Start:

  • Data Loss: This process will likely delete all Kubernetes resources (pods, deployments, services, etc.) and the associated container images. Back up any critical data before proceeding (see the Quick Backup Sketch at the end of this list). This is especially important if you have persistent volumes.
  • Order of Operations: If you have a multi-node Kubernetes cluster, it’s generally best to drain and remove nodes before uninstalling kubeadm and associated components from the control plane node(s).
  • Networking: Removing Kubernetes can affect your server’s networking configuration. Be prepared to troubleshoot network issues that might arise after the removal.
  • Automation: Consider automating these steps with a script if you’re managing multiple servers.
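  • Quick Backup Sketch: If you want a record of what the cluster contains before tearing it down, a minimal sketch looks like this (run on a control plane node; the output paths are just examples):

    kubectl get all --all-namespaces -o yaml > ~/k8s-resources-backup.yaml   # dump workload definitions
    kubectl get pv,pvc --all-namespaces -o yaml > ~/k8s-volumes-backup.yaml  # record volume objects
    # Note: this captures object definitions only; data stored inside persistent
    # volumes must be copied separately from the underlying storage.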

General Steps (Adapt Based on Your Installation Method):

  1. Drain and Remove Nodes (If Applicable – Multi-Node Cluster):

    • Drain the Node: This safely evicts pods from the node. Replace <node_name> with the actual name of the node. Run this command on a control plane node.

      kubectl drain <node_name> --delete-emptydir-data --force --ignore-daemonsets
      
      • --delete-emptydir-data: Deletes pods that use emptyDir (node-local) volumes; this flag replaced the deprecated --delete-local-data in newer kubectl releases. WARNING: This will cause data loss if you haven't backed up data stored in those volumes.
      • --force: Also evicts pods that aren't managed by a controller (bare pods), which would otherwise block the drain.
      • --ignore-daemonsets: Skips DaemonSet-managed pods, which run on every node and cannot be evicted. They are typically system services and are safe to skip here.
    • Uncordon the Node (only if you keep it): Draining also cordons the node. Run uncordon only if you change your mind and want the node schedulable again; skip this step if you're removing the node.

      kubectl uncordon <node_name>
      
    • Remove the Node: Delete the node from the Kubernetes cluster. Run on a control plane node.

      kubectl delete node <node_name>
      
    • Repeat these steps for each worker node in your cluster.
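    • Scripted teardown (optional): If you have many worker nodes, a rough loop like the one below can drain and delete them in one pass. It assumes your control plane nodes carry the standard node-role.kubernetes.io/control-plane label; verify that before relying on the selector.

      # Drain and delete every node that is not labeled as a control plane node.
      for node in $(kubectl get nodes --selector='!node-role.kubernetes.io/control-plane' \
          -o jsonpath='{.items[*].metadata.name}'); do
        kubectl drain "$node" --delete-emptydir-data --force --ignore-daemonsets
        kubectl delete node "$node"
      done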

  2. Resetting Kubeadm (On All Nodes):

    • Use kubeadm reset to clean up Kubernetes components installed by kubeadm. Run this on every node in the cluster (including the control plane nodes).

      sudo kubeadm reset
      
      • The --force flag skips the interactive confirmation prompt, which is handy when scripting the reset:

        sudo kubeadm reset --force
        
      • This will stop Kubernetes processes, remove kubelet state, and revert changes made by kubeadm. Note that kubeadm reset does not clean up CNI configuration, iptables/IPVS rules, or $HOME/.kube; those are handled in the steps below.

  3. Removing Kubernetes Packages (On All Nodes):

    • Purge the Kubernetes packages with apt so their configuration files are removed along with the binaries:

      sudo apt-get purge kubeadm kubelet kubectl kubernetes-cni -y
      sudo apt-get autoremove -y
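      • If the purge fails because you pinned package versions with apt-mark hold at install time (a common step in kubeadm guides), release the hold and retry; then verify nothing remains:

        sudo apt-mark unhold kubeadm kubelet kubectl 2>/dev/null || true  # only needed if you pinned versions
        dpkg -l | grep -E 'kube|kubernetes-cni'                           # should print nothing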
      
  4. Cleaning up Network Configuration (On All Nodes):

    • Remove network interfaces created by Kubernetes (usually cni0 and flannel.1, but it depends on your CNI).

      sudo ip link del cni0
      sudo ip link del flannel.1
      
      • You might need to adapt these commands if you’re using a different CNI plugin (e.g., Calico, Weave Net). Consult the CNI plugin’s documentation for removal instructions.
      • Check for other Kubernetes-related network interfaces using ip addr and delete them accordingly (see the discovery sketch after this step).
    • Remove the /etc/cni/net.d directory. This directory contains CNI configuration files.

      sudo rm -rf /etc/cni/net.d
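    • As a rough discovery aid (interface name patterns vary by CNI plugin, so treat the pattern below as an example to adapt), list likely leftovers before deleting anything:

      # Typical CNI-related interface names: cni0, flannel.1, cali*, vxlan.calico, weave, kube-ipvs0
      ip -o link show | awk -F': ' '{print $2}' | grep -E 'cni|flannel|cali|weave|kube' || echo "no CNI interfaces found"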
      
  5. Cleaning up Docker/Containerd (On All Nodes):

    • Remove containers and images (if you're using Docker as the container runtime). WARNING: the commands below stop and remove ALL containers and images on the host, not just Kubernetes-related ones; adapt them if the host also runs non-Kubernetes Docker workloads.

      sudo docker stop $(sudo docker ps -aq) # Stop all containers
      sudo docker rm $(sudo docker ps -aq)   # Remove all containers
      sudo docker rmi $(sudo docker images -q)  # Remove all images
      
    • Restart Docker: (If you’re using Docker as the container runtime)

      sudo systemctl restart docker
      
    • Containerd (if using containerd as the runtime): Note that ctr talks to the containerd daemon, so run any ctr commands before stopping the service. Also be aware that wiping /var/lib/containerd removes ALL containerd state (images, containers, snapshots), not just the Kubernetes k8s.io namespace:

      sudo ctr namespace rm k8s.io      # optional; only succeeds once the namespace is empty
      sudo systemctl stop containerd
      sudo rm -rf /var/lib/containerd   # WARNING: removes all containerd data
      sudo systemctl start containerd
      
  6. Removing Kubelet Configuration Files (On All Nodes):

    sudo rm -rf /etc/kubernetes/
    sudo rm -rf /var/lib/kubelet/
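    • On control plane nodes, a few more directories may linger depending on your setup (kubeadm reset usually clears the etcd data directory, but it's worth checking; verify each path exists and is Kubernetes-related before deleting):

      sudo rm -rf /var/lib/etcd   # local etcd data on kubeadm control planes
      sudo rm -rf /var/lib/cni    # CNI runtime state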
    
  7. Cleaning up iptables Rules (On All Nodes):

    • Kubernetes modifies iptables rules for networking. Flushing iptables will remove these rules. WARNING: This can affect other services running on the server. Only do this if you’re confident it won’t break anything else.

      sudo iptables -F
      sudo iptables -t nat -F
      sudo iptables -t mangle -F
      sudo iptables -X
      
    • Consider saving your existing iptables rules before flushing them so you can restore them if necessary.
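      • For example (the backup path is arbitrary; pick any location outside the directories being deleted):

        sudo iptables-save > /root/iptables-backup-$(date +%F).rules   # save the current rules
        # Restore later with: sudo iptables-restore < /root/iptables-backup-<date>.rules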

    • If you use firewalld (or ufw), review its configuration after removing Kubernetes; Kubernetes components often add or depend on specific firewall rules.

  8. Removing the Kubernetes Configuration Directory (On All Nodes):

    • Run this as every user that has a kubeconfig. Don't prefix it with sudo, or $HOME will resolve to root's home directory:

      rm -rf $HOME/.kube
    
  9. Reboot (Optional but Recommended):

    • Rebooting the server can help ensure that all Kubernetes processes are stopped and that any lingering configurations are cleared.

      sudo reboot
      

Specific Considerations for Different Kubernetes Distributions:

  • Minikube:

    • Use the minikube delete command:

      minikube delete
      
    • This will remove the Minikube VM and associated files. You may also want to remove the ~/.minikube directory.
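    • Recent Minikube releases can do both in one step (verify the flags with minikube delete --help for your version):

      minikube delete --all --purge   # deletes all profiles and the ~/.minikube directory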

  • K3s:

    • Run the K3s uninstall script:

      sudo /usr/local/bin/k3s-uninstall.sh
      
    • The uninstall script should already remove the K3s binaries and data, but you can clean up any leftovers manually:

      sudo rm -rf /usr/local/bin/k3s /var/lib/rancher/k3s /etc/systemd/system/k3s.service /etc/systemd/system/k3s.service.env
      sudo systemctl daemon-reload
      sudo systemctl reset-failed
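    • On agent (worker) nodes, K3s installs a separate uninstall script:

      sudo /usr/local/bin/k3s-agent-uninstall.sh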
      
  • MicroK8s:

    • Use the microk8s reset and microk8s stop commands (reset needs the cluster running, so run it before stopping):

      sudo microk8s reset
      sudo microk8s stop
      
    • Then remove the microk8s snap package (the --purge flag also discards the data snapshot that snap would otherwise keep):

      sudo snap remove microk8s --purge
      
  • Rancher Kubernetes Engine (RKE):

    • RKE typically manages Kubernetes clusters on separate VMs or bare metal servers. The removal process depends on how you provisioned the infrastructure. You’ll need to deprovision the VMs/servers using your cloud provider or bare-metal management tools.
    • Remove the cluster.yml file that you used to create the cluster.
    • Also delete the generated artifacts: the kubeconfig (typically kube_config_cluster.yml) and the cluster state file (cluster.rkestate).
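    • RKE also provides a built-in teardown command that removes the Kubernetes components from the nodes listed in cluster.yml (run it from the directory containing that file, and check rke remove --help for your version):

      rke remove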

Verifying Removal:

  • Check if Kubernetes processes are still running:

    ps aux | grep kube
    ps aux | grep etcd
    
  • Verify that the Kubernetes binaries (kubeadm, kubelet, kubectl) are no longer in your PATH:

    which kubeadm
    which kubelet
    which kubectl
    
  • Check for Kubernetes-related Docker containers or images (if Docker is still installed). For example:
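    # Both lists should come back empty (or at least contain nothing Kubernetes-related):
    sudo docker ps -a
    sudo docker images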

  • Inspect your server’s network configuration (ip addr, iptables -L) to ensure that Kubernetes-related networking artifacts have been removed.
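  • Check for leftover systemd units. Unit names vary by distribution and install method, so treat the pattern below as a starting point:

    systemctl list-unit-files | grep -Ei 'kube|k3s' || echo "no matching units"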

Important Notes:

  • CNI Plugin Removal: The steps for removing the CNI plugin (Calico, Weave Net, Flannel, etc.) can vary. Consult the documentation for your specific CNI plugin for detailed removal instructions.
  • etcd: If you’re running a single-node Kubernetes cluster or using a local etcd instance, kubeadm reset should remove the etcd data directory. However, if you have a separate etcd cluster, you’ll need to manage its removal separately.
  • Root Privileges: Most of these commands require sudo because they modify system-level configurations.
  • Error Handling: Pay close attention to any errors that occur during the removal process. Search for solutions online or consult the Kubernetes documentation for troubleshooting tips.
  • Customizations: If you’ve made any custom configurations to your Kubernetes installation (e.g., modified systemd units, added custom scripts), be sure to remove those as well.

By following these steps and adapting them to your specific Kubernetes setup, you should be able to completely remove Kubernetes from your Ubuntu server. Remember to back up any important data before you start!
