Unlocking High Availability: Exploring the Power of Kube-VIP in Kubernetes

Introduction

Kube-VIP, or Kubernetes Virtual IP, is a project that provides high availability for the Kubernetes control plane and for Kubernetes services. It does so by managing a virtual IP (VIP) that floats between nodes in a Kubernetes cluster: if the node currently holding the VIP goes down, the address is quickly reassigned to another healthy node, so access to whatever sits behind it continues uninterrupted.
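On the node that currently owns the VIP, the address simply shows up as an additional IP on an ordinary network interface. As a purely illustrative check (using the VIP and interface that are configured later in this post), you would expect to see it listed alongside the node's primary address:

root@devmaster1:~# ip addr show vboxnet0 | grep -w inet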

Here's a high-level overview of the architecture of Kube-VIP:

Components:

VIP Manager: This component is responsible for managing the virtual IP address. It monitors the health of nodes and services in the cluster and reassigns the virtual IP as necessary.

Health Check Monitor: Monitors the health of nodes and services. It checks the health status of nodes and services using various mechanisms like HTTP probes, TCP probes, etc.

Leader Election: To ensure that only one instance of the VIP manager is active at any given time, Kube-VIP utilizes leader election mechanisms provided by Kubernetes or an external consensus mechanism like etcd.

IPVS or IPTables: Kube-VIP can utilize either IPVS (IP Virtual Server) or IPTables to manage network traffic and route it to the appropriate backend pods.
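Whichever mechanism is in play on your nodes, you can inspect it with standard tools. The commands below are only illustrative (they assume ipvsadm is installed and use the VIP configured later in this post); an empty result simply means that mode is not in use:

root@devmaster1:~# ipvsadm -Ln
root@devmaster1:~# iptables-save -t nat | grep 172.17.17.110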

Operation:

Initialization: During initialization, Kube-VIP is deployed on each node in the Kubernetes cluster. It configures the network interfaces to manage the virtual IP.

Monitoring: The health check monitor continuously monitors the health of nodes and services by periodically sending probes.

Failover: If a node or service becomes unhealthy, the VIP manager initiates failover by reassigning the virtual IP to a healthy node. This is done by updating the ARP (Address Resolution Protocol) tables or IPTables rules to redirect traffic to the new node.

Recovery: Once the failed node or service is restored to a healthy state, the VIP manager can revert the changes and return the virtual IP to its original state.
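In ARP mode this hand-off is visible from any other machine on the same layer-2 network. As a rough illustration (using the VIP and interface configured later in this post, and assuming the arping utility from iputils is installed), the MAC address that answers for the VIP should change after a failover:

root@devworker1:~# arping -I vboxnet0 -c 3 172.17.17.110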

Integration with Kubernetes: Kube-VIP integrates seamlessly with Kubernetes using custom resources and controllers. It leverages Kubernetes primitives to manage the virtual IP address and ensure high availability of services.

Customization: Kube-VIP allows for customization of various parameters such as health check intervals, failure detection thresholds, and failover strategies to suit the specific requirements of the cluster.

In this blog, I am going to discuss how we can use kube-vip to configure Kubernetes in HA mode with 2 master nodes and 3 worker nodes.

My infrastructure: 2 master nodes (devmaster1, devmaster2) and 3 worker nodes (devworker1, devworker2, devworker3).

Log in to one control-plane node and set the VIP and interface:

root@devmaster1:~# export VIP=172.17.17.110
root@devmaster1:~# export INTERFACE=vboxnet0   # the interface where your node IP is assigned
root@devmaster1:~# KVVERSION=$(curl -sL https://api.github.com/repos/kube-vip/kube-vip/releases | jq -r ".[0].name")
root@devmaster1:~# alias kube-vip="ctr image pull ghcr.io/kube-vip/kube-vip:$KVVERSION; ctr run --rm --net-host ghcr.io/kube-vip/kube-vip:$KVVERSION vip /kube-vip"
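Before generating the manifest, it is worth checking that the version lookup actually worked (it needs curl and jq installed); $KVVERSION should contain a release tag, not an empty string:

root@devmaster1:~# echo $KVVERSION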

Now run the command below to generate the kube-vip.yaml static pod manifest:

root@devmaster1:~# kube-vip manifest pod \
--interface $INTERFACE \
--address $VIP \
--controlplane \
--services \
--arp \
--leaderElection | tee /etc/kubernetes/manifests/kube-vip.yaml
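The alias pulls the kube-vip image with ctr and runs the binary once, purely to render the manifest. Before moving on, a quick sanity check that the file landed where the kubelet expects static pod manifests does not hurt:

root@devmaster1:~# ls -l /etc/kubernetes/manifests/kube-vip.yaml
root@devmaster1:~# cat /etc/kubernetes/manifests/kube-vip.yaml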

The kube-vip.yaml file is stored in /etc/kubernetes/manifests/, so when we run the kubeadm command, the kubelet will spin kube-vip up as a static pod.

Now let's run kubeadm init to install the cluster:

root@devmaster1:~# kubeadm init --control-plane-endpoint="172.17.17.110:6443" --upload-certs --apiserver-advertise-address=172.17.17.1 --pod-network-cidr=192.168.0.0/16 

I0213 20:22:14.645006 26476 version.go:256] remote version is much newer: v1.29.1; falling back to: stable-1.26
[init] Using Kubernetes version: v1.26.13
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [devmaster1.homecluster.store kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 172.17.17.1 172.17.17.110]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [devmaster1.homecluster.store localhost] and IPs [172.17.17.1 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [devmaster1.homecluster.store localhost] and IPs [172.17.17.1 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 6.565268 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Storing the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
[upload-certs] Using certificate key: 3de2839a68ee0b02f4d947ba7a8d361595847fb9b449b3a311742cda620c6723
[mark-control-plane] Marking the node devmaster1.homecluster.store as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node devmaster1.homecluster.store as control-plane by adding the taints [node-role.kubernetes.io/control-plane:NoSchedule]
[bootstrap-token] Using token: z9ci87.eb8vkv02assothnx
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of the control-plane node running the following command on each as root:
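At this point the VIP should already be live on devmaster1, carried by the kube-vip static pod. A quick check (the interface and address are the values exported earlier; the crictl command assumes containerd's CLI is configured on the node):

root@devmaster1:~# ip addr show vboxnet0 | grep 172.17.17.110
root@devmaster1:~# crictl ps | grep kube-vip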

Save the generated join commands; we will use them to join the second master and the worker nodes to the cluster.

Export the kubeconfig file and verify the devmaster1 cluster status.

root@devmaster1:~# export KUBECONFIG=/etc/kubernetes/admin.conf
root@devmaster1:~# kubectl get nodes
NAME                           STATUS     ROLES           AGE   VERSION
devmaster1.homecluster.store   Ready      control-plane   39m   v1.26.0

Now let's log in to the devmaster2 node and run the join command:

root@devmaster2:~# kubeadm join 172.17.17.110:6443 --token xxxxxxx \
    --discovery-token-ca-cert-hash sha256:xxxxxxxxxxxxxxx \
    --control-plane \
    --certificate-key xxxxxxxxxxxxxxxxxxxxxxx \
    --apiserver-advertise-address=172.17.17.100

[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[preflight] Running pre-flight checks before initializing the new control plane instance
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[download-certs] Downloading the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
[download-certs] Saving the certificates to the folder: "/etc/kubernetes/pki"
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [devmaster2 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 172.17.17.100 172.17.17.110]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [devmaster2 localhost] and IPs [172.17.17.100 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [devmaster2 localhost] and IPs [172.17.17.100 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[certs] Using the existing "sa" key
[kubeconfig] Generating kubeconfig files
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[check-etcd] Checking that the etcd cluster is healthy
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
[etcd] Announced new etcd member joining to the existing etcd cluster
[etcd] Creating static Pod manifest for "etcd"
[etcd] Waiting for the new etcd member to join the cluster. This can take up to 40s
The 'update-status' phase is deprecated and will be removed in a future release. Currently it performs no operation
[mark-control-plane] Marking the node devmaster2 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node devmaster2 as control-plane by adding the taints [node-role.kubernetes.io/control-plane:NoSchedule]

This node has joined the cluster and a new control plane instance was created:

* Certificate signing request was sent to apiserver and approval was received.
* The Kubelet was informed of the new secure connection details.
* Control plane label and taint were applied to the new node.
* The Kubernetes control plane instances scaled up.
* A new etcd member was added to the local/stacked etcd cluster.

To start administering your cluster from this node, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Run 'kubectl get nodes' to see this node join the cluster.
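Before adding the workers, it is worth confirming that devmaster2 can reach the control plane through the VIP rather than only through devmaster1's node IP. With anonymous access to /version left at its default, a plain curl against the endpoint we gave kubeadm should return the version JSON:

root@devmaster2:~# curl -k https://172.17.17.110:6443/version
root@devmaster2:~# kubectl --kubeconfig /etc/kubernetes/admin.conf get nodes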

Log back in to devmaster1 and run the kubectl get nodes command:

root@devmaster1:~# kubectl get nodes
NAME                           STATUS     ROLES           AGE   VERSION
devmaster1.homecluster.store   Ready      control-plane   48m   v1.26.0
devmaster2.homecluster.store   NotReady   control-plane   41m   v1.26.0

Let's install the Calico CNI to bring the devmaster2 node to the Ready state.

root@devmaster1:~# kubectl --kubeconfig=/etc/kubernetes/admin.conf create -f https://docs.projectcalico.org/v3.15/manifests/calico.yaml

configmap/calico-config created
customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/kubecontrollersconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created
clusterrole.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrole.rbac.authorization.k8s.io/calico-node created
clusterrolebinding.rbac.authorization.k8s.io/calico-node created
daemonset.apps/calico-node created
serviceaccount/calico-node created
deployment.apps/calico-kube-controllers created
serviceaccount/calico-kube-controllers created
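The nodes flip to Ready once the Calico pods are up, which can take a minute or two. You can watch them with a simple filter (pod names and labels vary between Calico versions, so a plain grep is the least fragile check):

root@devmaster1:~# kubectl get pods -n kube-system -o wide | grep calico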

Verify the Status

root@devmaster1:~# kubectl get nodes
NAME                           STATUS     ROLES           AGE   VERSION
devmaster1.homecluster.store   Ready      control-plane   48m   v1.26.0
devmaster2.homecluster.store   Ready      control-plane   41m   v1.26.0

Log in to all 3 worker nodes and run the join command:

root@devworker1:~# kubeadm join 172.17.17.110:6443 --token xxxxxxxxxxxxxx \
 --discovery-token-ca-cert-hash sha256:xxxxxxxxxxxxxxxxxxxxxxx

[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

Log in to devmaster1.homecluster.store and verify:

root@devmaster1:~# kubectl get nodes
NAME                           STATUS   ROLES           AGE     VERSION
devmaster1.homecluster.store   Ready    control-plane   117m    v1.26.0
devmaster2.homecluster.store   Ready    control-plane   110m    v1.26.0
devworker1.homecluster.store   Ready    <none>          5m10s   v1.26.0
devworker2.homecluster.store   Ready    <none>          3m      v1.26.0
devworker3.homecluster.store   Ready    <none>          1m40s   v1.26.0

We can now see that the cluster is up and running with 2 master nodes and 3 worker nodes, with the API server reachable over the VIP 172.17.17.110.
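As a final, optional sanity check you can force a failover and watch the VIP move. The sketch below assumes a kube-vip manifest has also been generated on devmaster2 (repeat the manifest steps from above on that node); moving the static pod manifest away on the current leader stops only kube-vip, so the API servers themselves stay up while the VIP fails over:

root@devmaster1:~# mv /etc/kubernetes/manifests/kube-vip.yaml /root/
root@devmaster1:~# ip addr show vboxnet0 | grep 172.17.17.110    # should return nothing after a few seconds
root@devmaster2:~# ip addr show vboxnet0 | grep 172.17.17.110    # the VIP should now appear here
root@devmaster1:~# mv /root/kube-vip.yaml /etc/kubernetes/manifests/    # restore when done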