Automating Kubernetes Installation with Ansible

In today's world of containerized applications, Kubernetes has emerged as the de facto standard for container orchestration. Deploying and managing Kubernetes clusters manually can be complex and time-consuming. Thankfully, automation tools like Ansible provide a powerful way to automate the installation and configuration of Kubernetes clusters.

In this blog post, we'll explore how to use Ansible to automate the installation of a Kubernetes cluster. We'll walk through the steps involved in setting up Ansible, creating playbooks, and deploying Kubernetes on a set of target servers.

SETUP

Ansible Node: ansible.mylabserver.com

Master Node: master.mylabserver.com

Worker Nodes: worker1.mylabserver.com, worker2.mylabserver.com

Prerequisites

1) Ansible must be installed on the Ansible (control) node, and Python must be available on the master and worker nodes.

2) Set up passwordless SSH authentication from the Ansible node to the master and worker nodes (for example with ssh-copy-id, as shown below).
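
A quick way to set this up from the Ansible node (a sketch; adjust the user and hostnames to your environment):

ssh-keygen -t rsa -b 4096                      # accept the defaults, or reuse an existing key
ssh-copy-id root@master.mylabserver.com
ssh-copy-id root@worker1.mylabserver.com
ssh-copy-id root@worker2.mylabserver.com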

Let's Create the Playbooks

Create ansible.cfg

root@06432e7f921c:~#mkdir k8s-install
root@06432e7f921c:~#cd k8s-install
root@06432e7f921c:~/k8s-install#cat ansible.cfg
[defaults]
inventory=./inventory
deprecation_warnings=False

Create Inventory File

root@06432e7f921c:~/k8s-install# cat inventory
[master]
master.mylabserver.com

[worker]
worker1.mylabserver.com
worker2.mylabserver.com

Verify the nodes are reachable from the Ansible node

root@06432e7f921c:~/k8s-install# ansible -i inventory -m ping all
worker1.mylabserver.com | SUCCESS => {
    "ansible_facts": {
        "discovered_interpreter_python": "/usr/bin/python"
    },
    "changed": false,
    "ping": "pong"
}
worker2.mylabserver.com | SUCCESS => {
    "ansible_facts": {
        "discovered_interpreter_python": "/usr/bin/python"
    },
    "changed": false,
    "ping": "pong"
}
master.mylabserver.com | SUCCESS => {
    "ansible_facts": {
        "discovered_interpreter_python": "/usr/bin/python"
    },
    "changed": false,
    "ping": "pong"
}

Create main.yaml file

root@06432e7f921c:~/k8s-install# cat main.yaml
---
- hosts: [master,worker]
  gather_facts: false
  roles:
     - role: k8s-config
       tags: common

- hosts: [master]
  gather_facts: false
  roles:
   - role: master-config
     tags: master

- hosts: [worker]
  gather_facts: false
  roles:
    - role: worker-config
      tags: worker

Create Role

root@06432e7f921c:~/k8s-install# ansible-galaxy init k8s-config
root@06432e7f921c:~/k8s-install# ansible-galaxy init master-config
root@06432e7f921c:~/k8s-install# ansible-galaxy init worker-config

Create Playbook for Common Installation

root@06432e7f921c:~/k8s-install# cat roles/k8s-config/tasks/main.yml
---
- name: Disable swap
  command: swapoff -a
  ignore_errors: yes

- name: Remove swap entry from /etc/fstab
  lineinfile:
    path: /etc/fstab
    state: absent
    regexp: '^.*swap.*$'

- name: Stop firewalld service
  service:
    name: ufw
    state: stopped
    enabled: no

- name: Ensure overlay and br_netfilter are present in containerd.conf
  lineinfile:
    path: /etc/modules-load.d/containerd.conf
    line: "{{ item }}"
    create: yes
  loop:
    - overlay
    - br_netfilter

- name: Load Kernel modules
  modprobe:
     name: "{{ item }}"
  loop:
     - overlay
     - br_netfilter

- name: Ensure kernel settings are present in kubernetes.conf
  lineinfile:
    path: /etc/sysctl.d/kubernetes.conf
    line: "{{ item }}"
    create: yes
  loop:
    - "net.bridge.bridge-nf-call-ip6tables = 1"
    - "net.bridge.bridge-nf-call-iptables = 1"
    - "net.ipv4.ip_forward = 1"

- name: Apply kernel settings
  command: sysctl --system

- name: Set DEBIAN_FRONTEND environment variable
  # Note: a variable exported in a shell task does not persist to later tasks;
  # the 'environment:' keyword on the apt tasks is the usual way to achieve this.
  shell:
    cmd: export DEBIAN_FRONTEND=noninteractive

- name: Add Kubernetes GPG key
  apt_key:
     url: "https://packages.cloud.google.com/apt/doc/apt-key.gpg"
     state: present

- name: Update apt repositories
  apt:
   update_cache: yes

- name: Install required packages
  apt:
     name: "{{ item }}"
     state: present
  loop:
    - apt-transport-https
    - ca-certificates
    - curl
    - gnupg
    - lsb-release

- name: Ensure /etc/apt/keyrings directory exists
  file:
    path: /etc/apt/keyrings
    state: directory
    mode: '0755'
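
# The docker.list entry added below references /etc/apt/keyrings/docker.gpg via signed-by,
# but no task in the original listing creates that key file; on a fresh host the apt update
# could fail. A possible extra task to fill the gap (URL and path assumed from Docker's
# standard apt setup, mirroring the Kubernetes key task further down):
- name: Download Docker GPG key (assumed step, not part of the original run)
  shell:
    cmd: curl -fsSL https://download.docker.com/linux/ubuntu/gpg | gpg --dearmor -o /etc/apt/keyrings/docker.gpg --yes
    creates: /etc/apt/keyrings/docker.gpg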

- name: Check if Docker repository file exists
  stat:
    path: /etc/apt/sources.list.d/docker.list
  register: docker_repo_file

- name: Add Docker repository using echo
  shell:
    cmd: 'echo "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null'
  when: docker_repo_file.stat.exists == false

- name: Run apt update
  apt:
    update_cache: yes


- name: Install containerd.io package
  apt:
    name: containerd.io
    state: present


- name: Generate default containerd configuration
  shell:
     cmd: containerd config default > /etc/containerd/config.toml

- name: Set SystemdCgroup to true in containerd configuration
  replace:
     path: /etc/containerd/config.toml
     regexp: 'SystemdCgroup\s*=\s*false'
     replace: 'SystemdCgroup = true'

- name: Restart containerd service
  service:
    name: containerd
    state: restarted

- name: Enable containerd service
  service:
     name: containerd
     enabled: yes

- name: Download Kubernetes GPG key
  shell:
    cmd: curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.29/deb/Release.key | gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg --yes

- name: Add Kubernetes repository
  copy:
     content: |
        deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.29/deb/ /
     dest: /etc/apt/sources.list.d/kubernetes.list

- name: Update apt repositories
  apt:
    update_cache: yes

- name: Install Kubernetes components
  apt:
    name: "{{ item }}"
    state: present
  loop:
      - kubeadm
      - kubelet
      - kubectl
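
Optionally, you may also want to prevent unattended upgrades from bumping these packages later. A possible extra task using the dpkg_selections module (not part of the original playbook):

- name: Hold Kubernetes packages at the installed version
  dpkg_selections:
    name: "{{ item }}"
    selection: hold
  loop:
    - kubeadm
    - kubelet
    - kubectl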

Create Playbook to Create Cluster

root@06432e7f921c:~/k8s-install# cat roles/master-config/tasks/main.yml
---
- name: Initialize Kubernetes Cluster
  shell:
     cmd: "kubeadm init --apiserver-advertise-address=172.31.120.93 --pod-network-cidr=192.168.0.0/16 >> /root/kubeinit.log 2>/dev/null"
     creates: /root/.kube/config

- name: Deploy Calico network
  shell:
    cmd: "kubectl --kubeconfig=/etc/kubernetes/admin.conf create -f https://raw.githubusercontent.com/projectcalico/calico/v3.27.0/manifests/tigera-operator.yaml && kubectl --kubeconfig=/etc/kubernetes/admin.conf create -f https://raw.githubusercontent.com/projectcalico/calico/v3.27.0/manifests/custom-resources.yaml"
    creates: /etc/kubernetes/admin.conf

- name: Generate and save cluster join command to /joincluster.sh
  shell:
    cmd: "kubeadm token create --print-join-command > /joincluster.sh"
    creates: /joincluster.sh
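
Note that the copy module used in the worker role below reads its src from the Ansible control node, while /joincluster.sh is generated on the master. One way to bridge that gap (an assumed helper, not shown in the original run) is to fetch the script back to the control node at the end of this role:

- name: Fetch the join script to the Ansible control node (assumed helper)
  fetch:
    src: /joincluster.sh
    dest: /joincluster.sh
    flat: yes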

Create Playbook for worker node

root@06432e7f921c:~/k8s-install# cat roles/worker-config/tasks/main.yml
---
- name: Copy joincluster script to worker nodes
  copy:
    src: /joincluster.sh
    dest: /joincluster.sh
    mode: 0755

- name: Execute joincluster.sh script
  shell:
     cmd: /joincluster.sh
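
The join script will fail if a node has already joined the cluster. A possible idempotency guard (assuming /etc/kubernetes/kubelet.conf only exists on nodes that have joined) is to add a creates condition:

- name: Execute joincluster.sh script
  shell:
     cmd: /joincluster.sh
     creates: /etc/kubernetes/kubelet.conf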

Install the common packages on all three nodes

root@06432e7f921c:~/k8s-install# ansible-playbook main.yaml --tags common

PLAY [master,worker] ************************************************************************************************************************************************************************

TASK [k8s-config : Disable swap] ************************************************************************************************************************************************************
changed: [worker1.mylabserver.com]
changed: [master.mylabserver.com]
changed: [worker2.mylabserver.com]

TASK [k8s-config : Remove swap entry from /etc/fstab] ***************************************************************************************************************************************
ok: [worker1.mylabserver.com]
ok: [master.mylabserver.com]
ok: [worker2.mylabserver.com]

TASK [k8s-config : Stop firewalld service] **************************************************************************************************************************************************
ok: [worker1.mylabserver.com]
ok: [master.mylabserver.com]
ok: [worker2.mylabserver.com]

TASK [k8s-config : Ensure overlay and br_netfilter are present in containerd.conf] **********************************************************************************************************
ok: [worker1.mylabserver.com] => (item=overlay)
ok: [master.mylabserver.com] => (item=overlay)
ok: [worker2.mylabserver.com] => (item=overlay)
ok: [worker1.mylabserver.com] => (item=br_netfilter)
ok: [master.mylabserver.com] => (item=br_netfilter)
ok: [worker2.mylabserver.com] => (item=br_netfilter)

TASK [k8s-config : Load Kernel modules] *****************************************************************************************************************************************************
ok: [worker1.mylabserver.com] => (item=overlay)
ok: [master.mylabserver.com] => (item=overlay)
ok: [worker2.mylabserver.com] => (item=overlay)
ok: [worker1.mylabserver.com] => (item=br_netfilter)
ok: [master.mylabserver.com] => (item=br_netfilter)
ok: [worker2.mylabserver.com] => (item=br_netfilter)

TASK [k8s-config : Ensure kernel settings are present in kubernetes.conf] *******************************************************************************************************************
ok: [worker1.mylabserver.com] => (item=net.bridge.bridge-nf-call-ip6tables = 1)
ok: [worker2.mylabserver.com] => (item=net.bridge.bridge-nf-call-ip6tables = 1)
ok: [master.mylabserver.com] => (item=net.bridge.bridge-nf-call-ip6tables = 1)
ok: [worker1.mylabserver.com] => (item=net.bridge.bridge-nf-call-iptables = 1)
ok: [worker2.mylabserver.com] => (item=net.bridge.bridge-nf-call-iptables = 1)
ok: [master.mylabserver.com] => (item=net.bridge.bridge-nf-call-iptables = 1)
ok: [worker1.mylabserver.com] => (item=net.ipv4.ip_forward = 1)
ok: [worker2.mylabserver.com] => (item=net.ipv4.ip_forward = 1)
ok: [master.mylabserver.com] => (item=net.ipv4.ip_forward = 1)

TASK [k8s-config : Apply kernel settings] ***************************************************************************************************************************************************
changed: [worker1.mylabserver.com]
changed: [master.mylabserver.com]
changed: [worker2.mylabserver.com]

TASK [k8s-config : Set DEBIAN_FRONTEND environment variable] ********************************************************************************************************************************
changed: [worker1.mylabserver.com]
changed: [master.mylabserver.com]
changed: [worker2.mylabserver.com]

TASK [k8s-config : Add Kubernetes GPG key] **************************************************************************************************************************************************
ok: [worker1.mylabserver.com]
ok: [master.mylabserver.com]
ok: [worker2.mylabserver.com]

TASK [k8s-config : Update apt repositories] *************************************************************************************************************************************************
changed: [worker1.mylabserver.com]
changed: [master.mylabserver.com]
changed: [worker2.mylabserver.com]

TASK [k8s-config : Install required packages] ***********************************************************************************************************************************************
ok: [worker1.mylabserver.com] => (item=apt-transport-https)
ok: [master.mylabserver.com] => (item=apt-transport-https)
ok: [worker2.mylabserver.com] => (item=apt-transport-https)
ok: [worker1.mylabserver.com] => (item=ca-certificates)
ok: [master.mylabserver.com] => (item=ca-certificates)
ok: [worker2.mylabserver.com] => (item=ca-certificates)
ok: [worker1.mylabserver.com] => (item=curl)
ok: [master.mylabserver.com] => (item=curl)
ok: [worker1.mylabserver.com] => (item=gnupg)
ok: [worker2.mylabserver.com] => (item=curl)
ok: [master.mylabserver.com] => (item=gnupg)
ok: [worker1.mylabserver.com] => (item=lsb-release)
ok: [worker2.mylabserver.com] => (item=gnupg)
ok: [master.mylabserver.com] => (item=lsb-release)
ok: [worker2.mylabserver.com] => (item=lsb-release)

TASK [k8s-config : Ensure /etc/apt/keyrings directory exists] *******************************************************************************************************************************
ok: [worker1.mylabserver.com]
ok: [worker2.mylabserver.com]
ok: [master.mylabserver.com]

TASK [k8s-config : Check if Docker repository file exists] **********************************************************************************************************************************
ok: [worker1.mylabserver.com]
ok: [master.mylabserver.com]
ok: [worker2.mylabserver.com]

TASK [k8s-config : Add Docker repository using echo] ****************************************************************************************************************************************
skipping: [master.mylabserver.com]
skipping: [worker2.mylabserver.com]
skipping: [worker1.mylabserver.com]

TASK [k8s-config : Run apt update] **********************************************************************************************************************************************************
changed: [worker1.mylabserver.com]
changed: [worker2.mylabserver.com]
changed: [master.mylabserver.com]

TASK [k8s-config : Install containerd.io package] *******************************************************************************************************************************************
changed: [worker1.mylabserver.com]
changed: [worker2.mylabserver.com]
changed: [master.mylabserver.com]

TASK [k8s-config : Generate default containerd configuration] *******************************************************************************************************************************
changed: [worker1.mylabserver.com]
changed: [master.mylabserver.com]
changed: [worker2.mylabserver.com]

TASK [k8s-config : Set SystemdCgroup to true in containerd configuration] *******************************************************************************************************************
ok: [worker1.mylabserver.com]
ok: [worker2.mylabserver.com]
ok: [master.mylabserver.com]

TASK [k8s-config : Restart containerd service] **********************************************************************************************************************************************
changed: [worker1.mylabserver.com]
changed: [master.mylabserver.com]
changed: [worker2.mylabserver.com]

TASK [k8s-config : Enable containerd service] ***********************************************************************************************************************************************
ok: [worker1.mylabserver.com]
ok: [master.mylabserver.com]
ok: [worker2.mylabserver.com]

TASK [k8s-config : Download Kubernetes GPG key] *********************************************************************************************************************************************
[WARNING]: Consider using the get_url or uri module rather than running 'curl'.  If you need to use command because get_url or uri is insufficient you can add 'warn: false' to this command
task or set 'command_warnings=False' in ansible.cfg to get rid of this message.
changed: [master.mylabserver.com]
changed: [worker2.mylabserver.com]
changed: [worker1.mylabserver.com]

TASK [k8s-config : Add Kubernetes repository] ***********************************************************************************************************************************************
ok: [worker1.mylabserver.com]
ok: [master.mylabserver.com]
ok: [worker2.mylabserver.com]

TASK [k8s-config : Update apt repositories] *************************************************************************************************************************************************
changed: [worker1.mylabserver.com]
changed: [worker2.mylabserver.com]
changed: [master.mylabserver.com]

TASK [k8s-config : Install Kubernetes components] *******************************************************************************************************************************************
ok: [worker1.mylabserver.com] => (item=kubeadm)
ok: [master.mylabserver.com] => (item=kubeadm)
ok: [worker2.mylabserver.com] => (item=kubeadm)
ok: [worker1.mylabserver.com] => (item=kubelet)
ok: [master.mylabserver.com] => (item=kubelet)
ok: [worker2.mylabserver.com] => (item=kubelet)
ok: [worker1.mylabserver.com] => (item=kubectl)
ok: [worker2.mylabserver.com] => (item=kubectl)
ok: [master.mylabserver.com] => (item=kubectl)

Initialize the Cluster on the Master Node

root@06432e7f921c:~/k8s-install# ansible-playbook main.yaml --tags master
PLAY [master] *******************************************************************************************************************************************************************************

TASK [master-config : Initialize Kubernetes Cluster] ****************************************************************************************************************************************
changed: [master.mylabserver.com]

TASK [master-config : Deploy Calico network] ************************************************************************************************************************************************
changed: [master.mylabserver.com]

TASK [master-config : Generate and save cluster join command to /joincluster.sh] ************************************************************************************************************
changed: [master.mylabserver.com]

PLAY RECAP **********************************************************************************************************************************************************************************
master.mylabserver.com     : ok=26   changed=12   unreachable=0    failed=0    skipped=1    rescued=0    ignored=0

Join the Worker Nodes to the Cluster

root@06432e7f921c:~/k8s-install# ansible-playbook main.yaml --tags worker

PLAY [worker] *******************************************************************************************************************************************************************************

TASK [worker-config : Copy joincluster script to worker nodes] ******************************************************************************************************************************
ok: [worker1.mylabserver.com]
ok: [worker2.mylabserver.com]

TASK [worker-config : Execute joincluster.sh script] ****************************************************************************************************************************************
changed: [worker1.mylabserver.com]
changed: [worker2.mylabserver.com]

PLAY RECAP **********************************************************************************************************************************************************************************
worker1.mylabserver.com    : ok=2    changed=1    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0
worker2.mylabserver.com    : ok=2    changed=1    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0

Create a .kube directory and copy the admin kubeconfig into it (if you are running kubectl somewhere other than the master, copy /etc/kubernetes/admin.conf over from the master first)

root@06432e7f921c:~/k8s-install# mkdir /root/.kube
root@06432e7f921c:~/k8s-install# cp /etc/kubernetes/admin.conf /root/.kube/config

Let's verify the cluster is configured properly

root@06432e7f921c:~/k8s-install# kubectl get nodes
NAME                      STATUS   ROLES           AGE   VERSION
master.mylabserver.com    Ready    control-plane   54m   v1.29.3
worker1.mylabserver.com   Ready    <none>          43m   v1.29.3
worker2.mylabserver.com   Ready    <none>          43m   v1.29.3
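
The worker nodes show <none> under ROLES; if you would like them labelled, you can optionally add the worker role label yourself (purely cosmetic, not required for scheduling):

kubectl label node worker1.mylabserver.com node-role.kubernetes.io/worker=
kubectl label node worker2.mylabserver.com node-role.kubernetes.io/worker=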

Conclusion

Automating the installation of Kubernetes using Ansible can significantly simplify the process and reduce the chance of errors. With Ansible playbooks, we can quickly provision and configure Kubernetes clusters on any infrastructure, whether it's on-premises or in the cloud.

In this blog post, we've only scratched the surface of what's possible with Ansible. As you become more familiar with Ansible and Kubernetes, you can further customize your playbooks to meet your specific requirements and integrate additional automation tasks.

Happy automating!

Understanding Kubernetes Security with KubeArmor: Enhancing Container Protection

Introduction to KubeArmor

KubeArmor is an open-source runtime security solution designed specifically for Kubernetes environments and is now a CNCF sandbox project. It provides fine-grained, container-aware security policies to enforce access control, system call filtering, and network policies.

Architecture Overview

(KubeArmor architecture diagram)

The Need for Kubernetes Security

As organizations increasingly adopt Kubernetes for their containerized workloads, ensuring the security of these environments becomes crucial. Traditional security measures are often inadequate in containerized environments due to their dynamic nature and the large attack surface they present. KubeArmor addresses these challenges by providing granular security controls tailored to Kubernetes deployments.

Key Features of KubeArmor

Container-Aware Security Policies: KubeArmor allows administrators to define security policies at the container level, ensuring that each workload operates within its designated security context.

System Call Filtering: By intercepting and filtering system calls made by containers, KubeArmor can prevent unauthorized actions and enforce security policies in real-time.

Network Policies: KubeArmor enables administrators to define network policies to control inbound and outbound traffic between containers, helping to prevent lateral movement and unauthorized access.

Audit Logging: KubeArmor logs all security-related events, providing administrators with visibility into container activities and potential security threats.

How KubeArmor Works

KubeArmor operates by deploying an agent as a DaemonSet within the Kubernetes cluster. This agent intercepts system calls made by containers and enforces security policies defined by the administrator. By leveraging the Linux Security Module (LSM) framework, KubeArmor integrates seamlessly with the underlying operating system, ensuring minimal performance overhead.

Install KubeArmor

root@master:~# curl -sfL http://get.kubearmor.io/ | sudo sh -s -- -b /usr/local/bin
kubearmor/kubearmor-client info checking GitHub for latest tag
kubearmor/kubearmor-client info found version: 1.2.1 for v1.2.1/linux/amd64
kubearmor/kubearmor-client info installed /usr/local/bin/karmor
root@master:~# karmor install
🛑      Installed helm release : kubearmor-operator
😄      KubeArmorConfig created
⌚️      This may take a couple of minutes
ℹ️       Waiting for KubeArmor Snitch to run: —

Verify the Installation

root@master:~# kubectl get all -n kubearmor
NAME                                            READY   STATUS    RESTARTS   AGE
pod/kubearmor-apparmor-containerd-98c2c-6k2mc   1/1     Running   0          44m
pod/kubearmor-apparmor-containerd-98c2c-hhgww   1/1     Running   0          3m19s
pod/kubearmor-apparmor-containerd-98c2c-m2974   1/1     Running   0          45m
pod/kubearmor-controller-575b4b46c5-jnppw       2/2     Running   0          3m10s
pod/kubearmor-operator-5bcfb76b4f-8kkwp         1/1     Running   0          47m
pod/kubearmor-relay-6b59fbf77f-wqp7r            1/1     Running   0          46m

NAME                                           TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)     AGE
service/kubearmor                              ClusterIP   10.97.110.240    <none>        32767/TCP   46m
service/kubearmor-controller-metrics-service   ClusterIP   10.103.151.164   <none>        8443/TCP    46m
service/kubearmor-controller-webhook-service   ClusterIP   10.111.41.204    <none>        443/TCP     46m

NAME                                                 DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR                                                                                                                                                                            AGE
daemonset.apps/kubearmor-apparmor-containerd-98c2c   3         3         3       3            3           kubearmor.io/btf=yes,kubearmor.io/enforcer=apparmor,kubearmor.io/runtime=containerd,kubearmor.io/seccomp=yes,kubearmor.io/socket=run_containerd_containerd.sock,kubernetes.io/os=linux   45m

NAME                                   READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/kubearmor-controller   1/1     1            1           46m
deployment.apps/kubearmor-operator     1/1     1            1           47m
deployment.apps/kubearmor-relay        1/1     1            1           46m

NAME                                              DESIRED   CURRENT   READY   AGE
replicaset.apps/kubearmor-controller-575b4b46c5   1         1         1       3m10s
replicaset.apps/kubearmor-controller-64b5b9d54b   0         0         0       46m
replicaset.apps/kubearmor-operator-5bcfb76b4f     1         1         1       47m
replicaset.apps/kubearmor-relay-6b59fbf77f        1         1         1       46m

Deploy nginx app in default namespace

root@master:~# kubectl create deployment nginx --image=nginx
deployment.apps/nginx created

root@master:~# kubectl get pods
NAME                     READY   STATUS    RESTARTS   AGE
nginx-748c667d99-qjjxn   1/1     Running   0          34s
root@master:~# POD=$(kubectl get pod -l app=nginx -o name)
root@master:~# echo $POD
pod/nginx-748c667d99-qjjxn

Use Case: Deny Installation of Package Management Tools

Create a policy that blocks execution of the package management tools inside the nginx pod:

root@master:~# cat <<EOF | kubectl apply -f -
apiVersion: security.kubearmor.com/v1
kind: KubeArmorPolicy
metadata:
  name: block-pkg-mgmt-tools-exec
spec:
  selector:
    matchLabels:
      app: nginx
  process:
    matchPaths:
    - path: /usr/bin/apt
    - path: /usr/bin/apt-get
  action:
    Block
EOF
kubearmorpolicy.security.kubearmor.com/block-pkg-mgmt-tools-exec created

Let's verify by trying to install a package

root@master:~# kubectl exec -it $POD -- bash -c "apt update && apt install masscan"
bash: line 1: /usr/bin/apt: Permission denied
command terminated with exit code 126
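
KubeArmor also emits an alert for the blocked action. Assuming the karmor CLI installed earlier, you can stream policy violations while re-running the command above:

karmor logs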

Use Case: Deny Access to the Service Account Token

To mitigate the risk posed by default service account tokens being mounted in every pod, even when the application does not use them, you can restrict access to the token with a policy:

root@master:~# cat <<EOF | kubectl apply -f -
apiVersion: security.kubearmor.com/v1
kind: KubeArmorPolicy
metadata:
  name: block-service-access-token-access
spec:
  selector:
    matchLabels:
      app: nginx
  file:
    matchDirectories:
    - dir: /run/secrets/kubernetes.io/serviceaccount/
      recursive: true
  action:
    Block
EOF
kubearmorpolicy.security.kubearmor.com/block-service-access-token-access created

Verify the service token access

root@master:~# kubectl exec -it $POD -- bash
root@nginx-748c667d99-qjjxn:/# curl https://$KUBERNETES_PORT_443_TCP_ADDR/api --insecure --header "Authorization: Bearer $(cat /run/secrets/kubernetes.io/serviceaccount/token)"
cat: /run/secrets/kubernetes.io/serviceaccount/token: Permission denied
{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {},
  "status": "Failure",
  "message": "forbidden: User \"system:anonymous\" cannot get path \"/api\"",
  "reason": "Forbidden",
  "details": {},
  "code": 403

Use Case: Least Permission Policy

root@master:~# kubectl annotate ns default kubearmor-file-posture=block --overwrite
namespace/default annotated
root@master:~# cat <<EOF | kubectl apply -f -
apiVersion: security.kubearmor.com/v1
kind: KubeArmorPolicy
metadata:
  name: only-allow-nginx-exec
spec:
  selector:
    matchLabels:
      app: nginx
  file:
    matchDirectories:
    - dir: /
      recursive: true
  process:
    matchPaths:
    - path: /usr/sbin/nginx
    - path: /bin/bash
  action:
    Allow
EOF
kubearmorpolicy.security.kubearmor.com/only-allow-nginx-exec created

Verify the Access

root@master:~# kubectl exec -it $POD -- bash -c "chroot"
bash: line 1: /usr/sbin/chroot: Permission denied
command terminated with exit code 126

Benefits of Using KubeArmor

Enhanced Security Posture: By enforcing fine-grained security policies at the container level, KubeArmor reduces the attack surface and mitigates the risk of unauthorized access and data breaches.

Compliance and Auditing: KubeArmor's audit logging capabilities help organizations demonstrate compliance with regulatory requirements and industry standards by providing detailed records of security-related events.

Operational Efficiency: With KubeArmor, administrators can centrally manage security policies across their Kubernetes clusters, streamlining security operations and reducing manual overhead.

Conclusion

In today's threat landscape, securing Kubernetes environments is paramount for organizations seeking to harness the benefits of containerization without compromising on security. KubeArmor offers a robust solution for enhancing Kubernetes security by providing fine-grained security controls tailored to containerized workloads. By leveraging KubeArmor's capabilities, organizations can mitigate security risks, achieve compliance, and ensure the integrity of their Kubernetes deployments.

Simplifying TLS Certificate Management in Kubernetes with Cert-manager and Vault

Introduction

Cert-manager creates TLS certificates for workloads in your Kubernetes or OpenShift cluster and renews the certificates before they expire. Cert-manager can obtain certificates from a variety of certificate authorities, including: Let's Encrypt, HashiCorp Vault, Venafi and private PKI.

In this blog, I am using HashiCorp Vault as the certificate issuer with cert-manager.

Log in to Vault (I am running Vault inside the cluster):

root@master:~/vault# kubectl exec -it vault-0 -- /bin/sh

Enable the PKI secrets engine at its default path.

/ $ vault secrets enable pki
Success! Enabled the pki secrets engine at: pki/

Configure the max lease time-to-live

/ $ vault secrets tune -max-lease-ttl=8760h pki
Success! Tuned the secrets engine at: pki/

Generate a self-signed root CA certificate

/ $ vault write pki/root/generate/internal \
>     common_name=arobyte.tech \
>     ttl=8760h
Key              Value
---              -----
certificate      -----BEGIN CERTIFICATE-----
MIIDODCCAiCgAwIBAgIUekLUNWVLV3am8DTRk33Y9KX0t8kwDQYJKoZIhvcNAQEL
BQAwFzEVMBMGA1UEAxMMYXJvYnl0ZS50ZWNoMB4XDTI0MDMxMzE1NDU0NVoXDTI1
MDMxMzE1NDYxNVowFzEVMBMGA1UEAxMMYXJvYnl0ZS50ZWNoMIIBIjANBgkqhkiG

-----END CERTIFICATE-----
expiration       1741880775
issuing_ca       -----BEGIN CERTIFICATE-----
MIIDODCCAiCgAwIBAgIUekLUNWVLV3am8DTRk33Y9KX0t8kwDQYJKoZIhvcNAQEL
BQAwFzEVMBMGA1UEAxMMYXJvYnl0ZS50ZWNoMB4XDTI0MDMxMzE1NDU0NVoXDTI1
zw4bj+X2hQyMqu5QHdFF4n58s9I9M5oq9IIBlMqxQqQdN79UirJc/LTk71roOKi7
PD1A3HmuNnWt04+0f8maI9txbUToWq15t8d5zBoM85sF2AGc04OmQmXvL+cGqImJ
9+RIo+iKIJnLiAMt
-----END CERTIFICATE-----
serial_number    7a:42:d4:35:65:4b:57:76:a6:f0:34:d1:93:7d:d8:f4:a5:f4:b7:c9

Configure the PKI secrets engine certificate issuing and certificate revocation list (CRL) endpoints to use the Vault service in the vault namespace.

/ $ vault write pki/config/urls \
>     issuing_certificates="http://vault.vault.svc.cluster.local:8200/v1/pki/ca" \
>     crl_distribution_points="http://vault.vault.svc.cluster.local:8200/v1/pki/crl"
Success! Data written to: pki/config/urls

Configure a role named arobyte-role-tech that enables the creation of certificates for the arobyte.tech domain and any of its subdomains.

/ $ vault write pki/roles/arobyte-role-tech \
>     allowed_domains=arobyte.tech \
>     allow_subdomains=true \
>     max_ttl=72h
Success! Data written to: pki/roles/arobyte-role-tech

Create a policy named pki that grants read access to the PKI secrets engine paths and allows signing and issuing certificates with the arobyte-role-tech role.

/ $ vault policy write pki - <<EOF
> path "pki*"                        { capabilities = ["read", "list"] }
> path "pki/sign/arobyte-role-com"    { capabilities = ["create", "update"] }
> path "pki/issue/arobyte-role-com"   { capabilities = ["create"] }
> EOF
Success! Uploaded policy: pki

Enable the Kubernetes authentication method.

/ $ vault auth enable kubernetes
Success! Enabled kubernetes auth method at: kubernetes/

Configure the Kubernetes authentication method to use the location of the Kubernetes API.

/ $ vault write auth/kubernetes/config \
>     kubernetes_host="https://$KUBERNETES_PORT_443_TCP_ADDR:443"
Success! Data written to: auth/kubernetes/config

Create a Kubernetes authentication role named issuer that binds the pki policy with a Kubernetes service account named issuer

/ $ vault write auth/kubernetes/role/issuer \
>     bound_service_account_names=issuer \
>     bound_service_account_namespaces=default \
>     policies=pki \
>     ttl=20m
Success! Data written to: auth/kubernetes/role/issuer

Let's Install cert-manager

root@master:~# kubectl create namespace cert-manager

Install Jetstack cert-manager's v1.12.3 CRD resources.

root@master:~# kubectl apply --validate=false -f https://github.com/jetstack/cert-manager/releases/download/v1.12.3/cert-manager.crds.yaml
customresourcedefinition.apiextensions.k8s.io/certificaterequests.cert-manager.io created
customresourcedefinition.apiextensions.k8s.io/certificates.cert-manager.io created
customresourcedefinition.apiextensions.k8s.io/challenges.acme.cert-manager.io created
customresourcedefinition.apiextensions.k8s.io/clusterissuers.cert-manager.io created
customresourcedefinition.apiextensions.k8s.io/issuers.cert-manager.io created
customresourcedefinition.apiextensions.k8s.io/orders.acme.cert-manager.io created
root@master:~# helm repo add jetstack https://charts.jetstack.io
"jetstack" has been added to your repositories
root@master:~# helm repo update
...Successfully got an update from the "jetstack" chart repository
Update Complete. ⎈Happy Helming!⎈

Install the cert-manager

root@master:~# helm install cert-manager \
    --namespace cert-manager \
    --version v1.12.3 \
  jetstack/cert-manager

NAME: cert-manager
LAST DEPLOYED: Wed Mar 13 21:26:19 2024
NAMESPACE: cert-manager
STATUS: deployed

Check the status

root@master:~# kubectl get pods --namespace cert-manager
NAME                                       READY   STATUS    RESTARTS   AGE
cert-manager-65dfbdf7d6-qp5fk              1/1     Running   0          3m44s
cert-manager-cainjector-79f5dbffcf-lh4d6   1/1     Running   0          3m44s
cert-manager-webhook-77b984cc67-8nxhh      1/1     Running   0          3m44s

Create a service account named issuer within the default namespace.

root@master:~# kubectl create serviceaccount issuer
serviceaccount/issuer created

Create a secret definition

root@master:~# cat >> issuer-secret.yaml <<EOF
apiVersion: v1
kind: Secret
metadata:
  name: issuer-token
  annotations:
    kubernetes.io/service-account.name: issuer
type: kubernetes.io/service-account-token
EOF
root@master:~# kubectl apply -f issuer-secret.yaml
secret/issuer-token created
root@master:~# kubectl get secrets
NAME                 TYPE                                  DATA   AGE
issuer-token   kubernetes.io/service-account-token   3      7s

Define an Issuer, named vault-issuer, that sets Vault as a certificate issuer.

root@master:~# cat > vault-issuer.yaml <<EOF
apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
  name: vault-issuer
  namespace: default
spec:
  vault:
    server: http://vault.vault.svc.cluster.local:8200
    path: pki/sign/arobyte-role-tech
    auth:
      kubernetes:
        mountPath: /v1/auth/kubernetes
        role: issuer
        secretRef:
          name: issuer-token
          key: token
EOF
root@master:~# kubectl apply --filename vault-issuer.yaml
issuer.cert-manager.io/vault-issuer created

Create the arobyte-tech certificate

root@master:~# cat arobyte-tech-cert.yaml
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: arobyte-tech
  namespace: default
spec:
  secretName: arobyte-role-tech
  issuerRef:
    name: vault-issuer
  commonName: www.arobyte.tech
  dnsNames:
  - www.arobyte.tech
root@master:~# kubectl apply --filename arobyte-tech-cert.yaml

View the details of the arobyte-tech certificate

root@master:~# kubectl describe certificate.cert-manager arobyte-tech -n default
Name:         arobyte-tech
Namespace:    default
Labels:       <none>
Annotations:  <none>
API Version:  cert-manager.io/v1
Kind:         Certificate


Events:
  Type    Reason     Age    From                                       Message
  ----    ------     ----   ----                                       -------
  Normal  Issuing    2m55s  cert-manager-certificates-trigger          Issuing certificate as Secret does not exist
  Normal  Generated  2m55s  cert-manager-certificates-key-manager      Stored new private key in temporary Secret resource "arobyte-tech-gwkjp"
  Normal  Requested  2m54s  cert-manager-certificates-request-manager  Created new CertificateRequest resource "arobyte-tech-fdvjr"
  Normal  Issuing    27s    cert-manager-certificates-issuing          The certificate has been successfully issued


The certificate reports that it has been issued successfully.

Verify the Certificates

root@master:~# kubectl get certificate
NAME           READY   SECRET              AGE
arobyte-tech   True    arobyte-role-tech   95m


root@master:~# kubectl describe secrets arobyte-role-tech
Name:         arobyte-role-tech
Namespace:    default
Labels:       controller.cert-manager.io/fao=true
Annotations:  cert-manager.io/alt-names: www.arobyte.tech
              cert-manager.io/certificate-name: arobyte-tech
              cert-manager.io/common-name: www.arobyte.tech
              cert-manager.io/ip-sans:
              cert-manager.io/issuer-group:
              cert-manager.io/issuer-kind: Issuer
              cert-manager.io/issuer-name: vault-issuer
              cert-manager.io/uri-sans:

Type:  kubernetes.io/tls

Data
====
tls.key:  1675 bytes
ca.crt:   1176 bytes
tls.crt:  1419 bytes

root@master:~# kubectl describe certificate arobyte-tech
Name:         arobyte-tech
Namespace:    default
Labels:       <none>
Annotations:  <none>
API Version:  cert-manager.io/v1
Kind:         Certificate
Metadata:

Spec:
  Common Name:  www.arobyte.tech
  Dns Names:
    www.arobyte.tech
  Issuer Ref:
    Name:       vault-issuer
  Secret Name:  arobyte-role-tech
Status:
  Conditions:
    Last Transition Time:  2024-03-13T17:43:48Z
    Message:               Certificate is up to date and has not expired
    Observed Generation:   1
    Reason:                Ready
    Status:                True
    Type:                  Ready
  Not After:               2024-03-16T17:43:47Z
  Not Before:              2024-03-13T17:43:17Z
  Renewal Time:            2024-03-15T17:43:37Z
  Revision:                1
Events:                    <none>
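
Because the issued certificate is stored in the secret referenced by spec.secretName (arobyte-role-tech here), it can be consumed like any TLS secret, for example from an Ingress. A minimal sketch (the Ingress name, ingress class, and backend service are assumptions, not part of the setup above):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: arobyte-tech
  namespace: default
spec:
  ingressClassName: nginx        # assumed ingress controller
  tls:
  - hosts:
    - www.arobyte.tech
    secretName: arobyte-role-tech
  rules:
  - host: www.arobyte.tech
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: nginx          # assumed backend service
            port:
              number: 80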


Streamlining Kubernetes Configuration: A GitLab CI/CD Guide with Config-lint Validation

Introduction

Config-lint is a powerful command-line tool designed to streamline the validation of Kubernetes configuration files. By leveraging rules specified in YAML, Config-lint ensures adherence to best practices, security standards, and custom policies. This tool is indispensable for integrating into Continuous Integration and Continuous Deployment (CI/CD) pipelines, allowing seamless validation of configuration changes before deployment.

Integrating Config-lint into CI/CD Pipelines

One of the key benefits of Config-lint is its seamless integration into CI/CD pipelines. By incorporating Config-lint as a step in your pipeline, you can automatically validate Kubernetes configuration files before deployment. This ensures that only compliant configurations are promoted to production environments, reducing the risk of misconfigurations and potential downtime.

Custom Rules with YAML

Config-lint allows users to define custom rules using YAML configuration files. This flexibility enables organizations to enforce specific standards and policies tailored to their environment. Whether it's enforcing naming conventions, resource limits, or security policies, Config-lint's YAML-based rules empower teams to maintain consistency and compliance across Kubernetes configurations.

Validating Helm Charts

In addition to standalone configuration files, Config-lint can also validate Helm charts. Helm is a popular package manager for Kubernetes, and ensuring the integrity of Helm charts is crucial for smooth deployments. With Config-lint, teams can validate Helm charts against predefined rules, ensuring that charts adhere to best practices and organizational standards.

Config-lint simplifies Kubernetes configuration validation by providing a flexible and intuitive toolset. By integrating Config-lint into CI/CD pipelines and leveraging custom YAML rules, organizations can ensure the reliability, security, and compliance of their Kubernetes deployments. With support for Helm charts validation, Config-lint offers a comprehensive solution for maintaining consistency and best practices across Kubernetes environments. Start using Config-lint today to streamline your Kubernetes configuration validation process and elevate your CI/CD workflows to the next level of efficiency and reliability.

Integrating Config-lint into GitLab CI/CD

1- Dockerfile for creating the image which will be used in the pipeline

root@master:~# cat Dockerfile
FROM ubuntu:latest
LABEL maintainer="omvedi25@gmail.com"
COPY config-lint /usr/local/bin/
COPY helm /usr/local/bin/

2- Build the image and push it to the registry

root@master:~# docker build -t omvedi25/config-lint:v1.1 .

root@master:~# docker push omvedi25/config-lint:v1.1

3- Create a gitlab-ci.yaml pipeline

(screenshot: the gitlab-ci.yaml pipeline definition)

4- Create a project for helm Chart

(screenshot: the Helm chart project in GitLab)

5- Let's create a rules.yaml file with the rules that will be validated against the chart before it is pushed to ChartMuseum.

version: 1
description: Rules for Kubernetes spec files
type: Kubernetes
files:
  - "*.yaml"
rules:
  - id: POD_RESOURCE_REQUESTS_LIMITS
    severity: FAILURE
    message: Containers in Pod must specify both resource requests and limits
    resource: Pod
    assertions:
      - key: spec.containers[*].resources.requests
        op: notPresent
      - key: spec.containers[*].resources.limits
        op: notPresent
    match: any
    tags:
      - pod

  - id: DEPLOYMENT_RESOURCE_REQUESTS_LIMITS
    severity: FAILURE
    message: Containers in Deployment must specify both resource requests and limits
    resource: Deployment
    assertions:
      - key: spec.template.spec.containers[*].resources.requests
        op: notPresent
      - key: spec.template.spec.containers[*].resources.limits
        op: notPresent
    match: all
    tags:
      - deployment

The rules above check whether resource requests and limits are specified for the Pods and Deployments in the chart.
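
To try the rules locally before wiring them into the pipeline, one approach (assuming the chart renders with its default values; the chart name mychart is a placeholder) is to render it with helm template and point config-lint at the output:

helm template mychart ./mychart --output-dir rendered
config-lint -rules rules.yaml rendered/mychart/templates/*.yaml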

6- Create gitlab-ci.yaml to run the validation on the charts

---
include:
  - project: 'guilds1/cloud-native-guild/helm/tooling/helm-pipelines'
    file: '/.config-lint.yaml'
    ref: main
  - project: 'guilds1/cloud-native-guild/helm/tooling/helm-pipelines'
    file: '/.helm.yaml'
  - project: 'guilds1/cloud-native-guild/helm/tooling/helm-pipelines'
    file: '/.kind.yaml'
    ref: main

variables:
  CHART: ${CI_PROJECT_NAME}
  IMAGE_HELM_CHART_LINT: "quay.io/helmpack/chart-testing:v3.3.1"
  IMAGE_TEST_DOCS: "renaultdigital/helm-docs:v1.5.0"

stages:
  - pretest
  - validation
  - lint
  - test
  - build
  - make_release
  - publish
  - integration

7- Let's run the pipeline and validate the rules.

We can see the rules are working as expected. We can write custom rules as required to validate that charts include the mandatory options.

Other Examples

# wget https://raw.githubusercontent.com/stelligent/config-lint/master/example-files/rules/kubernetes.yml

# cat kubernetes.yml

Maximizing CI/CD Efficiency: A Guide to GitLab Runner Setup as a Self-hosted Runner on Kubernetes

Introduction


GitLab Runner is an open-source project that works in conjunction with GitLab CI/CD pipelines to automate the process of building, testing, and deploying software. It can run various types of jobs, including builds, tests, and deployments, based on the instructions provided in the .gitlab-ci.yml configuration file.

Why Use GitLab Runner on Kubernetes?

Scalability: Kubernetes allows for easy scaling of resources. GitLab Runner deployments can dynamically scale based on workload demands, ensuring optimal resource utilization.

Isolation: Kubernetes provides container orchestration, allowing GitLab Runner jobs to run in isolated environments (pods). This isolation ensures that jobs do not interfere with each other and provides security benefits.

Resource Efficiency: GitLab Runner on Kubernetes can efficiently utilize cluster resources by scheduling jobs on available nodes, thereby maximizing resource utilization and minimizing idle capacity.

Consistency: Running GitLab Runners on Kubernetes ensures consistency across different environments, whether it's development, testing, or production. The same Kubernetes environment can be used to run CI/CD pipelines consistently.

Key Components

GitLab Runner: The agent responsible for executing CI/CD jobs defined in .gitlab-ci.yml files. It interacts with GitLab CI/CD and Kubernetes API to schedule and run jobs in Kubernetes pods.

Kubernetes: An open-source container orchestration platform that automates the deployment, scaling, and management of containerized applications. GitLab Runner utilizes Kubernetes to manage the lifecycle of CI/CD job pods.

Helm: Helm is a package manager for Kubernetes that allows you to define, install, and manage applications on Kubernetes. GitLab Runner can be deployed on Kubernetes using Helm charts provided by GitLab.

Install GitLab Runner on Kubernetes

1- Add the helm repo

root@master:~# helm repo add gitlab https://charts.gitlab.io
"gitlab" already exists with the same configuration, skipping

2- Update the repo

root@master:~# helm repo update gitlab
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "gitlab" chart repository
Update Complete. ⎈Happy Helming!⎈

3- Create a values.yaml file

root@master:~# cat values.yaml
gitlabUrl: https://gitlab.com/

runnerRegistrationToken: "gitlab-runner-token"

concurrent: 10

checkInterval: 30

rbac:
  create: true
  rules:
    - apiGroups: [""]
      resources: ["pods"]
      verbs: ["list", "get", "watch", "create", "delete"]
    - apiGroups: [""]
      resources: ["pods/exec"]
      verbs: ["create"]
    - apiGroups: [""]
      resources: ["pods/log"]
      verbs: ["get"]
    - apiGroups: [""]
      resources: ["pods/attach"]
      verbs: ["list", "get", "create", "delete", "update"]
    - apiGroups: [""]
      resources: ["secrets"]
      verbs: ["list", "get", "create", "delete", "update"]
    - apiGroups: [""]
      resources: ["configmaps"]
      verbs: ["list", "get", "create", "delete", "update"]

runners:
  privileged: true

  config: |
    [[runners]]
      [runners.kubernetes]
        namespace = "gitlab-runner"
        tls_verify = false
        image = "docker:19"
        privileged = false

4- Create the namespace and deploy the Helm chart

root@master:~# kubectl create ns gitlab-runner

root@master:~# helm install gitlab-runner gitlab/gitlab-runner -f values.yaml --namespace gitlab-runner
NAME: gitlab-runner
LAST DEPLOYED: Mon Mar  4 22:09:02 2024
NAMESPACE: gitlab-runner
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
Your GitLab Runner should now be registered against the GitLab instance reachable at: "https://gitlab.com/"

5- Verify the runner

root@master:~# kubectl -n gitlab-runner get pods
NAME                             READY   STATUS    RESTARTS   AGE
gitlab-runner-7b8ff76bff-mptdc   1/1     Running   0          39s

Login to https://gitlab.com and verify the runner registration


Create a project in GitLab.

Create a .gitlab-ci.yml and use the kubernetes tag so the pipeline runs on the Kubernetes runner; a minimal example is sketched below.
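
Since the original screenshot is not reproduced here, a minimal .gitlab-ci.yml along these lines would work (the job name, image, and the kubernetes runner tag are assumptions based on the text):

stages:
  - test

check-runner:
  stage: test
  image: alpine:latest
  tags:
    - kubernetes
  script:
    - echo "Running on the self-hosted Kubernetes runner"
    - uname -a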


Run the Pipeline

Run the command below to verify that a new runner pod starts in the gitlab-runner namespace

root@master:~# kubectl -n gitlab-runner get pods
NAME                                                      READY   STATUS     RESTARTS   AGE
gitlab-runner-7b8ff76bff-mptdc                            1/1     Running    0          18m
runner-kea6jzghg-project-45006412-concurrent-0-1heswzap   0/2     Init:0/1   0          56s

Verify the pipeline in the GitLab UI.

We are now able to run our pipeline jobs on our self-hosted runner in Kubernetes.

ChartMuseum: Simplifying Helm Chart Management

ChartMuseum is an open-source Helm chart repository written in Go, with support for cloud storage backends including Google Cloud Storage, Amazon S3, Microsoft Azure Blob Storage, Alibaba Cloud OSS Storage, OpenStack Object Storage, Oracle Cloud Infrastructure Object Storage, Baidu Cloud BOS Storage, Tencent Cloud Object Storage, DigitalOcean Spaces, MinIO, and etcd.

Prerequisites

  • Helm v3.0.0+
  • A persistent storage resource and RW access to it
  • Kubernetes StorageClass for dynamic provisioning

Verify Helm Version

root@master:~# helm version
version.BuildInfo{Version:"v3.13.1", GitCommit:"3547a4b5bf5edb5478ce352e18858d8a552a4110", GitTreeState:"clean", GoVersion:"go1.20.8"}

Verify the Default Storage

root@master:~# kubectl get sc
NAME                            PROVISIONER        RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
managed-nfs-storage (default)   arobyte.tech/nfs   Delete          Immediate           false                  118m

Using Local Storage to configure Chartmuseum

root@master:~# cat values.yaml
env:
  open:
    STORAGE: local
persistence:
  enabled: true
  accessMode: ReadWriteOnce
  size: 8Gi
  storageClass: "managed-nfs-storage"

Installation

  • Add repository
root@master:~# helm repo add chartmuseum https://chartmuseum.github.io/charts
  • Install chart (Helm v3)
root@master:~# kubectl create ns chartmuseum
root@master:~# helm install -f values.yaml chartmuseum chartmuseum/chartmuseum  --set env.open.DISABLE_API=false -n chartmuseum

Edit the svc to convert the type to LoadBalancer

root@master:~# kubectl edit svc chartmuseum  -n chartmuseum

    app.kubernetes.io/instance: chartmuseum
    app.kubernetes.io/name: chartmuseum
  sessionAffinity: None
  type: ClusterIP ---> LoadBalancer

Verify that ChartMuseum is installed successfully

root@master:~# kubectl get pods,svc -n chartmuseum
NAME                               READY   STATUS    RESTARTS   AGE
pod/chartmuseum-6f5dcc8787-7r9lm   1/1     Running   0          54m

NAME                  TYPE           CLUSTER-IP     EXTERNAL-IP     PORT(S)          AGE
service/chartmuseum   LoadBalancer   10.109.118.6   172.16.16.151   8080:32563/TCP   54m

I am using HAProxy to route traffic to my cluster.

root@master:~# cat /etc/haproxy/haproxy.cfg
frontend chartmuseum
    bind 192.168.0.114:6201
    mode tcp
    default_backend chartmuseum

backend chartmuseum
    mode tcp
    balance roundrobin
    server backend_server 172.16.16.151:8080 check

root@master:~# systemctl restart haproxy

Open the URL in a browser and verify that ChartMuseum is reachable.

Install the helm cm-push plugin to push charts to ChartMuseum.
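
If the plugin is not installed yet, it can be added from the ChartMuseum helm-push repository (this step is assumed; it is not shown in the original output):

helm plugin install https://github.com/chartmuseum/helm-push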

root@master:~# helm create mychart
root@master:~# cd mychart/
root@master:~/mychart# helm package .
Successfully packaged chart and saved it to: /root/mychart/mychart-0.1.0.tgz

root@master:~/mychart# helm cm-push mychart-0.1.0.tgz http://chartmuseum.arobyte.tech:6201
"chartmuseum-1" has been added to your repositories

Now access ChartMuseum in the browser and verify that the chart is listed.

Add the repo locally and search for it

root@master:~# helm repo add chartmuseum-local http://chartmuseum.arobyte.tech:6201
"chartmuseum-local" has been added to your repositories
root@master:~# helm search repo chartmuseum-local
NAME                    CHART VERSION   APP VERSION     DESCRIPTION
chartmuseum-local/mychart   0.1.0           1.16.0          A Helm chart for Kubernetes

root@master:~# helm pull chartmuseum-local/mychart
root@master:~# ls -l | grep mychart
-rw-r--r--  1 root root     3957 Feb 29 11:59 mychart-0.1.0.tgz

We can see that we are able to push charts to and pull charts from our local ChartMuseum repository.

Building a MultiCluster Environment with Cilium on BareMetal Kubernetes Cluster: A Comprehensive Guide

Multi-Cluster (Cluster Mesh)

Cluster mesh extends the networking datapath across multiple clusters. It allows endpoints in all connected clusters to communicate while providing full policy enforcement. Load-balancing is available via Kubernetes annotations.

Setting up Cluster Mesh

A Cilium MultiCluster setup involves deploying Cilium, a powerful networking and security solution for Kubernetes, across multiple Kubernetes clusters. This setup enables communication and workload deployment across clusters, providing benefits such as fault tolerance, disaster recovery, and improved performance. Here's an explanation of various aspects of Cilium MultiCluster.

Architecture:

In a MultiCluster architecture with Cilium, you have multiple Kubernetes clusters deployed across different geographic regions, data centers, or cloud providers.

Each Kubernetes cluster runs Cilium as its CNI (Container Network Interface) plugin to manage networking, security, and observability for containerized workloads.

Connectivity:

Cilium facilitates connectivity between pods and services deployed across different Kubernetes clusters.

It achieves this through mechanisms like cluster federation, network peering, or VPN tunnels, depending on the chosen MultiCluster setup.

Service Discovery:

Cilium provides service discovery capabilities across MultiCluster environments, allowing pods in one cluster to discover and communicate with services deployed in other clusters.

This enables seamless communication between microservices distributed across multiple clusters.

Network Policies:

Cilium allows the enforcement of network policies across MultiCluster deployments, ensuring that traffic between clusters adheres to security and compliance requirements.

Network policies define rules for traffic filtering, segmentation, and access control based on various criteria such as IP addresses, ports, and labels.

Identity-Aware Networking:

Cilium supports identity-aware networking across MultiCluster environments, allowing granular control over communication based on workload identities.

Workload identities, such as Kubernetes service accounts or custom attributes, can be used to enforce fine-grained access control policies between clusters.

Observability:

Cilium provides comprehensive observability features for MultiCluster environments, including real-time network visibility, metrics collection, and distributed tracing.

Operators can monitor network traffic, performance metrics, and security events across clusters to ensure operational efficiency and security.

Security:

Cilium enhances security in MultiCluster environments by enforcing security policies, encrypting inter-cluster communication, and providing threat detection capabilities.

It protects against network-based attacks, ensures data confidentiality, and enables secure communication between clusters.

Scalability and Performance:

Cilium is designed for scalability and performance in MultiCluster deployments, leveraging eBPF (extended Berkeley Packet Filter) technology for efficient packet processing and low-latency networking.

It supports large-scale deployments spanning multiple clusters while maintaining high throughput and low latency for inter-cluster communication.

Cilium MultiCluster offers a robust networking and security solution for Kubernetes environments spanning multiple clusters. It enables seamless communication, service discovery, and security enforcement across clusters, empowering organizations to build resilient, scalable, and secure distributed applications.

Setting up Cluster Mesh

This is a step-by-step guide on how to build a mesh of Kubernetes clusters by connecting them together, enabling pod-to-pod connectivity across all clusters, and defining global services to load-balance between clusters.

Prerequisites

2 Kubernetes Clusters

Cluster-A

root@master:~# kubectl get nodes
NAME                   STATUS   ROLES           AGE   VERSION
master.arobyte.tech    Ready    control-plane   58m   v1.26.0
worker1.arobyte.tech   NotReady    <none>          41m   v1.26.0
worker2.arobyte.tech   NotReady    <none>          40m   v1.26.0

Cluster-B

root@devmaster:~# kubectl get nodes
NAME                      STATUS   ROLES           AGE     VERSION
devmaster.arobyte.tech    Ready    control-plane   3h57m   v1.26.0
devworker1.arobyte.tech   NotReady    <none>          39m     v1.26.0
devworker2.arobyte.tech   NotReady    <none>          38m     v1.26.0

Let's Install Cilium on Both Clusters

Cluster-A

root@master:~# kubectl config use-context master

root@master:~# helm repo add cilium https://helm.cilium.io/

root@master:~# helm install cilium cilium/cilium --version 1.15.1 \
   --namespace kube-system \
   --set nodeinit.enabled=true \
   --set kubeProxyReplacement=partial \
   --set hostServices.enabled=false \
   --set externalIPs.enabled=true \
   --set nodePort.enabled=true \
   --set hostPort.enabled=true \
   --set cluster.name=master \
   --set cluster.id=1

root@master:~# kubectl get nodes
NAME                   STATUS   ROLES           AGE   VERSION
master.arobyte.tech    Ready    control-plane   58m   v1.26.0
worker1.arobyte.tech   Ready    <none>          41m   v1.26.0
worker2.arobyte.tech   Ready    <none>          40m   v1.26.0

Cluster-B

root@master:~# kubectl config use-context devmaster

root@master:~# helm install cilium cilium/cilium --version 1.15.1 \
   --namespace kube-system \
   --set nodeinit.enabled=true \
   --set kubeProxyReplacement=partial \
   --set hostServices.enabled=false \
   --set externalIPs.enabled=true \
   --set nodePort.enabled=true \
   --set hostPort.enabled=true \
   --set cluster.name=devmaster \
   --set cluster.id=2

MetalLB must be installed on both clusters so that LoadBalancer Services, such as the clustermesh-apiserver exposed below, can be assigned external IPs.
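
As a hedged sketch of that prerequisite (the address ranges are illustrative and must match your own network), the pool backing the 172.16.16.150 LoadBalancer IP used below could be defined on Cluster-A roughly like this, with a 172.17.17.x range on Cluster-B:

apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: clustermesh-pool
  namespace: metallb-system
spec:
  addresses:
  - 172.16.16.150-172.16.16.160    # illustrative range; use 172.17.17.150-172.17.17.160 on Cluster-B
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: clustermesh-l2
  namespace: metallb-system
spec:
  ipAddressPools:
  - clustermesh-pool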

Cluster-A

root@master:~# kubectl get pods -n metallb-system
NAME                                 READY   STATUS    RESTARTS   AGE
metallb-controller-9fcb75cf5-mds7w   1/1     Running   0          30m
metallb-speaker-cfsw4                4/4     Running   0          30m
metallb-speaker-nblvz                4/4     Running   0          30m
metallb-speaker-sr68w                4/4     Running   0          30m

Cluster-B

root@devmaster:~# kubectl get pods -n metallb-system
NAME                                 READY   STATUS    RESTARTS   AGE
metallb-controller-9fcb75cf5-8s5w4   1/1     Running   0          35m
metallb-speaker-v769s                4/4     Running   0          35m
metallb-speaker-vfg62                4/4     Running   0          35m
metallb-speaker-www9n                4/4     Running   0          35m

After installing the Cilium CNI, all nodes in Cluster-A and Cluster-B move to the Ready state.

Enable Cilium Multicluster on Both Clusters

root@master:~# cilium clustermesh enable --context master --service-type LoadBalancer
root@master:~# cilium clustermesh status --context master --wait
βœ… Service "clustermesh-apiserver" of type "LoadBalancer" found
βœ… Cluster access information is available:
  - 172.16.16.150:2379
βŒ› Waiting (0s) for deployment clustermesh-apiserver to become ready: only 0 of 1 replicas are available

root@master:~# cilium clustermesh enable --context devmaster --service-type LoadBalancer
root@master:~# cilium clustermesh status --context master --wait
βœ… Service "clustermesh-apiserver" of type "LoadBalancer" found
βœ… Cluster access information is available:
  - 172.16.16.150:2379
βœ… Deployment clustermesh-apiserver is ready
πŸ”Œ No cluster connected
πŸ”€ Global services: [ min:0 / avg:0.0 / max:0 ]

root@master:~# cilium clustermesh status --context devmaster --wait
βœ… Service "clustermesh-apiserver" of type "LoadBalancer" found
βœ… Cluster access information is available:
  - 172.17.17.150:2379
βœ… Deployment clustermesh-apiserver is ready
πŸ”Œ No cluster connected
πŸ”€ Global services: [ min:0 / avg:0.0 / max:0 ]

Cluster Mesh is now installed successfully on both clusters.

Let's connect the two clusters together.

root@master:~# cilium clustermesh connect --context master \
   --destination-context devmaster
βœ… Detected Helm release with Cilium version 1.15.1
✨ Extracting access information of cluster devmaster...
πŸ”‘ Extracting secrets from cluster devmaster...
ℹ️  Found ClusterMesh service IPs: [172.17.17.150]
✨ Extracting access information of cluster master...
πŸ”‘ Extracting secrets from cluster master...
ℹ️  Found ClusterMesh service IPs: [172.16.16.150]
⚠️ Cilium CA certificates do not match between clusters. Multicluster features will be limited!
ℹ️ Configuring Cilium in cluster 'master' to connect to cluster 'devmaster'
ℹ️ Configuring Cilium in cluster 'devmaster' to connect to cluster 'master'
βœ… Connected cluster master and devmaster!

Let's verify the status of the Cilium cluster mesh once again

root@master:~# cilium clustermesh status --context master --wait
βœ… Service "clustermesh-apiserver" of type "LoadBalancer" found
βœ… Cluster access information is available:
  - 172.16.16.150:2379
βœ… Deployment clustermesh-apiserver is ready
βœ… All 3 nodes are connected to all clusters [min:1 / avg:1.0 / max:1]
πŸ”Œ Cluster Connections:
  - devmaster: 3/3 configured, 3/3 connected
πŸ”€ Global services: [ min:0 / avg:0.0 / max:0 ]


root@master:~# cilium clustermesh status --context devmaster --wait
βœ… Service "clustermesh-apiserver" of type "LoadBalancer" found
βœ… Cluster access information is available:
  - 172.17.17.150:2379
βœ… Deployment clustermesh-apiserver is ready
βœ… All 3 nodes are connected to all clusters [min:1 / avg:1.0 / max:1]
πŸ”Œ Cluster Connections:
  - master: 3/3 configured, 3/3 connected
πŸ”€ Global services: [ min:0 / avg:0.0 / max:0 ]

Verify the clustermesh-apiserver Services exposed by the Cluster Mesh control plane in each cluster

root@master:~# kubectl config use-context devmaster
root@master:~# kubectl get svc -A | grep clustermesh
kube-system      clustermesh-apiserver           LoadBalancer   10.98.126.70     172.17.17.150   2379:32200/TCP           34m
kube-system      clustermesh-apiserver-metrics   ClusterIP      None             <none>          9962/TCP,9963/TCP        34m

root@master:~# kubectl config use-context master
Switched to context "master".
root@master:~# kubectl get svc -A | grep clustermesh
kube-system      clustermesh-apiserver           LoadBalancer   10.105.111.27    172.16.16.150   2379:31285/TCP           37m
kube-system      clustermesh-apiserver-metrics   ClusterIP      None             <none>          9962/TCP,9963/TCP        37m

Validate the connectivity by running the connectivity test in multi-cluster mode

root@master:~# cilium connectivity test --context master --multi-cluster devmaster
ℹ️  Monitor aggregation detected, will skip some flow validation steps
✨ [master] Creating namespace cilium-test for connectivity check...
✨ [devmaster] Creating namespace cilium-test for connectivity check...
✨ [master] Deploying echo-same-node service...
✨ [master] Deploying echo-other-node service...
✨ [master] Deploying DNS test server configmap...
✨ [devmaster] Deploying DNS test server configmap...
✨ [master] Deploying same-node deployment...
✨ [master] Deploying client deployment...
✨ [master] Deploying client2 deployment...
✨ [devmaster] Deploying echo-other-node service...
✨ [devmaster] Deploying other-node deployment...
✨ [host-netns] Deploying master daemonset...
✨ [host-netns] Deploying devmaster daemonset...
✨ [host-netns-non-cilium] Deploying master daemonset...
ℹ️  Skipping tests that require a node Without Cilium
βŒ› [master] Waiting for deployment cilium-test/client to become ready...
βŒ› [master] Waiting for deployment cilium-test/client2 to become ready...
βŒ› [master] Waiting for deployment cilium-test/echo-same-node to become ready...
βŒ› [devmaster] Waiting for deployment cilium-test/echo-other-node to become ready...
βŒ› [master] Waiting for pod cilium-test/client-65847bf96-2mqtd to reach DNS server on cilium-test/echo-same-node-66f8958597-mbr7s pod...
βŒ› [master] Waiting for pod cilium-test/client2-85585bdd-bnn77 to reach DNS server on cilium-test/echo-same-node-66f8958597-mbr7s pod...
βŒ› [master] Waiting for pod cilium-test/client-65847bf96-2mqtd to reach DNS server on cilium-test/echo-other-node-5db9b7bbbc-lvsqj pod...
βŒ› [master] Waiting for pod cilium-test/client2-85585bdd-bnn77 to reach DNS server on cilium-test/echo-other-node-5db9b7bbbc-lvsqj pod...
βŒ› [master] Waiting for pod cilium-test/client-65847bf96-2mqtd to reach default/kubernetes service...
βŒ› [master] Waiting for pod cilium-test/client2-85585bdd-bnn77 to reach default/kubernetes service...
βŒ› [master] Waiting for Service cilium-test/echo-other-node to become ready...
βŒ› [master] Waiting for Service cilium-test/echo-other-node to be synchronized by Cilium pod kube-system/cilium-6qfz6
βŒ› [master] Waiting for Service cilium-test/echo-same-node to become ready...
βŒ› [master] Waiting for Service cilium-test/echo-same-node to be synchronized by Cilium pod kube-system/cilium-6qfz6
βŒ› [master] Waiting for DaemonSet cilium-test/host-netns-non-cilium to become ready...
βŒ› [master] Waiting for DaemonSet cilium-test/host-netns to become ready...
βŒ› [devmaster] Waiting for DaemonSet cilium-test/host-netns to become ready...
ℹ️  Skipping IPCache check
πŸ”­ Enabling Hubble telescope...
⚠️  Unable to contact Hubble Relay, disabling Hubble telescope and flow validation: rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:4245: connect: connection refused"
ℹ️  Expose Relay locally with:
   cilium hubble enable
   cilium hubble port-forward&
ℹ️  Cilium version: 1.15.1
πŸƒ Running 66 tests ...
[=] Test [no-unexpected-packet-drops] [1/66]

  [-] Scenario [no-unexpected-packet-drops/no-unexpected-packet-drops]
  πŸŸ₯ Found unexpected packet drops:
{
  "labels": {
    "direction": "EGRESS",
    "reason": "Unsupported L2 protocol"
  },
  "name": "cilium_drop_count_total",
  "value": 428
}
[=] Test [no-policies] [2/66]
.....................
  [-] Scenario [no-policies/pod-to-pod]
  [.] Action [no-policies/pod-to-pod/curl-ipv4-0: cilium-test/client-65847bf96-2mqtd (10.0.2.95) -> cilium-test/echo-same-node-66f8958597-mbr7s (10.0.2.109:8080)]
  [.] Action [no-policies/pod-to-pod/curl-ipv4-1: cilium-test/client-65847bf96-2mqtd (10.0.2.95) -> cilium-test/echo-other-node-5db9b7bbbc-lvsqj (10.0.2.163:8080)]
  [.] Action [no-policies/pod-to-pod/curl-ipv4-2: cilium-test/client2-85585bdd-bnn77 (10.0.2.148) -> cilium-test/echo-same-node-66f8958597-mbr7s (10.0.2.109:8080)]
  [.] Action [no-policies/pod-to-pod/curl-ipv4-3: cilium-test/client2-85585bdd-bnn77 (10.0.2.148) -> cilium-test/echo-other-node-5db9b7bbbc-lvsqj (10.0.2.163:8080)]
  [-] Scenario [no-policies/pod-to-world]
  [.] Action [no-policies/pod-to-world/http-to-one.one.one.one-0: cilium-test/client-65847bf96-2mqtd (10.0.2.95) -> one.one.one.one-http (one.one.one.one:80)]
  [.] Action [no-policies/pod-to-world/https-to-one.one.one.one-0: cilium-test/client-65847bf96-2mqtd (10.0.2.95) -> one.one.one.one-https (one.one.one.one:443)]
  [.] Action [no-policies/pod-to-world/https-to-one.one.one.one-index-0: cilium-test/client-65847bf96-2mqtd (10.0.2.95) -> one.one.one.one-https-index (one.one.one.one:443)]
  [.] Action [no-policies/pod-to-world/http-to-one.one.one.one-1: cilium-test/client2-85585bdd-bnn77 (10.0.2.148) -> one.one.one.one-http (one.one.one.one:80)]
  [.] Action [no-policies/pod-to-world/https-to-one.one.one.one-1: cilium-test/client2-85585bdd-bnn77 (10.0.2.148) -> one.one.one.one-https (one.one.one.one:443)]
  [.] Action [no-policies/pod-to-world/https-to-one.one.one.one-index-1: cilium-test/client2-85585bdd-bnn77 (10.0.2.148) -> one.one.one.one-https-index (one.one.one.one:443)]
  [-] Scenario [no-policies/pod-to-cidr]
  [.] Action [no-policies/pod-to-cidr/external-1111-0: cilium-test/client-65847bf96-2mqtd (10.0.2.95) -> external-1111 (1.1.1.1:443)]
  [.] Action [no-policies/pod-to-cidr/external-1111-1: cilium-test/client2-85585bdd-bnn77 (10.0.2.148) -> external-1111 (1.1.1.1:443)]
  [.] Action [no-policies/pod-to-cidr/external-1001-0: cilium-test/client-65847bf96-2mqtd (10.0.2.95) -> external-1001 (1.0.0.1:443)]
  [.] Action [no-policies/pod-to-cidr/external-1001-1: cilium-test/client2-85585bdd-bnn77 (10.0.2.148) -> external-1001 (1.0.0.1:443)]
  [-] Scenario [no-policies/client-to-client]
  [.] Action [no-policies/client-to-client/ping-ipv4-0: cilium-test/client-65847bf96-2mqtd (10.0.2.95) -> cilium-test/client2-85585bdd-bnn77 (10.0.2.148:0)]
  [.] Action [no-policies/client-to-client/ping-ipv4-1: cilium-test/client2-85585bdd-bnn77 (10.0.2.148) -> cilium-test/client-65847bf96-2mqtd (10.0.2.95:0)]
  [-] Scenario [no-policies/pod-to-service]
  [.] Action [no-policies/pod-to-service/curl-0: cilium-test/client-65847bf96-2mqtd (10.0.2.95) -> cilium-test/echo-other-node (echo-other-node.cilium-test:8080)]
  [.] Action [no-policies/pod-to-service/curl-1: cilium-test/client-65847bf96-2mqtd (10.0.2.95) -> cilium-test/echo-same-node (echo-same-node.cilium-test:8080)]
  [.] Action [no-policies/pod-to-service/curl-2: cilium-test/client2-85585bdd-bnn77 (10.0.2.148) -> cilium-test/echo-same-node (echo-same-node.cilium-test:8080)]
  [.] Action [no-policies/pod-to-service/curl-3: cilium-test/client2-85585bdd-bnn77 (10.0.2.148) -> cilium-test/echo-other-node (echo-other-node.cilium-test:8080)]
  [-] Scenario [no-policies/pod-to-hostport]
  [.] Action [no-policies/pod-to-hostport/curl-0: cilium-test/client-65847bf96-2mqtd (10.0.2.95) -> cilium-test/echo-other-node-5db9b7bbbc-lvsqj (172.17.17.101:4000)]
  ❌ command "curl -w %{local_ip}:%{local_port} -> %{remote_ip}:%{remote_port} = %{response_code} --silent --fail --show-error --output /dev/null --connect-timeout 2 --max-time 10 http://172.17.17.101:4000" failed: error with exec request (pod=cilium-test/client-65847bf96-2mqtd, container=client): command terminated with exit code 28
  ℹ️  curl output:


  [.] Action [no-policies/pod-to-hostport/curl-1: cilium-test/client-65847bf96-2mqtd (10.0.2.95) -> cilium-test/echo-same-node-66f8958597-mbr7s (172.16.16.102:4000)]
  [.] Action [no-policies/pod-to-hostport/curl-2: cilium-test/client2-85585bdd-bnn77 (10.0.2.148) -> cilium-test/echo-same-node-66f8958597-mbr7s (172.16.16.102:4000)]
  [.] Action [no-policies/pod-to-hostport/curl-3: cilium-test/client2-85585bdd-bnn77 (10.0.2.148) -> cilium-test/echo-other-node-5db9b7bbbc-lvsqj (172.17.17.101:4000)]
  ❌ command "curl -w %{local_ip}:%{local_port} -> %{remote_ip}:%{remote_port} = %{response_code} --silent --fail --show-error --output /dev/null --connect-timeout 2 --max-time 10 http://172.17.17.101:4000" failed: error with exec request (pod=cilium-test/client2-85585bdd-bnn77, container=client2): command terminated with exit code 28
  ℹ️  curl output:


  [-] Scenario [no-policies/pod-to-host]
  [.] Action [no-policies/pod-to-host/ping-ipv4-internal-ip: cilium-test/client-65847bf96-2mqtd (10.0.2.95) -> 192.168.0.114 (192.168.0.114:0)]
  [.] Action [no-policies/pod-to-host/ping-ipv4-internal-ip: cilium-test/client-65847bf96-2mqtd (10.0.2.95) -> 172.16.16.101 (172.16.16.101:0)]
  [.] Action [no-policies/pod-to-host/ping-ipv4-internal-ip: cilium-test/client-65847bf96-2mqtd (10.0.2.95) -> 172.16.16.102 (172.16.16.102:0)]
  [.] Action [no-policies/pod-to-host/ping-ipv4-internal-ip: cilium-test/client2-85585bdd-bnn77 (10.0.2.148) -> 192.168.0.114 (192.168.0.114:0)]
  [.] Action [no-policies/pod-to-host/ping-ipv4-internal-ip: cilium-test/client2-85585bdd-bnn77 (10.0.2.148) -> 172.16.16.101 (172.16.16.101:0)]
  [.] Action [no-policies/pod-to-host/ping-ipv4-internal-ip: cilium-test/client2-85585bdd-bnn77 (10.0.2.148) -> 172.16.16.102 (172.16.16.102:0)]
  [-] Scenario [no-policies/host-to-pod]
  [.] Action [no-policies/host-to-pod/curl-ipv4-0: cilium-test/host-netns-gnqtv (172.16.16.101) -> cilium-test/echo-same-node-66f8958597-mbr7s (10.0.2.109:8080)]
  [.] Action [no-policies/host-to-pod/curl-ipv4-1: cilium-test/host-netns-gnqtv (172.16.16.101) -> cilium-test/echo-other-node-5db9b7bbbc-lvsqj (10.0.2.163:8080)]
  [.] Action [no-policies/host-to-pod/curl-ipv4-2: cilium-test/host-netns-rswsn (172.17.17.102) -> cilium-test/echo-same-node-66f8958597-mbr7s (10.0.2.109:8080)]
  [.] Action [no-policies/host-to-pod/curl-ipv4-3: cilium-test/host-netns-rswsn (172.17.17.102) -> cilium-test/echo-other-node-5db9b7bbbc-lvsqj (10.0.2.163:8080)]
  [.] Action [no-policies/host-to-pod/curl-ipv4-4: cilium-test/host-netns-xfkhb (172.17.17.101) -> cilium-test/echo-same-node-66f8958597-mbr7s (10.0.2.109:8080)]
  [.] Action [no-policies/host-to-pod/curl-ipv4-5: cilium-test/host-netns-xfkhb (172.17.17.101) -> cilium-test/echo-other-node-5db9b7bbbc-lvsqj (10.0.2.163:8080)]
  [.] Action [no-policies/host-to-pod/curl-ipv4-6: cilium-test/host-netns-5x8wz (172.16.16.102) -> cilium-test/echo-same-node-66f8958597-mbr7s (10.0.2.109:8080)]
  [.] Action [no-policies/host-to-pod/curl-ipv4-7: cilium-test/host-netns-5x8wz (172.16.16.102) -> cilium-test/echo-other-node-5db9b7bbbc-lvsqj (10.0.2.163:8080)]
  [-] Scenario [no-policies/pod-to-external-workload]
[=] Test [no-policies-extra] [3/66]
.
  [-] Scenario [no-policies-extra/pod-to-remote-nodeport]
  [.] Action [no-policies-extra/pod-to-remote-nodeport/curl-0: cilium-test/client-65847bf96-2mqtd (10.0.2.95) -> cilium-test/echo-other-node (echo-other-node.cilium-test:8080)]
  ❌ command "curl -w %{local_ip}:%{local_port} -> %{remote_ip}:%{remote_port} = %{response_code} --silent --fail --show-error --output /dev/null --connect-timeout 2 --max-time 10 http://192.168.0.114:32184" failed: error with exec request (pod=cilium-test/client-65847bf96-2mqtd, container=client): command terminated with exit code 7
  ℹ️  curl output:


  [.] Action [no-policies-extra/pod-to-remote-nodeport/curl-1: cilium-test/client-65847bf96-2mqtd (10.0.2.95) -> cilium-test/echo-other-node (echo-other-node.cilium-test:8080)]
  [.] Action [no-policies-extra/pod-to-remote-nodeport/curl-2: cilium-test/client-65847bf96-2mqtd (10.0.2.95) -> cilium-test/echo-same-node (echo-same-node.cilium-test:8080)]
  [.] Action [no-policies-extra/pod-to-remote-nodeport/curl-3: cilium-test/client-65847bf96-2mqtd (10.0.2.95) -> cilium-test/echo-same-node (echo-same-node.cilium-test:8080)]
  [.] Action [no-policies-extra/pod-to-remote-nodeport/curl-4: cilium-test/client2-85585bdd-bnn77 (10.0.2.148) -> cilium-test/echo-other-node (echo-other-node.cilium-test:8080)]
  ❌ command "curl -w %{local_ip}:%{local_port} -> %{remote_ip}:%{remote_port} = %{response_code} --silent --fail --show-error --output /dev/null --connect-timeout 2 --max-time 10 http://192.168.0.114:32184" failed: error with exec request (pod=cilium-test/client2-85585bdd-bnn77, container=client2): command terminated with exit code 7
  ℹ️  curl output:


  [.] Action [no-policies-extra/pod-to-remote-nodeport/curl-5: cilium-test/client2-85585bdd-bnn77 (10.0.2.148) -> cilium-test/echo-other-node (echo-other-node.cilium-test:8080)]
  [.] Action [no-policies-extra/pod-to-remote-nodeport/curl-6: cilium-test/client2-85585bdd-bnn77 (10.0.2.148) -> cilium-test/echo-same-node (echo-same-node.cilium-test:8080)]
  [.] Action [no-policies-extra/pod-to-remote-nodeport/curl-7: cilium-test/client2-85585bdd-bnn77 (10.0.2.148) -> cilium-test/echo-same-node (echo-same-node.cilium-test:8080)]
  [-] Scenario [no-policies-extra/pod-to-local-nodeport]
  [.] Action [no-policies-extra/pod-to-local-nodeport/curl-0: cilium-test/client-65847bf96-2mqtd (10.0.2.95) -> cilium-test/echo-other-node (echo-other-node.cilium-test:8080)]
  [.] Action [no-policies-extra/pod-to-local-nodeport/curl-1: cilium-test/client-65847bf96-2mqtd (10.0.2.95) -> cilium-test/echo-same-node (echo-same-node.cilium-test:8080)]
  [.] Action [no-policies-extra/pod-to-local-nodeport/curl-2: cilium-test/client2-85585bdd-bnn77 (10.0.2.148) -> cilium-test/echo-other-node (echo-other-node.cilium-test:8080)]
  [.] Action [no-policies-extra/pod-to-local-nodeport/curl-3: cilium-test/client2-85585bdd-bnn77 (10.0.2.148) -> cilium-test/echo-same-node (echo-same-node.cilium-test:8080)]
[=] Test [allow-all-except-world] [4/66]
................
[=] Test [client-ingress] [5/66]
[... output of the remaining connectivity tests truncated ...]

Deploying a Simple Example Service

In Cluster-A, deploy:

root@master:~# kubectl config use-context master
Switched to context "master"
root@master:~# kubectl apply -f https://raw.githubusercontent.com/cilium/cilium/1.15.1/examples/kubernetes/clustermesh/global-service-example/cluster1.yaml
service/rebel-base created
deployment.apps/rebel-base created
configmap/rebel-base-response created
deployment.apps/x-wing created

In Cluster-B, deploy:

root@master:~# kubectl config use-context devmaster
Switched to context "devmaster".
root@master:~# kubectl apply -f https://raw.githubusercontent.com/cilium/cilium/1.15.1/examples/kubernetes/clustermesh/global-service-example/cluster2.yaml
service/rebel-base created
deployment.apps/rebel-base created
configmap/rebel-base-response created
deployment.apps/x-wing created

Let's validate the deployments on both clusters

Cluster-A

root@master:~# kubectl get pods
NAME                          READY   STATUS    RESTARTS   AGE
rebel-base-5f575fcc5d-46sx4   1/1     Running   0          15m
rebel-base-5f575fcc5d-62nqv   1/1     Running   0          15m
x-wing-58c9c6d949-lgv7j       1/1     Running   0          15m
x-wing-58c9c6d949-vqqxv       1/1     Running   0          15m

Cluster-B

root@devmaster:~# kubectl get pods
NAME                          READY   STATUS    RESTARTS   AGE
rebel-base-5f575fcc5d-8jfpl   1/1     Running   0          13m
rebel-base-5f575fcc5d-cnw8x   1/1     Running   0          13m
x-wing-58c9c6d949-5nbfh       1/1     Running   0          13m
x-wing-58c9c6d949-tbwfq       1/1     Running   0          13m

From either cluster, access the global service

root@master:~# kubectl exec -ti deployment/x-wing -- curl rebel-base
{"Galaxy": "Alderaan", "Cluster": "Cluster-1"}

We will see replies from pods in both clusters.
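
This round-robin behaviour works because the rebel-base Service has the same name and namespace in both clusters and is marked as global. A trimmed sketch of the relevant part of the manifests applied above (see cluster1.yaml/cluster2.yaml for the full definitions; the selector label is assumed from the upstream example):

apiVersion: v1
kind: Service
metadata:
  name: rebel-base
  annotations:
    service.cilium.io/global: "true"   # declares the Service global across the cluster mesh
spec:
  type: ClusterIP
  ports:
  - port: 80
  selector:
    name: rebel-base                   # assumed pod label from the example manifests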

In Cluster-A, add the service.cilium.io/shared="false" annotation to the existing global service

root@master:~# kubectl annotate service rebel-base service.cilium.io/shared="false" --overwrite
service/rebel-base annotated

From Cluster-A, access the global service one more time

root@master:~# kubectl exec -ti deployment/x-wing -- curl rebel-base
{"Galaxy": "Alderaan", "Cluster": "Cluster-1"}
root@master:~# kubectl exec -ti deployment/x-wing -- curl rebel-base
{"Galaxy": "Alderaan", "Cluster": "Cluster-2"}
root@master:~# kubectl exec -ti deployment/x-wing -- curl rebel-base
{"Galaxy": "Alderaan", "Cluster": "Cluster-2"}
root@master:~# kubectl exec -ti deployment/x-wing -- curl rebel-base
{"Galaxy": "Alderaan", "Cluster": "Cluster-1"}

We will still see replies from pods in both clusters

From Cluster-B, access the global service again

root@master:~# kubectl annotate service rebel-base service.cilium.io/shared-
service/rebel-base annotated
root@master:~# kubectl exec -ti deployment/x-wing -- curl rebel-base
{"Galaxy": "Alderaan", "Cluster": "Cluster-2"}
root@master:~# kubectl exec -ti deployment/x-wing -- curl rebel-base
{"Galaxy": "Alderaan", "Cluster": "Cluster-2"}

We will see replies from pods only from cluster 2, as the global service in cluster 1 is no longer shared

In Cluster-A, remove the service.cilium.io/shared annotation from the existing global service

root@master:~# kubectl config use-context master
Switched to context "master".
root@master:~# kubectl annotate service rebel-base service.cilium.io/shared-
service/rebel-base annotated

From either cluster, access the global service

root@master:~# kubectl config use-context master
Switched to context "master".
root@master:~# kubectl exec -ti deployment/x-wing -- curl rebel-base
{"Galaxy": "Alderaan", "Cluster": "Cluster-2"}
root@master:~# kubectl exec -ti deployment/x-wing -- curl rebel-base
{"Galaxy": "Alderaan", "Cluster": "Cluster-2"}
root@master:~# kubectl exec -ti deployment/x-wing -- curl rebel-base
{"Galaxy": "Alderaan", "Cluster": "Cluster-1"}
root@master:~# kubectl exec -ti deployment/x-wing -- curl rebel-base
{"Galaxy": "Alderaan", "Cluster": "Cluster-1"}

root@master:~# kubectl config use-context devmaster
Switched to context "devmaster".
root@master:~# kubectl exec -ti deployment/x-wing -- curl rebel-base
{"Galaxy": "Alderaan", "Cluster": "Cluster-1"}
root@master:~# kubectl exec -ti deployment/x-wing -- curl rebel-base
{"Galaxy": "Alderaan", "Cluster": "Cluster-2"}
root@master:~# kubectl exec -ti deployment/x-wing -- curl rebel-base
{"Galaxy": "Alderaan", "Cluster": "Cluster-1"}
root@master:~# kubectl exec -ti deployment/x-wing -- curl rebel-base
{"Galaxy": "Alderaan", "Cluster": "Cluster-2"}
root@master:~# kubectl exec -ti deployment/x-wing -- curl rebel-base
{"Galaxy": "Alderaan", "Cluster": "Cluster-1"}

We will see replies from pods in both clusters again

A Comprehensive Guide to Linkerd Service Mesh in Kubernetes

A Comprehensive Guide to Linkerd Service Mesh in Kubernetes

Service Mesh

A service mesh is a dedicated infrastructure layer that controls service-to-service communication over a network, enabling the separate parts of an application to communicate with each other. Service meshes are most commonly used with cloud-based applications, containers and microservices. A service mesh controls the delivery of service requests in an application, and common features include service discovery, load balancing, encryption and failure recovery. High availability is typically achieved through software controlled by APIs rather than dedicated hardware. A service mesh makes service-to-service communication fast, reliable and secure.

How a service mesh works

A service mesh architecture uses a proxy instance, called a sidecar, alongside whichever deployment unit is in use, typically containers and/or microservices. In a microservice application, a sidecar attaches to each service; in a containerized environment, it attaches to each application container, VM or orchestration unit, such as a Kubernetes pod. Sidecars handle tasks abstracted from the service itself, such as monitoring and security.

Service instances, sidecars and their interactions make up what is called the data plane in a service mesh. A different layer called the control plane manages tasks such as creating instances, monitoring and implementing policies for network management and security. Control planes can connect to a CLI or a GUI interface for application management.

Why adopt a service mesh?

An application structured in a microservices architecture might comprise dozens or hundreds of services, all with their own instances that operate in a live environment. It's a big challenge for developers to keep track of which components must interact, monitor their health and performance and make changes to a service or component if something goes wrong.

A service mesh enables developers to separate and manage service-to-service communications in a dedicated infrastructure layer. As the number of microservices involved with an application increases, so do the benefits of using a service mesh to manage and monitor them.

Key features of a service mesh

A service mesh framework typically provides many capabilities that make containerized and microservices communications more reliable, secure and observable.

Reliability. Managing communications through sidecar proxies and the control plane improves efficiency and reliability of service requests, policies and configurations. Specific capabilities include load balancing and fault injection.

Observability. Service mesh frameworks can provide insights into the behavior and health of services. The control plane can collect and aggregate telemetry data from component interactions to determine service health, such as traffic and latency, distributed tracing and access logs. Third-party integration with tools, such as Prometheus, Elasticsearch and Grafana, enables further monitoring and visualization.

Security. Service mesh can automatically encrypt communications and distribute security policies, including authentication and authorization, from the network to the application and individual microservices. Centrally managing security policies through the control plane and sidecar proxies helps keep up with increasingly complex connections within and between distributed applications.

Linkerd Service Mesh

Linkerd is a service mesh for Kubernetes. It makes running services easier and safer by giving you runtime debugging, observability, reliability, and security, all without requiring any changes to your code.

How it works

Linkerd works by installing a set of ultralight, transparent "micro-proxies" next to each service instance. These proxies automatically handle all traffic to and from the service. Because they're transparent, these proxies act as highly instrumented out-of-process network stacks, sending telemetry to, and receiving control signals from, the control plane. This design allows Linkerd to measure and manipulate traffic to and from your service without introducing excessive latency.
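
As a hedged sketch of how that looks in practice: Linkerd's proxy injector watches for the linkerd.io/inject annotation and adds the micro-proxy to pods at creation time, so meshing a whole namespace can be as simple as annotating it (my-app is a hypothetical namespace; later in this post we do the equivalent from the CLI with linkerd inject):

apiVersion: v1
kind: Namespace
metadata:
  name: my-app                     # hypothetical namespace to be meshed
  annotations:
    linkerd.io/inject: enabled     # proxy injector adds the sidecar to new pods created here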

Installation

Setup

Validate your Kubernetes setup by running:

root@master:~# kubectl version --short
Flag --short has been deprecated, and will be removed in the future. The --short output will become the default.
Client Version: v1.26.0
Kustomize Version: v4.5.7
Unable to connect to the server: x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")

Install the CLI

root@master:~# curl --proto '=https' --tlsv1.2 -sSfL https://run.linkerd.io/install | sh
Downloading linkerd2-cli-edge-24.2.5-linux-amd64...
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
100 52.5M  100 52.5M    0     0  1485k      0  0:00:36  0:00:36 --:--:-- 1796k
Download complete!

Validating checksum...
Checksum valid.

Linkerd edge-24.2.5 was successfully installed πŸŽ‰


*******************************************
* This script is deprecated and no longer *
* installs stable releases.               *
*                                         *
* The latest edge release has been        *
* installed. In the future, please use    *
*   run.linkerd.io/install-edge           *
* for this behavior.                      *
*                                         *
* For stable releases, please see         *
*  https://linkerd.io/releases/           *
*******************************************

Add the linkerd CLI to your path with:

  export PATH=$PATH:/root/.linkerd2/bin

root@master:~# export PATH=$HOME/.linkerd2/bin:$PATH

root@master:~# linkerd version
Client version: edge-24.2.5
Server version: unavailable

You should see the CLI version, and also Server version: unavailable. This is because you haven't installed the control plane on your cluster. Don't worry, we'll fix that soon enough. Make sure that your Linkerd version and Kubernetes version are compatible by checking Linkerd's supported Kubernetes versions.

Validate Kubernetes Cluster

root@master:~# linkerd check --pre
kubernetes-api
--------------
√ can initialize the client
√ can query the Kubernetes API

kubernetes-version
------------------
√ is running the minimum Kubernetes API version

pre-kubernetes-setup
--------------------
√ control plane namespace does not already exist
√ can create non-namespaced resources
√ can create ServiceAccounts
√ can create Services
√ can create Deployments
√ can create CronJobs
√ can create ConfigMaps
√ can create Secrets
√ can read Secrets
√ can read extension-apiserver-authentication configmap
√ no clock skew detected

linkerd-version
---------------
√ can determine the latest version
√ cli is up-to-date

Status check results are √

Install Linkerd CRD onto cluster

root@master:~# linkerd install --crds | kubectl apply -f -
Rendering Linkerd CRDs...
Next, run `linkerd install | kubectl apply -f -` to install the control plane.

customresourcedefinition.apiextensions.k8s.io/authorizationpolicies.policy.linkerd.io created
customresourcedefinition.apiextensions.k8s.io/httproutes.policy.linkerd.io created
customresourcedefinition.apiextensions.k8s.io/meshtlsauthentications.policy.linkerd.io created
customresourcedefinition.apiextensions.k8s.io/networkauthentications.policy.linkerd.io created
customresourcedefinition.apiextensions.k8s.io/serverauthorizations.policy.linkerd.io created
customresourcedefinition.apiextensions.k8s.io/servers.policy.linkerd.io created
customresourcedefinition.apiextensions.k8s.io/serviceprofiles.linkerd.io created
customresourcedefinition.apiextensions.k8s.io/httproutes.gateway.networking.k8s.io created
customresourcedefinition.apiextensions.k8s.io/externalworkloads.workload.linkerd.io created

Install Linkerd onto cluster

root@master:~# linkerd install | kubectl apply -f -
namespace/linkerd created
clusterrole.rbac.authorization.k8s.io/linkerd-linkerd-identity created
clusterrolebinding.rbac.authorization.k8s.io/linkerd-linkerd-identity created
serviceaccount/linkerd-identity created
clusterrole.rbac.authorization.k8s.io/linkerd-linkerd-destination created
clusterrolebinding.rbac.authorization.k8s.io/linkerd-linkerd-destination created
serviceaccount/linkerd-destination created
secret/linkerd-sp-validator-k8s-tls created
validatingwebhookconfiguration.admissionregistration.k8s.io/linkerd-sp-validator-webhook-config created
secret/linkerd-policy-validator-k8s-tls created
validatingwebhookconfiguration.admissionregistration.k8s.io/linkerd-policy-validator-webhook-config created
clusterrole.rbac.authorization.k8s.io/linkerd-policy created
clusterrolebinding.rbac.authorization.k8s.io/linkerd-destination-policy created
role.rbac.authorization.k8s.io/remote-discovery created
rolebinding.rbac.authorization.k8s.io/linkerd-destination-remote-discovery created
role.rbac.authorization.k8s.io/linkerd-heartbeat created
rolebinding.rbac.authorization.k8s.io/linkerd-heartbeat created
clusterrole.rbac.authorization.k8s.io/linkerd-heartbeat created
clusterrolebinding.rbac.authorization.k8s.io/linkerd-heartbeat created
serviceaccount/linkerd-heartbeat created
clusterrole.rbac.authorization.k8s.io/linkerd-linkerd-proxy-injector created
clusterrolebinding.rbac.authorization.k8s.io/linkerd-linkerd-proxy-injector created
serviceaccount/linkerd-proxy-injector created
secret/linkerd-proxy-injector-k8s-tls created
mutatingwebhookconfiguration.admissionregistration.k8s.io/linkerd-proxy-injector-webhook-config created
configmap/linkerd-config created
role.rbac.authorization.k8s.io/ext-namespace-metadata-linkerd-config created
secret/linkerd-identity-issuer created
configmap/linkerd-identity-trust-roots created
service/linkerd-identity created
service/linkerd-identity-headless created
deployment.apps/linkerd-identity created
service/linkerd-dst created
service/linkerd-dst-headless created
service/linkerd-sp-validator created
service/linkerd-policy created
service/linkerd-policy-validator created
deployment.apps/linkerd-destination created
cronjob.batch/linkerd-heartbeat created
deployment.apps/linkerd-proxy-injector created
service/linkerd-proxy-injector created
secret/linkerd-config-overrides created

Check Linkerd Status

root@master:~# linkerd check 

Install the Demo App

root@master:~# curl --proto '=https' --tlsv1.2 -sSfL https://run.linkerd.io/emojivoto.yml | kubectl apply -f -
namespace/emojivoto created
serviceaccount/emoji created
serviceaccount/voting created
serviceaccount/web created
service/emoji-svc created
service/voting-svc created
service/web-svc created
deployment.apps/emoji created
deployment.apps/vote-bot created
deployment.apps/voting created
deployment.apps/web created

Validate the App

root@master:~# kubectl get all -n emojivoto
NAME                            READY   STATUS    RESTARTS   AGE
pod/emoji-5f844f4cb5-wv7d4      1/1     Running   0          3m12s
pod/vote-bot-5c64bd6898-bqwqf   1/1     Running   0          3m12s
pod/voting-7b77644858-slxk6     1/1     Running   0          3m11s
pod/web-58fb4f6fb6-9rp92        1/1     Running   0          3m11s

NAME                 TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)             AGE
service/emoji-svc    ClusterIP   10.111.202.78   <none>        8080/TCP,8801/TCP   3m13s
service/voting-svc   ClusterIP   10.111.34.59    <none>        8080/TCP,8801/TCP   3m13s
service/web-svc      ClusterIP   10.102.200.28   <none>        80/TCP              3m12s

NAME                       READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/emoji      1/1     1            1           3m12s
deployment.apps/vote-bot   1/1     1            1           3m12s
deployment.apps/voting     1/1     1            1           3m12s
deployment.apps/web        1/1     1            1           3m11s

NAME                                  DESIRED   CURRENT   READY   AGE
replicaset.apps/emoji-5f844f4cb5      1         1         1       3m12s
replicaset.apps/vote-bot-5c64bd6898   1         1         1       3m12s
replicaset.apps/voting-7b77644858     1         1         1       3m12s
replicaset.apps/web-58fb4f6fb6        1         1         1       3m11s

Let's Verify the App

root@master:~# kubectl -n emojivoto port-forward svc/web-svc 8080:80
Forwarding from 127.0.0.1:8080 -> 8080
Forwarding from [::1]:8080 -> 8080
^C
root@master:~# kubectl -n emojivoto port-forward svc/web-svc 8080:80 --address 0.0.0.0
Forwarding from 0.0.0.0:8080 -> 8080
Handling connection for 8080

If you click around Emojivoto, you might notice that it's a little broken! For example, if you try to vote for the donut emoji, you'll get a 404 page.

Normally, to debug this issue we would have to dig through the logs of the respective pods. Let's see instead how the service mesh helps us troubleshoot it.

Let's inject the Linkerd proxy into the Emojivoto application to add it to the mesh

We can do this on a live application without downtime, thanks to Kubernetes's rolling deploys.

root@master:~# kubectl get -n emojivoto deploy -o yaml | linkerd inject - | kubectl apply -f -
deployment "emoji" injected
deployment "vote-bot" injected
deployment "voting" injected
deployment "web" injected

deployment.apps/emoji configured
deployment.apps/vote-bot configured
deployment.apps/voting configured
deployment.apps/web configured

We used linkerd inject to add the Linkerd data-plane proxy to each deployment in the application.
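
To confirm the injection (an optional sanity check, no output reproduced here), list the pods; once the proxy is added, each pod should report two containers, the application container plus linkerd-proxy:

# READY should show 2/2 for every pod after injection
kubectl -n emojivoto get pods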

Congratulations! You've now added Linkerd to an application! Just as with the control plane, it's possible to verify that everything is working the way it should on the data plane side. Check your data plane with:

root@master:~# linkerd -n emojivoto check --proxy
kubernetes-api
--------------
√ can initialize the client
√ can query the Kubernetes API

kubernetes-version
------------------
√ is running the minimum Kubernetes API version

linkerd-existence
-----------------
√ 'linkerd-config' config map exists
√ heartbeat ServiceAccount exist
√ control plane replica sets are ready
√ no unschedulable pods
√ control plane pods are ready
√ cluster networks contains all node podCIDRs
√ cluster networks contains all pods
√ cluster networks contains all services

linkerd-config
--------------
√ control plane Namespace exists
√ control plane ClusterRoles exist
√ control plane ClusterRoleBindings exist
√ control plane ServiceAccounts exist
√ control plane CustomResourceDefinitions exist
√ control plane MutatingWebhookConfigurations exist
√ control plane ValidatingWebhookConfigurations exist
√ proxy-init container runs as root user if docker container runtime is used

linkerd-identity
----------------
√ certificate config is valid
√ trust anchors are using supported crypto algorithm
√ trust anchors are within their validity period
√ trust anchors are valid for at least 60 days
√ issuer cert is using supported crypto algorithm
√ issuer cert is within its validity period
√ issuer cert is valid for at least 60 days
√ issuer cert is issued by the trust anchor

linkerd-webhooks-and-apisvc-tls
-------------------------------
√ proxy-injector webhook has valid cert
√ proxy-injector cert is valid for at least 60 days
√ sp-validator webhook has valid cert
√ sp-validator cert is valid for at least 60 days
√ policy-validator webhook has valid cert
√ policy-validator cert is valid for at least 60 days

linkerd-identity-data-plane
---------------------------
√ data plane proxies certificate match CA

linkerd-version
---------------
√ can determine the latest version
√ cli is up-to-date

linkerd-control-plane-proxy
---------------------------
√ control plane proxies are healthy
√ control plane proxies are up-to-date
√ control plane proxies and cli versions match

linkerd-data-plane
------------------
√ data plane namespace exists
√ data plane proxies are ready
√ data plane is up-to-date
√ data plane and cli versions match
√ data plane pod labels are configured correctly
√ data plane service labels are configured correctly
√ data plane service annotations are configured correctly
√ opaque ports are properly annotated

Status check results are √

Explore Linkerd

Let's install the viz extension, which will install an on-cluster metric stack and dashboard.

root@master:~# linkerd viz install | kubectl apply -f -
namespace/linkerd-viz created
clusterrole.rbac.authorization.k8s.io/linkerd-linkerd-viz-metrics-api created
clusterrolebinding.rbac.authorization.k8s.io/linkerd-linkerd-viz-metrics-api created
serviceaccount/metrics-api created
clusterrole.rbac.authorization.k8s.io/linkerd-linkerd-viz-prometheus created
clusterrolebinding.rbac.authorization.k8s.io/linkerd-linkerd-viz-prometheus created
serviceaccount/prometheus created
clusterrole.rbac.authorization.k8s.io/linkerd-linkerd-viz-tap created
clusterrole.rbac.authorization.k8s.io/linkerd-linkerd-viz-tap-admin created
clusterrolebinding.rbac.authorization.k8s.io/linkerd-linkerd-viz-tap created
clusterrolebinding.rbac.authorization.k8s.io/linkerd-linkerd-viz-tap-auth-delegator created
serviceaccount/tap created
rolebinding.rbac.authorization.k8s.io/linkerd-linkerd-viz-tap-auth-reader created
secret/tap-k8s-tls created
apiservice.apiregistration.k8s.io/v1alpha1.tap.linkerd.io created
role.rbac.authorization.k8s.io/web created
rolebinding.rbac.authorization.k8s.io/web created
clusterrole.rbac.authorization.k8s.io/linkerd-linkerd-viz-web-check created
clusterrolebinding.rbac.authorization.k8s.io/linkerd-linkerd-viz-web-check created
clusterrolebinding.rbac.authorization.k8s.io/linkerd-linkerd-viz-web-admin created
clusterrole.rbac.authorization.k8s.io/linkerd-linkerd-viz-web-api created
clusterrolebinding.rbac.authorization.k8s.io/linkerd-linkerd-viz-web-api created
serviceaccount/web created
service/metrics-api created
deployment.apps/metrics-api created
server.policy.linkerd.io/metrics-api created
authorizationpolicy.policy.linkerd.io/metrics-api created
meshtlsauthentication.policy.linkerd.io/metrics-api-web created
networkauthentication.policy.linkerd.io/kubelet created
configmap/prometheus-config created
service/prometheus created
deployment.apps/prometheus created
server.policy.linkerd.io/prometheus-admin created
authorizationpolicy.policy.linkerd.io/prometheus-admin created
service/tap created
deployment.apps/tap created
server.policy.linkerd.io/tap-api created
authorizationpolicy.policy.linkerd.io/tap created
clusterrole.rbac.authorization.k8s.io/linkerd-tap-injector created
clusterrolebinding.rbac.authorization.k8s.io/linkerd-tap-injector created
serviceaccount/tap-injector created
secret/tap-injector-k8s-tls created
mutatingwebhookconfiguration.admissionregistration.k8s.io/linkerd-tap-injector-webhook-config created
service/tap-injector created
deployment.apps/tap-injector created
server.policy.linkerd.io/tap-injector-webhook created
authorizationpolicy.policy.linkerd.io/tap-injector created
networkauthentication.policy.linkerd.io/kube-api-server created
service/web created
deployment.apps/web created
serviceprofile.linkerd.io/metrics-api.linkerd-viz.svc.cluster.local created
serviceprofile.linkerd.io/prometheus.linkerd-viz.svc.cluster.local created

Once you've installed the extension, let's validate everything one last time

root@master:~# linkerd check

Let's Access the Dashboard

root@master:~# linkerd viz dashboard --address 0.0.0.0
Linkerd dashboard available at:
http://0.0.0.0:50750
Opening Linkerd dashboard in the default browser

Click around and explore the dashboard.
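
The same golden metrics are also available from the CLI; for example (a hedged sketch, assuming the viz extension installed above is healthy):

# per-deployment success rate, request rate and latencies
linkerd viz stat deploy -n emojivoto
# live sample of the requests flowing through the web deployment
linkerd viz tap deploy/web -n emojivoto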

Understanding OpenELB: An Open Source Load Balancer for BareMetal Kubernetes

Understanding OpenELB: An Open Source Load Balancer for BareMetal Kubernetes

Introduction

OpenELB is an exciting addition to the Kubernetes ecosystem, offering a powerful open-source load balancer solution for managing traffic within Kubernetes clusters. In this blog post, we'll dive into the world of OpenELB, exploring its features, architecture, and how it can benefit your Kubernetes deployments.

What is OpenELB?

- OpenELB, short for Open Elastic Load Balancer, is a cloud-native, software-defined load balancer designed specifically for Kubernetes environments. It provides advanced load balancing capabilities to distribute incoming traffic across your Kubernetes pods, ensuring high availability, scalability, and reliability for your applications.

- In cloud-based Kubernetes clusters, Services are usually exposed by using load balancers provided by cloud vendors. However, cloud-based load balancers are unavailable in bare-metal environments. OpenELB allows users to create LoadBalancer Services in bare-metal, edge, and virtualization environments for external access, and provides the same user experience as cloud-based load balancers.

Core Features

- ECMP routing and load balancing

- BGP mode and Layer 2 mode

- IP address pool management

- BGP configuration using CRDs

Key Features of OpenELB:

Dynamic Load Balancing: OpenELB dynamically routes traffic to healthy pods based on predefined rules and policies, ensuring efficient utilization of resources.

Scalability: With support for horizontal scaling, OpenELB can handle increased traffic loads by automatically adding or removing load balancer instances as needed.

High Availability: OpenELB ensures high availability by automatically detecting and redirecting traffic away from failed or unhealthy pods, minimizing downtime.

Layer 4 and Layer 7 Load Balancing: OpenELB supports both Layer 4 (TCP/UDP) and Layer 7 (HTTP/HTTPS) load balancing, allowing you to optimize traffic routing based on application requirements.

Integration with Kubernetes: OpenELB seamlessly integrates with Kubernetes, leveraging Kubernetes services and resources for easy configuration and management.

Customization and Extensibility: OpenELB provides a flexible architecture that allows for customization and extension through plugins and custom configurations.

Architecture of OpenELB

OpenELB follows a modular architecture, consisting of the following components:

Controller:

- The Controller serves as the brain of the OpenELB system. It manages the configuration, monitoring, and orchestration of load balancer instances.

- It interacts with the Kubernetes API server to gather information about services, endpoints, and pods within the cluster.

- The Controller continuously monitors the health and availability of backend pods and dynamically updates the configuration of load balancer instances based on changes in the cluster.

Load Balancer Instances:

- Load Balancer Instances are responsible for receiving incoming traffic and distributing it to backend pods.

- They can be deployed as independent instances or as part of a pool of load balancer nodes, depending on the scale and traffic requirements of the cluster.

- Each load balancer instance runs the necessary software components to perform traffic routing and load balancing, such as a reverse proxy or load balancing algorithm.

Service Discovery:

- OpenELB integrates with Kubernetes service discovery mechanisms to dynamically discover backend pods and their associated endpoints.

- It leverages Kubernetes Services to expose applications running in the cluster, allowing them to be accessed through a stable DNS name or IP address.

- Service Discovery ensures that OpenELB always has up-to-date information about the available backend pods and their network addresses.

Health Checking:

- Health Checking is an essential component of OpenELB responsible for monitoring the health and responsiveness of backend pods.

- It periodically sends health checks to backend pods to verify their availability and performance.

- If a backend pod becomes unhealthy or unresponsive, Health Checking notifies the Controller, which takes appropriate action to reroute traffic away from the problematic pod.

Data Store (Optional):

- In some deployments, OpenELB may utilize a data store to store configuration information, session state, or other metadata.

- Common data store choices include key-value stores like etcd or distributed databases like Cassandra.

- The Data Store provides a centralized repository for storing and retrieving information critical to the operation of OpenELB.

Install OpenELB on Kubernetes

Prerequisites

You need to prepare a Kubernetes cluster, and ensure that the Kubernetes version is 1.15 or later

OpenELB is designed to be used in bare-metal Kubernetes environments. However, you can also use a cloud-based Kubernetes cluster for learning and testing.

My Home Cluster

root@kmaster:~# kubectl get nodes -o wide
NAME      STATUS   ROLES           AGE   VERSION   INTERNAL-IP     EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION      CONTAINER-RUNTIME
kmaster   Ready    control-plane   95m   v1.26.0   172.16.16.100       <none>        Ubuntu 22.04.1 LTS   5.15.0-58-generic   containerd://1.6.28
worker1   Ready    <none>          75m   v1.26.0   172.16.16.101   <none>        Ubuntu 22.04.1 LTS   5.15.0-58-generic   containerd://1.6.28
worker2   Ready    <none>          75m   v1.26.0   172.16.16.102   <none>        Ubuntu 22.04.1 LTS   5.15.0-58-generic   containerd://1.6.28

Install OpenELB Using Helm

Log in to the Kubernetes cluster over SSH and run the following commands

root@master:~# helm repo add kubesphere-stable https://charts.kubesphere.io/stable
"kubesphere-stable" has been added to your repositories

root@master:~# helm repo update

root@master:~# kubectl create ns openelb-system
namespace/openelb-system created

root@master:~# helm install openelb kubesphere-stable/openelb -n openelb-system
NAME: openelb
LAST DEPLOYED: Mon Feb 26 10:15:28 2024
NAMESPACE: openelb-system
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
The OpenELB has been installed.

Run the following command to check whether the status of openelb-manager is READY: 1/1 and STATUS: Running. If yes, OpenELB has been installed successfully.

root@master:~# kubectl get po -n openelb-system
NAME                              READY   STATUS      RESTARTS   AGE
openelb-admission-create-b9h89    0/1     Completed   0          113s
openelb-admission-patch-d77jx     0/1     Completed   0          113s
openelb-keepalive-vip-5k5pp       1/1     Running     0          75s
openelb-keepalive-vip-6p6zq       1/1     Running     0          75s
openelb-manager-8c8b9b89c-kdn2l   1/1     Running     0          113s

root@kmaster:~# kubectl get validatingwebhookconfiguration
NAME                WEBHOOKS   AGE
openelb-admission   1          7m47s

root@kmaster:~# kubectl get mutatingwebhookconfigurations
NAME                WEBHOOKS   AGE
openelb-admission   1          7m55s

root@kmaster:~# kubectl get crd |grep kubesphere
bgpconfs.network.kubesphere.io                        2024-02-26T07:02:48Z
bgppeers.network.kubesphere.io                        2024-02-26T07:02:48Z
eips.network.kubesphere.io                            2024-02-26T07:02:48Z

Configuration

We have different use cases to configure OpenELB:

- Configure IP Address Pools Using Eip

- Configure OpenELB in BGP Mode

- Configure OpenELB for Multi-Router Clusters

- Configure Multiple OpenELB Replicas

I am using Eip to configure OpenELB on the cluster.

Configure IP Address Pools Using Eip

Currently, OpenELB supports only IPv4 and will soon support IPv6.

For Layer 2 mode, kube-proxy (when it runs in IPVS mode) must have strictARP enabled, so edit its ConfigMap:

root@master:~# kubectl edit configmap kube-proxy -n kube-system
......
ipvs:
  strictARP: true
......

root@master:~# kubectl rollout restart daemonset kube-proxy -n kube-system
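
Optionally, confirm that the change was picked up before creating the Eip object:

# should print "strictARP: true"
kubectl -n kube-system get configmap kube-proxy -o yaml | grep strictARP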

root@master:~# cat eip.yaml
apiVersion: network.kubesphere.io/v1alpha2
kind: Eip
metadata:
  name: eip-pool
spec:
  address: 172.16.16.190-172.16.16.240
  protocol: layer2
  disable: false
  interface: eth0

root@kmaster:~# kubectl apply -f eip.yaml
eip.network.kubesphere.io/eip-pool configured

Verify the Pool status

root@kmaster:~# kubectl get eip
NAME       CIDR                          USAGE   TOTAL
eip-pool   172.16.16.190-172.16.16.240   1       51

root@kmaster:~# kubectl get eip eip-pool -oyaml
apiVersion: network.kubesphere.io/v1alpha2
kind: Eip
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"network.kubesphere.io/v1alpha2","kind":"Eip","metadata":{"annotations":{},"name":"eip-pool"},"spec":{"address":"172.16.16.190-172.16.16.240","disable":false,"interface":"eth0","protocol":"layer2"}}
  creationTimestamp: "2024-02-26T07:05:46Z"
  finalizers:
  - finalizer.ipam.kubesphere.io/v1alpha1
  generation: 3
  name: eip-pool
  resourceVersion: "4809"
  uid: d2b70a8a-04ae-4b1c-b8bd-d317d9ea67f1
spec:
  address: 172.16.16.190-172.16.16.240
  disable: false
  interface: eth0
  protocol: layer2
status:
  firstIP: 172.16.16.190
  lastIP: 172.16.16.240
  poolSize: 51
  ready: true
  v4: true

Let's deploy an application and expose it with a LoadBalancer Service

root@kmaster:~# cat nginx-deploy.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:latest
          ports:
            - containerPort: 80

root@kmaster:~# cat svc.yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx
  annotations:
    lb.kubesphere.io/v1alpha1: openelb  #to specify that the Service uses OpenELB
    protocol.openelb.kubesphere.io/v1alpha1: layer2 #specifies that OpenELB is used in Layer2 mode
    eip.openelb.kubesphere.io/v1alpha2: eip-pool #specifies the Eip object used by OpenELB
spec:
  selector:
    app: nginx
  type: LoadBalancer
  ports:
    - name: http
      port: 80
      targetPort: 80

root@kmaster:~# kubectl apply -f nginx-deploy.yaml

root@kmaster:~# kubectl apply -f svc.yaml

root@kmaster:~# kubectl get po,svc
NAME                         READY   STATUS    RESTARTS   AGE
pod/nginx-7f456874f4-knvht   1/1     Running   0          16m

NAME                 TYPE           CLUSTER-IP       EXTERNAL-IP     PORT(S)        AGE
service/nginx        LoadBalancer   10.104.174.233   172.16.16.190   80:31282/TCP   15m

Verify that the application is accessible on the assigned load balancer IP

root@kmaster:~# curl 172.16.16.190
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
html { color-scheme: light dark; }
body { width: 35em; margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif; }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>

OpenELB vs MetalLB

MetalLB and OpenELB are both solutions for providing load balancing functionality in Kubernetes environments, but they have different approaches and features. Let's compare them based on various aspects

Architecture:

MetalLB: MetalLB is a load balancer implementation for bare-metal Kubernetes clusters that works in Layer 2 or BGP (Layer 3) mode. It uses standard network protocols (ARP/NDP in Layer 2 mode, BGP in Layer 3 mode) to announce Service IPs so that external traffic reaches the cluster. MetalLB runs as a controller plus a set of speaker components that interact with the network infrastructure to manage IP address allocation and announcements.

OpenELB: OpenELB (Open Elastic Load Balancer) is a load balancer implementation designed for bare-metal, edge, and virtualized Kubernetes environments. It is Kubernetes-native, leveraging Custom Resource Definitions (CRDs) such as the Eip object used above, together with in-cluster controllers, to manage load-balancer resources. It exposes LoadBalancer-type Services in Layer 2, BGP, or VIP mode.

Installation and Configuration:

MetalLB: MetalLB is installed as controller and speaker components using YAML manifests or Helm charts. You then configure address pools (and BGP peering, if used) so that it can allocate IP addresses and announce them to the network.

OpenELB: OpenELB is likewise installed from YAML manifests or a Helm chart. IP address pools and load-balancing behavior are configured through CRDs (such as the Eip object) and per-Service annotations, as shown earlier.
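
For a concrete comparison with the Eip object shown earlier, here is a minimal sketch of an equivalent MetalLB configuration (this assumes MetalLB v0.13 or later, which uses CRD-based configuration; the names lab-pool and lab-l2 are illustrative):

apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: lab-pool
  namespace: metallb-system
spec:
  addresses:
    - 172.16.16.190-172.16.16.240
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: lab-l2
  namespace: metallb-system
spec:
  ipAddressPools:
    - lab-pool

With this in place, MetalLB hands out addresses from the pool to any LoadBalancer Service automatically, whereas OpenELB selects the pool through the per-Service annotations used in svc.yaml above.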

Features:

MetalLB: MetalLB primarily focuses on providing simple, network-level load balancing capabilities for Kubernetes services. It supports both Layer 2 and Layer 3 load balancing modes and allows for the allocation of IP addresses from specific address pools.

OpenELB: OpenELB covers a similar feature set but is built entirely around Kubernetes CRDs. It supports Layer 2, BGP, and VIP modes, allows multiple Eip pools to be defined, and selects the pool and protocol per Service through annotations (as in the example above).

Community and Adoption:

MetalLB: MetalLB has gained significant adoption within the Kubernetes community and is widely used for providing load balancing functionality in on-premises and bare-metal Kubernetes deployments.

OpenELB: OpenELB is a newer project compared to MetalLB but has been gaining traction as a Kubernetes-native load balancing solution. It is supported by KubeSphere, an open-source Kubernetes platform, and has been integrated into various Kubernetes distributions and platforms.

Flexibility and Customization:

MetalLB: MetalLB provides basic load balancing functionality with support for customizing IP address allocation and routing configurations. It is suitable for simple use cases and environments where network infrastructure integration is straightforward.

OpenELB: OpenELB's CRD-based model offers declarative customization: multiple Eip pools, per-Service pool and protocol selection via annotations, and BGP peering configured through dedicated CRDs.

In summary, MetalLB and OpenELB are both effective solutions for providing load balancing in Kubernetes environments, but they cater to different use cases and offer varying levels of features and functionality. The choice between MetalLB and OpenELB depends on your specific requirements, preferences, and the complexity of your Kubernetes deployment.

Streamlining Configuration Management with Ansible Semaphore's Intuitive UI

Configuration Management Tool Ansible

Ansible is a powerful configuration management tool that enables organizations to automate the provisioning, configuration, and management of IT infrastructure in a simple, efficient, and consistent manner. It uses a declarative language and follows the Infrastructure as Code (IaC) paradigm, allowing administrators to define infrastructure configurations in human-readable YAML files.

Key features and benefits of Ansible as a configuration management tool include:

Agentless Architecture: Ansible operates in an agentless manner, which means it doesn't require any software to be installed on managed nodes. This simplifies deployment and reduces overhead, making it easy to manage a large number of systems.

Simple and Human-Readable Syntax: Ansible playbooks, written in YAML, are easy to read, write, and understand. This enables infrastructure configurations to be managed efficiently by both system administrators and developers.

Idempotent Execution: Ansible ensures idempotent execution, meaning that running the same playbook multiple times will result in the same desired state, regardless of the initial state of the system. This ensures consistency and predictability in configuration management tasks (see the short playbook sketch after this list).

Module-Based Architecture: Ansible provides a wide range of modules that abstract common system administration tasks, such as package management, file manipulation, user management, and service control. These modules can be used to configure various aspects of the infrastructure without the need for custom scripting.

Role-Based Organization: Ansible allows administrators to organize playbooks and tasks into reusable units called roles. Roles promote modularity, code reuse, and maintainability, making it easier to manage complex infrastructure configurations.

Integration and Extensibility: Ansible integrates seamlessly with existing tools, systems, and processes, allowing for easy integration into CI/CD pipelines, monitoring systems, and other automation workflows. Additionally, Ansible's modular architecture makes it extensible, allowing users to develop custom modules and plugins to extend its functionality.

Community and Ecosystem: Ansible benefits from a large and active community of users, contributors, and developers. This vibrant ecosystem provides access to a wealth of pre-built roles, modules, and playbooks through Ansible Galaxy, accelerating development and reducing time to value.
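
To make the idempotent, module-based style described above concrete, here is a minimal illustrative playbook (a sketch, not part of the repository used later in this post). Running it a second time reports "ok" instead of "changed" for every task:

---
- hosts: worker
  become: yes
  tasks:
    - name: Ensure htop is installed           # apt module acts only if the package is missing
      apt:
        name: htop
        state: present

    - name: Ensure /opt/app directory exists   # file module converges to the declared state
      file:
        path: /opt/app
        state: directory
        mode: '0755'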

Ways of Using Ansible

We can use Ansible in two ways: 1) a control node pushing code to managed nodes, or 2) Ansible Tower/AWX (a UI-based platform that helps large organizations manage automation centrally).

In this blog I will discuss the first way, where we have one control node and push the code from it to all the managed nodes. This approach works well for a small organization, but it has drawbacks: there is no history of successful and failed jobs, and an admin has to log in to the control node (or rely on a CI/CD tool) to run a playbook.
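
In that model, every run is a manual command on the control node, for example (a typical invocation from the k8s-install project directory set up earlier):

ansible-playbook main.yaml

The output lives only in that terminal session; Semaphore, described below, wraps the same run in a web UI with history, logs, and access control.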

Now we have a tool called Ansible Semaphore: with it we can run our playbooks from a web UI and keep track of successful and failed jobs.

Configure Ansible Semaphore on your control node.

Installing Semaphore using Snap

root@master:~# apt install snapd

root@master:~# snap install semaphore

root@master:~# snap stop semaphore

Add an admin user to log in to the Semaphore UI

root@master:~# sudo semaphore user add --admin --login amit --name="Amit Chaturvedi" --email=test@gmail.com --password=1234

root@master:~# snap start semaphore
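
Optionally, before opening a browser, check that the snap service is back up and that the web UI port is listening (Semaphore's UI listens on port 3000 by default; adjust if you have changed it):

root@master:~# snap services semaphore
root@master:~# ss -lntp | grep 3000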

Let's log in to the Semaphore UI.

Use the same username and password we created with the command above.

We will land on a page that asks us to create a new project.

Click on New Project and create a new project.

I am keeping my playbooks in a GitHub repo.

Let's run the playbook from the UI.

Step 1: Create a Key Store entry to store the credentials used to clone the Git repo.

Click on Key Store.

Click on New Key.

In the Username/Password fields, give the PAT (personal access token) created in your GitHub account.

Select the type and create the key.

Step 2: Create an Environment.

In Extra variables I am not passing anything, so I keep it empty.

Create the Environment.

Step 3: Create an Inventory.

Click on Create Inventory.

We have to provide the inventory, i.e., the list of managed nodes.

There are three types: 1) File, 2) Static, and 3) Static YAML. I am using the File type, which points to an inventory file that already exists on the control node; a Static YAML sketch of the same hosts is shown below for reference.

Click on Create Inventory to save it.
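
For reference, if you prefer the Static YAML type instead, the same lab hosts could be expressed directly in the UI roughly like this (standard Ansible YAML inventory format; nothing else in the walkthrough changes):

all:
  children:
    master:
      hosts:
        master.mylabserver.com:
    worker:
      hosts:
        worker1.mylabserver.com:
        worker2.mylabserver.com: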

Step 4: Create a Repository entry pointing to where you keep the code, so Semaphore can clone the repo locally.

Provide the details and save it

Step 5: Let's create a Task Template.

Click on Create New Template.

Provide the details: the playbook filename in your repo (for example, main.yaml), plus the inventory, repository, environment, and credentials created in the previous steps.

Let's run the template and see the magic.

Click on Run

If everything is configured properly, your playbook will run successfully.

Now click on Dashboard to see the status.

I hope you now see how easy it is to manage Ansible with Semaphore.