Configuring NGINX Ingress Controller in Bare Metal Kubernetes using MetalLB

In this guide, we'll walk through the process of setting up the NGINX Ingress Controller in a bare metal Kubernetes cluster using MetalLB. MetalLB provides a load-balancer implementation for environments that do not natively support one, such as bare metal clusters.

Prerequisites

Before you begin, ensure you have the following:

  • A Kubernetes cluster up and running on bare metal nodes.
  • kubectl configured to interact with your cluster.
  • Helm (the package manager for Kubernetes) installed.

Step 1: Install MetalLB

MetalLB is a load balancer implementation for bare metal Kubernetes clusters. We'll start by installing MetalLB.

root@master:~# kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.12.1/manifests/namespace.yaml
root@master:~# kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.12.1/manifests/metallb.yaml

root@master:~# kubectl get pods -n metallb-system
NAME                                  READY   STATUS    RESTARTS       AGE
metallb-controller-665d96757f-kqlx4   1/1     Running   11 (23h ago)   26h
metallb-speaker-5z4cc                 4/4     Running   18 (23h ago)   4d2h
metallb-speaker-qjrlt                 4/4     Running   22 (23h ago)   4d2h
metallb-speaker-rmnln                 4/4     Running   17 (24h ago)   4d2h

Create an IPAddressPool

root@master:~# cat metallb-system.yaml
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: cheap
  namespace: metallb-system
spec:
  addresses:
  - 172.16.16.150-172.16.16.200

Apply the above in the metallb-system namespace:

root@master:~# kubectl apply -f metallb-system.yaml

root@master:~# kubectl get IPAddressPool -n metallb-system
NAME    AUTO ASSIGN   AVOID BUGGY IPS   ADDRESSES
cheap   true          false             ["172.16.16.150-172.16.16.200"]
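
Note: the IPAddressPool and L2Advertisement CRDs ship with MetalLB v0.13 and later; releases such as the v0.12.1 manifests referenced above configure address pools through a ConfigMap instead. On a CRD-based release, Layer 2 mode also needs an L2Advertisement object referencing the pool. A minimal sketch (the resource and file names are illustrative):

root@master:~# cat l2-advertisement.yaml
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: cheap-l2
  namespace: metallb-system
spec:
  ipAddressPools:
  - cheap

root@master:~# kubectl apply -f l2-advertisement.yaml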

Step 2: Install NGINX Ingress Controller

1. Add the NGINX Helm repository:

root@master:/home/chicco# helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
"ingress-nginx" already exists with the same configuration, skipping

root@master:/home/chicco# helm search repo nginx
NAME                            CHART VERSION   APP VERSION     DESCRIPTION
ingress-nginx/ingress-nginx     4.10.1          1.10.1          Ingress controller for Kubernetes using NGINX a...

2. Install the NGINX Ingress Controller:

root@master:/home/chicco# kubectl create ns ingress-nginx

root@master:/home/chicco# helm install ingress-nginx ingress-nginx/ingress-nginx -n ingress-nginx 
NAME: ingress-nginx
LAST DEPLOYED: Thu Jun 27 12:46:33 2024
NAMESPACE: ingress-nginx
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
The ingress-nginx controller has been installed.
It may take a few minutes for the load balancer IP to be available.
You can watch the status by running 'kubectl get service --namespace ingress-nginx ingress-nginx-controller --output wide --watch'

An example Ingress that makes use of the controller:
  apiVersion: networking.k8s.io/v1
  kind: Ingress
  metadata:
    name: example
    namespace: foo
  spec:
    ingressClassName: nginx
    rules:
      - host: www.example.com
        http:
          paths:
            - pathType: Prefix
              backend:
                service:
                  name: exampleService
                  port:
                    number: 80
              path: /
    # This section is only required if TLS is to be enabled for the Ingress
    tls:
      - hosts:
        - www.example.com
        secretName: example-tls

If TLS is enabled for the Ingress, a Secret containing the certificate and key must also be provided:

  apiVersion: v1
  kind: Secret
  metadata:
    name: example-tls
    namespace: foo
  data:
    tls.crt: <base64 encoded cert>
    tls.key: <base64 encoded key>
  type: kubernetes.io/tls

3. Check the NGINX Ingress Controller resources in the ingress-nginx namespace:

root@master:/home/chicco# kubectl get all -n ingress-nginx
NAME                                           READY   STATUS    RESTARTS   AGE
pod/ingress-nginx-controller-c8f499cfc-q25jq   1/1     Running   0          39s

NAME                                         TYPE           CLUSTER-IP      EXTERNAL-IP     PORT(S)                      AGE
service/ingress-nginx-controller             LoadBalancer   10.103.214.79   172.16.16.150   80:31311/TCP,443:32197/TCP   40s
service/ingress-nginx-controller-admission   ClusterIP      10.97.68.133    <none>          443/TCP                      40s

NAME                                       READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/ingress-nginx-controller   1/1     1            1           39s

NAME                                                 DESIRED   CURRENT   READY   AGE
replicaset.apps/ingress-nginx-controller-c8f499cfc   1         1         1       39s

We can see that a LoadBalancer IP has been allocated to the ingress controller by MetalLB.
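
If you want the controller to receive a specific address from the MetalLB pool rather than the first free one, the chart exposes a value for this; a sketch (the address must lie inside the pool defined in Step 1):

root@master:/home/chicco# helm upgrade ingress-nginx ingress-nginx/ingress-nginx -n ingress-nginx \
    --set controller.service.loadBalancerIP=172.16.16.150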

Step 3: Deploy Sample Applications

Create an image pull secret (in the ingress-test namespace, created below) and the following Deployments for the red, blue, and green apps:

root@master:/opt/kubernetes# kubectl create secret generic my-docker-secret  --from-file=.dockerconfigjson=$HOME/.docker/config.json   --type=kubernetes.io/dockerconfigjson     -n ingress-test
secret/my-docker-secret created
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    run: blue-app
  name: blue-app
spec:
  replicas: 1
  selector:
    matchLabels:
      run: blue-app
  template:
    metadata:
      labels:
        run: blue-app
    spec:
      containers:
      - image: omvedi25/main-blue:v1
        name: blue-app
      imagePullSecrets:
      - name: my-docker-secret
-----------------------------------
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    run: green-app
  name: green-app
spec:
  replicas: 1
  selector:
    matchLabels:
      run: green-app
  template:
    metadata:
      labels:
        run: green-app
    spec:
      containers:
      - image: omvedi25/main-green:v1
        name: green-app
      imagePullSecrets:
      - name: my-docker-secret
-------------------------------------
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    run: red-app
  name: red-app
spec:
  replicas: 1
  selector:
    matchLabels:
      run: red-app
  template:
    metadata:
      labels:
        run: red-app
    spec:
      containers:
      - image: omvedi25/main-red:v1
        name: red-app
      imagePullSecrets:
      - name: my-docker-secret

root@master:/opt/kubernetes# kubectl create ns ingress-test
root@master:/opt/kubernetes# kubectl apply -f nginx-deploy-red.yaml -n ingress-test
deployment.apps/red-app created

root@master:/opt/kubernetes# kubectl apply -f nginx-deploy-blue.yaml -n ingress-test
deployment.apps/blue-app created

root@master:/opt/kubernetes# kubectl apply -f nginx-deploy-green.yaml -n ingress-test
deployment.apps/green-app created

Expose the Services

root@master:/opt/kubernetes/# kubectl expose deploy green-app --port 80 -n ingress-test

root@master:/opt/kubernetes/# kubectl expose deploy red-app --port 80 -n ingress-test

root@master:/opt/kubernetes/# kubectl expose deploy blue-app --port 80 -n ingress-test

Verify The Service

root@master:/opt/kubernetes# kubectl get svc -n ingress-test
NAME                 TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)   AGE
blue-app             ClusterIP   10.109.208.98    <none>        80/TCP    25h
green-app            ClusterIP   10.110.140.247   <none>        80/TCP    25h
red-app              ClusterIP   10.98.179.2      <none>        80/TCP    25h

Deploy The Ingress Resource

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
  name: ingress-resource
spec:
  ingressClassName: nginx
  rules:
  - host: nginx.arobyte.tech
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: red-app
            port:
              number: 80
      - path: /blue
        pathType: Prefix
        backend:
          service:
            name: blue-app
            port:
              number: 80
      - path: /green
        pathType: Prefix
        backend:
          service:
            name: green-app
            port:
              number: 80
root@master:/opt/kubernetes# kubectl apply -f ingress-resource.yaml -n ingress-test

root@master:~# kubectl get ingress -n ingress-test
NAME                 CLASS   HOSTS                ADDRESS         PORTS   AGE
ingress-resource   nginx   nginx.arobyte.tech   172.16.16.150   80      25h
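
If nginx.arobyte.tech does not resolve in your DNS, point it at the LoadBalancer IP locally before testing, for example:

root@master:~# echo "172.16.16.150 nginx.arobyte.tech" >> /etc/hosts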

Verify now that it is working as expected:

root@master:~# curl http://nginx.arobyte.tech
<h1><font color=red>Welcome to Ingress Controller</font></h1>

root@master:~# curl http://nginx.arobyte.tech/blue
<h1><font color=blue>Welcome to Ingress Controller</font></h1>

root@master:~# curl http://nginx.arobyte.tech/green
<h1><font color=green>Welcome to Ingress Controller</font></h1>

Conclusion

We have successfully configured the NGINX Ingress Controller in a bare metal Kubernetes cluster using MetalLB. This setup allows you to manage and expose your Kubernetes services with ease, providing a robust load balancing solution in a bare metal environment. Feel free to explore more advanced configurations and features of MetalLB and NGINX Ingress Controller to suit your specific needs. Happy Kubernetes-ing!

Enhancing MySQL Security: Implementing HashiCorp Vault for Secure Authentication

Introduction

Securing database credentials is a critical aspect of protecting your data infrastructure. HashiCorp Vault offers a sophisticated solution for managing secrets and sensitive information, providing dynamic access controls to databases. This guide will demonstrate how to integrate HashiCorp Vault with a MySQL database to enhance security through dynamic credential management.

Prerequisites

Before proceeding, ensure you have the following:

  • A MySQL database instance up and running.
  • A HashiCorp Vault instance installed and running.
  • Basic familiarity with MySQL and Vault operations.
  • Appropriate permissions to configure both Vault and MySQL.

Install and Configure HashiCorp Vault

root@06432e7f921c:~# apt update;apt install snap;snap install vault
Hit:1 http://ap-south-1.ec2.archive.ubuntu.com/ubuntu focal InRelease
Hit:2 http://ap-south-1.ec2.archive.ubuntu.com/ubuntu focal-updates InRelease
Hit:3 http://ap-south-1.ec2.archive.ubuntu.com/ubuntu focal-backports InRelease
Hit:4 http://security.ubuntu.com/ubuntu focal-security InRelease

Start Vault Server and Login to Vault

root@06432e7f921c:~# vault server -dev -dev-listen-address=0.0.0.0:8200 &

root@06432e7f921c:~# export VAULT_ADDR='http://0.0.0.0:8200'
root@06432e7f921c:~# vault login
Token (will be hidden): 
Success! You are now authenticated. The token information displayed below
is already stored in the token helper. You do NOT need to run "vault login"
again. Future Vault requests will automatically use this token.

Key                  Value
---                  -----
token                hvs.sxxxxxxxxxxxxxxxxxxxxxxxx
token_accessor       2xxxxxxxxxxxxxxxxxxxxxxx
token_duration       ∞
token_renewable      false
token_policies       ["root"]
identity_policies    []
policies             ["root"]

Install MariaDB Database

root@06432e7f922c:/home/cloud_user# apt install mariadb-server mariadb-client
Setting up libconfig-inifiles-perl (3.000002-1) ...
Setting up libcgi-pm-perl (4.46-1) ...
Setting up libhtml-template-perl (2.97-1) ...
Setting up libsnappy1v5:amd64 (1.1.8-1build1) ...
Setting up socat (1.7.3.3-2) ...
Setting up mariadb-server-core-10.3 (1:10.3.39-0ubuntu0.20.04.2) ...
Setting up galera-3 (25.3.29-1) ...
Setting up mariadb-client-core-10.3 (1:10.3.39-0ubuntu0.20.04.2) ...
Setting up libfcgi-perl (0.79-1) ...
Setting up libterm-readkey-perl (2.38-1build1) ...
Setting up libdbi-perl:amd64 (1.643-1ubuntu0.1) ...
Setting up libcgi-fast-perl (1:2.15-1) ...
Setting up mariadb-client-10.3 (1:10.3.39-0ubuntu0.20.04.2) ...
Setting up libdbd-mysql-perl:amd64 (4.050-3ubuntu0.2) ...
Setting up mariadb-server-10.3 (1:10.3.39-0ubuntu0.20.04.2) ...
Created symlink /etc/systemd/system/mysql.service → /lib/systemd/system/mariadb.service.
Created symlink /etc/systemd/system/mysqld.service → /lib/systemd/system/mariadb.service.
Created symlink /etc/systemd/system/multi-user.target.wants/mariadb.service → /lib/systemd/system/mariadb.service.
Setting up mariadb-client (1:10.3.39-0ubuntu0.20.04.2) ...
Setting up mariadb-server (1:10.3.39-0ubuntu0.20.04.2) ...
Processing triggers for systemd (245.4-4ubuntu3.23) ...
Processing triggers for man-db (2.9.1-1) ...
Processing triggers for libc-bin (2.31-0ubuntu9.16) ...

root@06432e7f922c:/home/cloud_user# systemctl start mariadb
root@06432e7f922c:/home/cloud_user# systemctl status mariadb
● mariadb.service - MariaDB 10.3.39 database server
     Loaded: loaded (/lib/systemd/system/mariadb.service; enabled; vendor preset: enabled)
     Active: active (running) since Mon 2024-06-03 17:35:59 UTC; 4min 24s ago
       Docs: man:mysqld(8)
             https://mariadb.com/kb/en/library/systemd/
   Main PID: 34046 (mysqld)
     Status: "Taking your SQL requests now..."
      Tasks: 31 (limit: 2299)
     Memory: 63.1M
     CGroup: /system.slice/mariadb.service
             └─34046 /usr/sbin/mysqld

Jun 03 17:35:59 06432e7f922c.mylabserver.com systemd[1]: Starting MariaDB 10.3.39 database server...
Jun 03 17:35:59 06432e7f922c.mylabserver.com systemd[1]: Started MariaDB 10.3.39 database server.
Jun 03 17:35:59 06432e7f922c.mylabserver.com /etc/mysql/debian-start[34081]: Upgrading MySQL tables if necessary.
Jun 03 17:35:59 06432e7f922c.mylabserver.com /etc/mysql/debian-start[34084]: Looking for 'mysql' as: /usr/bin/mysql
Jun 03 17:35:59 06432e7f922c.mylabserver.com /etc/mysql/debian-start[34084]: Looking for 'mysqlcheck' as: /usr/bin/mysqlcheck
Jun 03 17:35:59 06432e7f922c.mylabserver.com /etc/mysql/debian-start[34084]: This installation of MariaDB is already upgraded to 10.3.39-MariaDB.
Jun 03 17:35:59 06432e7f922c.mylabserver.com /etc/mysql/debian-start[34084]: There is no need to run mysql_upgrade again for 10.3.39-MariaDB.
Jun 03 17:35:59 06432e7f922c.mylabserver.com /etc/mysql/debian-start[34084]: You can use --force if you still want to run mysql_upgrade
Jun 03 17:35:59 06432e7f922c.mylabserver.com /etc/mysql/debian-start[34092]: Checking for insecure root accounts.
Jun 03 17:35:59 06432e7f922c.mylabserver.com /etc/mysql/debian-start[34097]: Triggering myisam-recover for all MyISAM tables and aria-recover for all Aria tables

Secure the MariaDB Installation

root@06432e7f922c:/home/cloud_user# mysql_secure_installation 

NOTE: RUNNING ALL PARTS OF THIS SCRIPT IS RECOMMENDED FOR ALL MariaDB
      SERVERS IN PRODUCTION USE!  PLEASE READ EACH STEP CAREFULLY!

In order to log into MariaDB to secure it, we'll need the current
password for the root user.  If you've just installed MariaDB, and
you haven't set the root password yet, the password will be blank,
so you should just press enter here.

Enter current password for root (enter for none): 
By default, a MariaDB installation has an anonymous user, allowing anyone
to log into MariaDB without having to have a user account created for
them.  This is intended only for testing, and to make the installation
go a bit smoother.  You should remove them before moving into a
production environment.

Remove anonymous users? [Y/n] Y
 ... Success!

Normally, root should only be allowed to connect from 'localhost'.  This
ensures that someone cannot guess at the root password from the network.

Disallow root login remotely? [Y/n] Y
 ... Success!

By default, MariaDB comes with a database named 'test' that anyone can
access.  This is also intended only for testing, and should be removed
before moving into a production environment.

Remove test database and access to it? [Y/n] Y
 - Dropping test database...
 ... Success!
 - Removing privileges on test database...
 ... Success!

Reloading the privilege tables will ensure that all changes made so far
will take effect immediately.

Reload privilege tables now? [Y/n] Y
 ... Success!

Cleaning up...

All done!  If you've completed all of the above steps, your MariaDB
installation should now be secure.

Thanks for using MariaDB!

Testing the connection

root@06432e7f922c:/home/cloud_user# mysql -u root -p
Enter password: 
Welcome to the MariaDB monitor.  Commands end with ; or \g.
Your MariaDB connection id is 44
Server version: 10.3.39-MariaDB-0ubuntu0.20.04.2 Ubuntu 20.04

Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

MariaDB [(none)]> 

To allow remote connections, set bind-address to 0.0.0.0 in /etc/mysql/mariadb.conf.d/50-server.cnf and restart MariaDB:

root@06432e7f922c:/home/cloud_user# cat /etc/mysql/mariadb.conf.d/50-server.cnf | grep bind
bind-address            = 0.0.0.0

root@06432e7f922c:/home/cloud_user# systemctl restart mariadb
root@06432e7f922c:/home/cloud_user# systemctl status mariadb
● mariadb.service - MariaDB 10.3.39 database server
     Loaded: loaded (/lib/systemd/system/mariadb.service; enabled; vendor preset: enabled)
     Active: active (running) since Mon 2024-06-03 17:56:26 UTC; 6s ago
       Docs: man:mysqld(8)
             https://mariadb.com/kb/en/library/systemd/
    Process: 34815 ExecStartPre=/usr/bin/install -m 755 -o mysql -g root -d /var/run/mysqld (code=exited, status=0/SUCCESS)
    Process: 34816 ExecStartPre=/bin/sh -c systemctl unset-environment _WSREP_START_POSITION (code=exited, status=0/SUCCESS)
    Process: 34818 ExecStartPre=/bin/sh -c [ ! -e /usr/bin/galera_recovery ] && VAR= ||   VAR=`cd /usr/bin/..; /usr/bin/galera_recovery`; [ $? -eq 0 ]   && systemctl set-e>
    Process: 34897 ExecStartPost=/bin/sh -c systemctl unset-environment _WSREP_START_POSITION (code=exited, status=0/SUCCESS)
    Process: 34899 ExecStartPost=/etc/mysql/debian-start (code=exited, status=0/SUCCESS)
   Main PID: 34866 (mysqld)
     Status: "Taking your SQL requests now..."
      Tasks: 31 (limit: 2299)
     Memory: 62.9M
     CGroup: /system.slice/mariadb.service
             └─34866 /usr/sbin/mysqld

Jun 03 17:56:26 06432e7f922c.mylabserver.com systemd[1]: Starting MariaDB 10.3.39 database server...
Jun 03 17:56:26 06432e7f922c.mylabserver.com systemd[1]: Started MariaDB 10.3.39 database server.
Jun 03 17:56:26 06432e7f922c.mylabserver.com /etc/mysql/debian-start[34901]: Upgrading MySQL tables if necessary.
Jun 03 17:56:26 06432e7f922c.mylabserver.com /etc/mysql/debian-start[34904]: Looking for 'mysql' as: /usr/bin/mysql
Jun 03 17:56:26 06432e7f922c.mylabserver.com /etc/mysql/debian-start[34904]: Looking for 'mysqlcheck' as: /usr/bin/mysqlcheck
Jun 03 17:56:26 06432e7f922c.mylabserver.com /etc/mysql/debian-start[34904]: This installation of MariaDB is already upgraded to 10.3.39-MariaDB.
Jun 03 17:56:26 06432e7f922c.mylabserver.com /etc/mysql/debian-start[34904]: There is no need to run mysql_upgrade again for 10.3.39-MariaDB.
Jun 03 17:56:26 06432e7f922c.mylabserver.com /etc/mysql/debian-start[34904]: You can use --force if you still want to run mysql_upgrade

root@06432e7f922c:/home/cloud_user# netstat -pant | grep 3306
tcp        0      0 0.0.0.0:3306            0.0.0.0:*               LISTEN      34866/mysqld

Create a User and Database, and Grant Permissions

root@06432e7f922c:/home/cloud_user# mysql -u root -p
Enter password: 
Welcome to the MariaDB monitor.  Commands end with ; or \g.
Your MariaDB connection id is 36
Server version: 10.3.39-MariaDB-0ubuntu0.20.04.2 Ubuntu 20.04

Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

MariaDB [(none)]> create database book_info;
Query OK, 1 row affected (0.000 sec)

MariaDB [(none)]> create user 'vault'@'%' identified by '<your-password>';
Query OK, 0 rows affected (0.000 sec)

MariaDB [(none)]> grant all privileges on book_info.* to 'vault'@'%' with grant option; 
Query OK, 0 rows affected (0.000 sec)

MariaDB [(none)]> grant create user on *.* to 'vault'@'%';
Query OK, 0 rows affected (0.000 sec)

MariaDB [(none)]> FLUSH PRIVILEGES;
Query OK, 0 rows affected (0.000 sec)

Create a Table in the book_info Database and Insert Some Data

MariaDB [(none)]> use book_info
Database changed
MariaDB [book_info]> CREATE TABLE book_info (
    ->     id INT AUTO_INCREMENT PRIMARY KEY,
    ->     bookname VARCHAR(255) NOT NULL,
    ->     authorname VARCHAR(255) NOT NULL
    -> );
Query OK, 0 rows affected (0.078 sec)

MariaDB [book_info]> INSERT INTO book_info (bookname, authorname) VALUES
    -> ('To Kill a Mockingbird', 'Harper Lee'),
    -> ('1984', 'George Orwell'),
    -> ('Pride and Prejudice', 'Jane Austen'),
    -> ('The Great Gatsby', 'F. Scott Fitzgerald'),
    -> ('Moby Dick', 'Herman Melville');
Query OK, 5 rows affected (0.016 sec)
Records: 5  Duplicates: 0  Warnings: 0

MariaDB [book_info]> SELECT * FROM book_info;
+----+-----------------------+---------------------+
| id | bookname              | authorname          |
+----+-----------------------+---------------------+
|  1 | To Kill a Mockingbird | Harper Lee          |
|  2 | 1984                  | George Orwell       |
|  3 | Pride and Prejudice   | Jane Austen         |
|  4 | The Great Gatsby      | F. Scott Fitzgerald |
|  5 | Moby Dick             | Herman Melville     |
+----+-----------------------+---------------------+
5 rows in set (0.000 sec)

Enable the Database Secrets Engine in Vault

root@06432e7f921c:~# vault secrets enable database
2024-06-03T18:17:42.947Z [INFO]  secrets.database.database_b9224b4f: initializing database rotation queue
2024-06-03T18:17:42.949Z [INFO]  core: successful mount: namespace="" path=database/ type=database version="v1.16.2+builtin.vault"
Success! Enabled the database secrets engine at: database/
root@06432e7f921c:~# 2024-06-03T18:17:42.965Z [INFO]  secrets.database.database_b9224b4f: populating role rotation queue
2024-06-03T18:17:42.965Z [INFO]  secrets.database.database_b9224b4f: starting periodic ticker

Create Role in Vault

root@06432e7f921c:~# vault write database/roles/my-role \
    db_name=book_info \
    creation_statements="CREATE USER '{{name}}'@'%' IDENTIFIED BY '{{password}}'; GRANT SELECT ON book_info.book_info TO '{{name}}'@'%';" \
    default_ttl="1h" \
    max_ttl="24h"
Success! Data written to: database/roles/my-role

Configure the Database Connection in Vault and Attach the Role

root@06432e7f921c:~# vault write database/config/book_info   plugin_name=mysql-database-plugin   connection_url="{{username}}:{{password}}@tcp(172.31.115.254:3306)/"   allowed_roles="my-role"   username="vault" password='<password>'
Success! Data written to: database/config/book_info
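
Optionally, ask Vault to rotate the vault user's password so that only Vault knows it from this point on; the database secrets engine exposes a rotate-root endpoint for this (a one-off sketch):

root@06432e7f921c:~# vault write -force database/rotate-root/book_info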

Generate and Read Dynamic Credentials

root@06432e7f921c:~# vault read database/creds/my-role
Key                Value
---                -----
lease_id           database/creds/my-role/qmJjwAzsLOuCitMCHT7G1cJW
lease_duration     1h
lease_renewable    true
password           jJiYrzYY8W5Rf-oRq7Bx
username           v-root-my-role-RVu8N2m0O61mxty3O
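
Applications typically fetch these credentials over Vault's HTTP API rather than the CLI; a sketch using the address and token from the steps above:

root@06432e7f921c:~# curl -H "X-Vault-Token: hvs.sxxxxxxxxxxxxxxxxxxxxxxxx" http://0.0.0.0:8200/v1/database/creds/my-role | jq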

Create a Policy File and Apply It

root@06432e7f921c:~# cat databasepolicy.hcl 
path "database/creds/book_info" {
  capabilities = ["read"]
}

root@06432e7f921c:~# vault policy write dbpolicy databasepolicy.hcl
Success! Uploaded policy: dbpolicy

root@06432e7f921c:~# vault token create -policy=dbpolicy
Key                  Value
---                  -----
token                hvs.CAESILo8RAJSNiiJAigxRihFXMc9huhYL1zYdkgBl-Pgqo7XGh4KHGh2cy51cHRGY0xhWTBkN2tEMkZ3Y2R5VWw3S0k
token_accessor       HOT77qOXRXUBM9PrLVAcmRCW
token_duration       768h
token_renewable      true
token_policies       ["dbpolicy" "default"]
identity_policies    []
policies             ["dbpolicy" "default"]

Access DB From Client

root@06432e7f923c:/home/cloud_user# mysql -u v-root-my-role-r4yGJElGGzndCcb6N -pmZSQLnhLpy4r-BWcn5-r -h 172.31.115.254
Welcome to the MariaDB monitor.  Commands end with ; or \g.
Your MariaDB connection id is 4
MariaDB [(none)]> 

We are able to log in to the database using Vault-generated credentials.

Strengthening User Authentication: A Guide to Implementing HashiCorp Vault To Authenticate Users On Your Website

In today's digital age, securing user authentication on websites is paramount. With cyber threats looming large, businesses need robust solutions to protect sensitive user data. HashiCorp Vault emerges as a potent tool in this realm, offering a secure and scalable platform for managing secrets and protecting cryptographic assets. In this blog, we'll delve into the significance of user authentication and explore how HashiCorp Vault can fortify your website's security infrastructure.

Understanding the Importance of User Authentication

User authentication serves as the first line of defense against unauthorized access to web resources. Whether it's accessing confidential information, making transactions, or simply logging into an account, users entrust websites with their credentials. Consequently, any compromise in authentication mechanisms can lead to severe repercussions, including data breaches, financial losses, and damage to reputation.

Introducing HashiCorp Vault

HashiCorp Vault stands out as a comprehensive solution for secret management and data protection. It offers a centralized repository for securely storing and accessing sensitive information such as API keys, passwords, and encryption keys. Vault employs robust encryption and access control mechanisms to safeguard secrets from unauthorized access, ensuring compliance with stringent security standards.

Implementing Authentication with Vault

Secure Secret Storage: Configure Vault to securely store user authentication secrets using its Key-Value and transit secrets engines. Employ encryption to safeguard data in transit and at rest.

Dynamic Secrets: Leverage Vault's dynamic secrets to generate short-lived, dynamically created credentials for user authentication. Implement RBAC to enforce granular access control.

Tokenization and Authentication Methods: Utilize Vault's authentication methods, like LDAP, OAuth, or JWT, to authenticate users against external identity providers. Integrate MFA for added security.

Auditing and Monitoring: Enable Vault's auditing and monitoring features to track authentication activities. Log authentication requests and access attempts for proactive threat detection.

High Availability and Disaster Recovery: Ensure Vault's high availability across multiple data centers or cloud regions. Implement automated backups and disaster recovery procedures for uninterrupted access.

In this blog, we'll explore the process of authenticating users from a website to HashiCorp Vault. Securing user authentication is crucial for safeguarding sensitive data, and HashiCorp Vault offers a robust solution for managing secrets securely. We'll delve into the significance of user authentication and how implementing HashiCorp Vault can bolster your website's security infrastructure.

Install Vault On Your Server

root@master:~# wget -O- https://apt.releases.hashicorp.com/gpg | sudo gpg --dearmor -o /usr/share/keyrings/hashicorp-archive-keyring.gpg
echo "deb [signed-by=/usr/share/keyrings/hashicorp-archive-keyring.gpg] https://apt.releases.hashicorp.com $(lsb_release -cs) main" | sudo tee /etc/apt/sources.list.d/hashicorp.list
sudo apt update && sudo apt install vault
--2024-04-19 12:52:47--  https://apt.releases.hashicorp.com/gpg
Resolving apt.releases.hashicorp.com (apt.releases.hashicorp.com)... 18.164.144.19, 18.164.144.67, 18.164.144.105, ...
Connecting to apt.releases.hashicorp.com (apt.releases.hashicorp.com)|18.164.144.19|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 3980 (3.9K) [binary/octet-stream]
Saving to: ‘STDOUT’

-                                                           100%[========================================================================================================================================>]   3.89K

2024-04-19 12:52:52 (1.54 MB/s) - written to stdout [3980/3980]

deb [signed-by=/usr/share/keyrings/hashicorp-archive-keyring.gpg] https://apt.releases.hashicorp.com mantic main
Get:1 https://download.docker.com/linux/ubuntu mantic InRelease [48.8 kB]
Get:2 https://apt.releases.hashicorp.com mantic InRelease [12.9 kB]
Ign:4 https://aquasecurity.github.io/trivy-repo/deb mantic InRelease
Get:5 https://download.docker.com/linux/ubuntu mantic/stable amd64 Packages [11.5 kB]

Start Vault Server

root@master:~# vault server -dev -dev-listen-address=0.0.0.0:8200 &
[1] 7382
root@master:~# ==> Vault server configuration:

             Api Address: http://0.0.0.0:8200
                     Cgo: disabled
         Cluster Address: https://0.0.0.0:8201
              Listener 1: tcp (addr: "0.0.0.0:8200", cluster address: "0.0.0.0:8201", max_request_duration: "1m30s", max_request_size: "33554432", tls: "disabled")
               Log Level: info
                   Mlock: supported: true, enabled: false
                 Storage: inmem
                 Version: Vault v1.2.3

WARNING! dev mode is enabled! In this mode, Vault runs entirely in-memory
and starts unsealed with a single unseal key. The root token is already
authenticated to the CLI, so you can immediately begin using Vault.

You may need to set the following environment variable:

    $ export VAULT_ADDR='http://0.0.0.0:8200'

The unseal key and root token are displayed below in case you want to
seal/unseal the Vault or re-authenticate.

Unseal Key: xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
Root Token: xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx

Development mode should NOT be used in production installations!

Login to Vault

root@master:~# export VAULT_ADDR='http://0.0.0.0:8200'
root@master:~# vault login
Token (will be hidden):
WARNING! The VAULT_TOKEN environment variable is set! This takes precedence
over the value set by this command. To use the value set by this command,
unset the VAULT_TOKEN environment variable or set it to the token displayed
below.

Success! You are now authenticated. The token information displayed below
is already stored in the token helper. You do NOT need to run "vault login"
again. Future Vault requests will automatically use this token.

Key                  Value
---                  -----
token                xxxxxxxxxxxxxxxxxxxxxxxxx
token_accessor       jXl034q6KU0r4fuWQR32MEcu
token_duration       ∞
token_renewable      false
token_policies       ["root"]
identity_policies    []
policies             ["root"]

Create Secret

root@master:~# vault secrets enable -path=web-auth kv
2024-04-19T14:17:44.069+0530 [INFO]  core: successful mount: namespace= path=web-auth/ type=kv
Success! Enabled the kv secrets engine at: web-auth/

Create a Password Hash for the User

Hash the password with SHA-256 (in this demo the password is the same string as the email address; echo appends a trailing newline, which the application accounts for):

root@master:~# echo "amit@test.com" | sha256sum -

Push the User and Credential Hash to Vault and Verify

root@master:~# curl -H "X-Vault-Token: xxxxxxxxxxxxxxx" --request POST -d '{"amit@test.com":"f9ac24d8c6d410619017ccca98ca4544b05a74ef48c29335a958ad25dd642ad6"}' http://192.168.0.114:8200/v1/web-auth/creds

root@master:~# curl -H "X-Vault-Token: xxxxxxxxxxxxxxxxxxx" http://192.168.0.114:8200/v1/web-auth/creds | jq
{
  "request_id": "ff9d26fc-6abd-b585-b908-2535c4525c1c",
  "lease_id": "",
  "renewable": false,
  "lease_duration": 2764800,
  "data": {
    "amit@test.com": "f9ac24d8c6d410619017ccca98ca4544b05a74ef48c29335a958ad25dd642ad6"
  },
  "wrap_info": null,
  "warnings": null,
  "auth": null
}

Create and Apply Policy

root@master:~# cat policy.hcl
path "web-auth/creds" {
   capabilities = ["read"]
}

root@master:~# vault policy write web-policy policy.hcl
Success! Uploaded policy: web-policy

Create a Token Against the Policy and Use It to Access Credentials

root@master:~# vault token create -policy="web-policy" -format=json | jq -r ".auth.client_token" > read_token
root@master:~# cat read_token
s.QGR5JsrF7toZcOGj4rKClcvT

root@master:~# curl -H "X-Vault-Token: s.QGR5JsrF7toZcOGj4rKClcvT" http://192.168.0.114:8200/v1/web-auth/creds | jq
{
  "request_id": "4761279f-d3e8-3451-64d3-b17e5b5b8be7",
  "lease_id": "",
  "renewable": false,
  "lease_duration": 2764800,
  "data": {
    "amit@test.com": "f9ac24d8c6d410619017ccca98ca4544b05a74ef48c29335a958ad25dd642ad6"
  },
  "wrap_info": null,
  "warnings": null,
  "auth": null
}

Now Let's Create a Sample Web Application Using Flask

root@master:~# pip install flask
root@master:~# mkdir -p website/templates;cd website
root@master:~/website# cat view.py
from flask import Flask, render_template, request, jsonify
import hashlib
import requests
import json

app = Flask(__name__)

def get_hashed_vault_creds():
    url = "http://0.0.0.0:8200/v1/web-auth/creds"
    headers = { 'X-Vault-Token' : 's.QGR5JsrF7toZcOGj4rKClcvT' }

    response = requests.get(url, headers=headers)
    my_json = response.json()
    creds = []
    for key, value in my_json['data'].items():
        creds.append(key)
        creds.append(value)
    return creds

def acg_login_view(email, password):
    print("login view called")
    credsHash = hashlib.sha256(password.encode()+ b'\n').hexdigest()
    vault_hash = get_hashed_vault_creds()[1]
    if vault_hash == credsHash:
        return True
    else:
        return False

@app.route('/', methods=['GET', 'POST'])
def login():
    if request.method == 'POST':
        email = request.form['email']
        password = request.form['password']
        if acg_login_view(email, password):
            return jsonify({'message': 'Login successful'}), 200
        else:
            return jsonify({'message': 'Login failed'}), 401

    return render_template('login.html')

if __name__ == '__main__':
    app.run(debug=True)


Create Html Page

root@master:~/website# cat templates/login.html
<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="UTF-8">
    <meta name="viewport" content="width=device-width, initial-scale=1.0">
    <title>Login</title>
    <style>
        body {
            font-family: Arial, sans-serif;
            background-color: #f7f7f7;
            margin: 0;
            padding: 0;
            display: flex;
            justify-content: center;
            align-items: center;
            height: 100vh;
        }

        .login-container {
            background-color: #fff;
            padding: 20px;
            border-radius: 10px;
            box-shadow: 0 0 10px rgba(0, 0, 0, 0.1);
            width: 300px;
        }

        .login-container h2 {
            margin-top: 0;
            text-align: center;
            color: #333;
        }

        .login-form label {
            font-weight: bold;
            color: #555;
        }

        .login-form input[type="email"],
        .login-form input[type="password"] {
            width: 100%;
            padding: 10px;
            margin-bottom: 15px;
            border: 1px solid #ccc;
            border-radius: 5px;
            box-sizing: border-box;
            font-size: 16px;
        }

        .login-form input[type="submit"] {
            width: 100%;
            padding: 10px;
            border: none;
            border-radius: 5px;
            background-color: #007bff;
            color: #fff;
            cursor: pointer;
            font-size: 16px;
            transition: background-color 0.3s ease;
        }

        .login-form input[type="submit"]:hover {
            background-color: #0056b3;
        }
    </style>
</head>
<body>
    <div class="login-container">
        <h2>Login</h2>
        <form class="login-form" method="POST">
            <label for="email">Email:</label><br>
            <input type="email" id="email" name="email" required><br><br>
            <label for="password">Password:</label><br>
            <input type="password" id="password" name="password" required><br><br>
            <input type="submit" value="Login">
        </form>
    </div>
</body>
</html>

Start the Server

root@master:~/website# flask run --host=0.0.0.0 --port=5000
 * Serving Flask app 'view.py'
 * Debug mode: off
WARNING: This is a development server. Do not use it in a production deployment. Use a production WSGI server instead.
 * Running on all addresses (0.0.0.0)
 * Running on http://127.0.0.1:5000
 * Running on http://192.168.0.114:5000
Press CTRL+C to quit
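
With the server running, the login route can also be exercised directly with curl; a quick check (the demo password is the same string that was hashed into Vault earlier):

root@master:~# curl -X POST -d "email=amit@test.com" -d "password=amit@test.com" http://192.168.0.114:5000/

A wrong password returns the "Login failed" message with HTTP status 401.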

Access the Page

Open http://192.168.0.114:5000 in a browser to see the login form.

Let's try to log in with the wrong password: the application responds with "Login failed".

Now let's try with the correct password: the application responds with "Login successful".

Conclusion

The steps above show one way to integrate Vault securely into a web service for authentication, ensuring that sensitive credentials are protected and authentication is performed reliably.

Streamlining Kubernetes Configuration: A GitLab CI/CD Guide with Config-lint Validation

Introduction

Config-lint is a powerful command-line tool designed to streamline the validation of Kubernetes configuration files. By leveraging rules specified in YAML, Config-lint ensures adherence to best practices, security standards, and custom policies. This tool is indispensable for integrating into Continuous Integration and Continuous Deployment (CI/CD) pipelines, allowing seamless validation of configuration changes before deployment.

Integrating Config-lint into CI/CD Pipelines

One of the key benefits of Config-lint is its seamless integration into CI/CD pipelines. By incorporating Config-lint as a step in your pipeline, you can automatically validate Kubernetes configuration files before deployment. This ensures that only compliant configurations are promoted to production environments, reducing the risk of misconfigurations and potential downtime.

Custom Rules with YAML

Config-lint allows users to define custom rules using YAML configuration files. This flexibility enables organizations to enforce specific standards and policies tailored to their environment. Whether it's enforcing naming conventions, resource limits, or security policies, Config-lint's YAML-based rules empower teams to maintain consistency and compliance across Kubernetes configurations.

Validating Helm Charts

In addition to standalone configuration files, Config-lint can also validate Helm charts. Helm is a popular package manager for Kubernetes, and ensuring the integrity of Helm charts is crucial for smooth deployments. With Config-lint, teams can validate Helm charts against predefined rules, ensuring that charts adhere to best practices and organizational standards.

Config-lint simplifies Kubernetes configuration validation by providing a flexible and intuitive toolset. By integrating Config-lint into CI/CD pipelines and leveraging custom YAML rules, organizations can ensure the reliability, security, and compliance of their Kubernetes deployments. With support for Helm charts validation, Config-lint offers a comprehensive solution for maintaining consistency and best practices across Kubernetes environments. Start using Config-lint today to streamline your Kubernetes configuration validation process and elevate your CI/CD workflows to the next level of efficiency and reliability.

Integrating Config-lint into GitLab CI/CD

1) Dockerfile for creating the image used in the pipeline

root@master:~# cat Dockerfile
FROM ubuntu:latest
MAINTAINER omvedi25@gmail.com
ADD config-lint /usr/local/bin/
ADD helm /usr/local/bin/

2) Build the image and push it to the registry

root@master:~# docker build -t omvedi25/config-lint:v1.1 .

root@master:~# docker push omvedi25/config-lint:v1.1

3) Create a gitlab-ci.yaml pipeline (the pipeline definition is shown in step 6)

4) Create a project for the Helm chart

5) Let's create a rules.yaml file containing the rules that will be validated before the chart is pushed to ChartMuseum.

version: 1
description: Rules for Kubernetes spec files
type: Kubernetes
files:
  - "*.yaml"
rules:
  - id: POD_RESOURCE_REQUESTS_LIMITS
    severity: FAILURE
    message: Containers in Pod must specify both resource requests and limits
    resource: Pod
    assertions:
      - key: spec.containers[*].resources.requests
        op: notPresent
      - key: spec.containers[*].resources.limits
        op: notPresent
    match: any
    tags:
      - pod

  - id: DEPLOYMENT_RESOURCE_REQUESTS_LIMITS
    severity: FAILURE
    message: Containers in Deployment must specify both resource requests and limits
    resource: Deployment
    assertions:
      - key: spec.template.spec.containers[*].resources.requests
        op: notPresent
      - key: spec.template.spec.containers[*].resources.limits
        op: notPresent
    match: all
    tags:
      - deployment

The rules above check whether the Pods and Deployments in the chart specify resource requests and limits.
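
To try the rules locally before wiring them into the pipeline, the chart can be rendered with helm template and fed to config-lint; a sketch, assuming the chart lives in ./mychart (see the config-lint README for the exact flags):

# helm template mychart ./mychart > rendered.yaml
# config-lint -rules rules.yaml rendered.yaml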

6) Create the gitlab-ci.yaml to run validation on the charts

---
include:
  - project: 'guilds1/cloud-native-guild/helm/tooling/helm-pipelines'
    file: '/.config-lint.yaml'
    ref: main
  - project: 'guilds1/cloud-native-guild/helm/tooling/helm-pipelines'
    file: '/.helm.yaml'
  - project: 'guilds1/cloud-native-guild/helm/tooling/helm-pipelines'
    file: '/.kind.yaml'
    ref: main

variables:
  CHART: ${CI_PROJECT_NAME}
  IMAGE_HELM_CHART_LINT: "quay.io/helmpack/chart-testing:v3.3.1"
  IMAGE_TEST_DOCS: "renaultdigital/helm-docs:v1.5.0"

stages:
  - pretest
  - validation
  - lint
  - test
  - build
  - make_release
  - publish
  - integration

7) Let's run the pipeline and validate the rules.

We can see the rules are working as expected. We can write custom rules as required to validate that charts include the mandatory options.

Other Examples

# wget https://raw.githubusercontent.com/stelligent/config-lint/master/example-files/rules/kubernetes.yml

# cat kubernetes.yml

ChartMuseum: Simplifying Helm Chart Management

ChartMuseum is an open-source Helm chart repository written in Go, with support for cloud storage backends, including Google Cloud Storage, Amazon S3, Microsoft Azure Blob Storage, Alibaba Cloud OSS Storage, OpenStack Object Storage, Oracle Cloud Infrastructure Object Storage, Baidu Cloud BOS Storage, Tencent Cloud Object Storage, DigitalOcean Spaces, MinIO, and etcd.

Prerequisites

  • Helm v3.0.0+
  • A persistent storage resource and RW access to it
  • Kubernetes StorageClass for dynamic provisioning

Verify Helm Version

root@master:~# helm version
version.BuildInfo{Version:"v3.13.1", GitCommit:"3547a4b5bf5edb5478ce352e18858d8a552a4110", GitTreeState:"clean", GoVersion:"go1.20.8"}

Verify the Default Storage

root@master:~# kubectl get sc
NAME                            PROVISIONER        RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
managed-nfs-storage (default)   arobyte.tech/nfs   Delete          Immediate           false                  118m

Using Local Storage to configure Chartmuseum

root@master:~# cat values.yaml
env:
  open:
    STORAGE: local
persistence:
  enabled: true
  accessMode: ReadWriteOnce
  size: 8Gi
  storageClass: "managed-nfs-storage"

Installation

  • Add repository
root@master:~# helm repo add chartmuseum https://chartmuseum.github.io/charts
  • Install chart (Helm v3)
root@master:~# kubectl create ns chartmuseum
root@master:~# helm install -f values.yaml chartmuseum chartmuseum/chartmuseum  --set env.open.DISABLE_API=false -n chartmuseum

Edit the svc to convert the type to LoadBalancer

root@master:~# kubectl edit svc chartmuseum  -n chartmuseum

    app.kubernetes.io/instance: chartmuseum
    app.kubernetes.io/name: chartmuseum
  sessionAffinity: None
  type: ClusterIP ---> LoadBalancer

Verify The Chartmuseum is installed successfully

root@master:~# kubectl get pods,svc -n chartmuseum
NAME                               READY   STATUS    RESTARTS   AGE
pod/chartmuseum-6f5dcc8787-7r9lm   1/1     Running   0          54m

NAME                  TYPE           CLUSTER-IP     EXTERNAL-IP     PORT(S)          AGE
service/chartmuseum   LoadBalancer   10.109.118.6   172.16.16.151   8080:32563/TCP   54m

I am using HAProxy to route traffic to my cluster:

root@master:~# cat /etc/haproxy/haproxy.cfg
frontend chartmuseum
    bind 192.168.0.114:6201
    mode tcp
    default_backend chartmuseum

backend chartmuseum
    mode tcp
    balance roundrobin
    server backend_server 172.16.16.151:8080 check

root@master:~# systemctl restart haproxy

Open the ChartMuseum URL in a browser and verify that it is reachable.

Install the helm cm-push plugin to push charts to ChartMuseum, then package and push a sample chart.
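
The push command comes from the chartmuseum/helm-push plugin; if it is not installed yet, it can be added with:

root@master:~# helm plugin install https://github.com/chartmuseum/helm-push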

root@master:~# helm create mychart
root@master:~# cd mychart/
root@master:~/mychart# helm package .
Successfully packaged chart and saved it to: /root/mychart/mychart-0.1.0.tgz

root@master:~/mychart# helm cm-push mychart-0.1.0.tgz http://chartmuseum.arobyte.tech:6201
"chartmuseum-1" has been added to your repositories

Now access the browser to verify that the chart is listed.

Add the repo locally and search for it

root@master:~# helm repo add chartmuseum-local http://chartmuseum.arobyte.tech:6201
"chartmuseum-local" has been added to your repositories
root@master:~# helm search repo chartmuseum-local
NAME                    CHART VERSION   APP VERSION     DESCRIPTION
chartmuseum-local/mychart   0.1.0           1.16.0          A Helm chart for Kubernetes

root@master:~# helm pull chartmuseum-local/mychart
root@master:~# ls -l | grep mychart
-rw-r--r--  1 root root     3957 Feb 29 11:59 mychart-0.1.0.tgz

We can see that we are able to push charts to and pull charts from our local ChartMuseum repository.

Understanding OpenELB: An Open Source Load Balancer for BareMetal Kubernetes

Introduction

OpenELB is an exciting addition to the Kubernetes ecosystem, offering a powerful open-source load balancer solution for managing traffic within Kubernetes clusters. In this blog post, we'll dive into the world of OpenELB, exploring its features, architecture, and how it can benefit your Kubernetes deployments.

What is OpenELB?

- OpenELB, short for Open Elastic Load Balancer, is a cloud-native, software-defined load balancer designed specifically for Kubernetes environments. It provides advanced load balancing capabilities to distribute incoming traffic across your Kubernetes pods, ensuring high availability, scalability, and reliability for your applications.

- In cloud-based Kubernetes clusters, Services are usually exposed by using load balancers provided by cloud vendors. However, cloud-based load balancers are unavailable in bare-metal environments. OpenELB allows users to create LoadBalancer Services in bare-metal, edge, and virtualization environments for external access, and provides the same user experience as cloud-based load balancers.

Core Features

- ECMP routing and load balancing

- BGP mode and Layer 2 mode

- IP address pool management

- BGP configuration using CRDs

Key Features of OpenELB:

Dynamic Load Balancing: OpenELB dynamically routes traffic to healthy pods based on predefined rules and policies, ensuring efficient utilization of resources.

Scalability: With support for horizontal scaling, OpenELB can handle increased traffic loads by automatically adding or removing load balancer instances as needed.

High Availability: OpenELB ensures high availability by automatically detecting and redirecting traffic away from failed or unhealthy pods, minimizing downtime.

Layer 4 and Layer 7 Load Balancing: OpenELB supports both Layer 4 (TCP/UDP) and Layer 7 (HTTP/HTTPS) load balancing, allowing you to optimize traffic routing based on application requirements.

Integration with Kubernetes: OpenELB seamlessly integrates with Kubernetes, leveraging Kubernetes services and resources for easy configuration and management.

Customization and Extensibility: OpenELB provides a flexible architecture that allows for customization and extension through plugins and custom configurations.

Architecture of OpenELB

OpenELB follows a modular architecture, consisting of the following components:

Controller: - The Controller serves as the brain of the OpenELB system. It manages the configuration, monitoring, and orchestration of load balancer instances.

- It interacts with the Kubernetes API server to gather information about services, endpoints, and pods within the cluster.

- The Controller continuously monitors the health and availability of backend pods and dynamically updates the configuration of load balancer instances based on changes in the cluster.

Load Balancer Instances: - Load Balancer Instances are responsible for receiving incoming traffic and distributing it to backend pods.

- They can be deployed as independent instances or as part of a pool of load balancer nodes, depending on the scale and traffic requirements of the cluster.

- Each load balancer instance runs the necessary software components to perform traffic routing and load balancing, such as a reverse proxy or load balancing algorithm.

Service Discovery: - OpenELB integrates with Kubernetes service discovery mechanisms to dynamically discover backend pods and their associated endpoints.

- It leverages Kubernetes Services to expose applications running in the cluster, allowing them to be accessed through a stable DNS name or IP address.

- Service Discovery ensures that OpenELB always has up-to-date information about the available backend pods and their network addresses.

Health Checking:

- Health Checking is an essential component of OpenELB responsible for monitoring the health and responsiveness of backend pods.

- It periodically sends health checks to backend pods to verify their availability and performance.

- If a backend pod becomes unhealthy or unresponsive, Health Checking notifies the Controller, which takes appropriate action to reroute traffic away from the problematic pod.

Data Store (Optional): - In some deployments, OpenELB may utilize a data store to store configuration information, session state, or other metadata.

- Common data store choices include key-value stores like etcd or distributed databases like Cassandra.

- The Data Store provides a centralized repository for storing and retrieving information critical to the operation of OpenELB.

Install OpenELB on Kubernetes

Prerequisites

You need to prepare a Kubernetes cluster and ensure that the Kubernetes version is 1.15 or later.

OpenELB is designed to be used in bare-metal Kubernetes environments. However, you can also use a cloud-based Kubernetes cluster for learning and testing.

My Home Cluster

root@kmaster:~# kubectl get nodes -o wide
NAME      STATUS   ROLES           AGE   VERSION   INTERNAL-IP     EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION      CONTAINER-RUNTIME
kmaster   Ready    control-plane   95m   v1.26.0   172.16.16.100       <none>        Ubuntu 22.04.1 LTS   5.15.0-58-generic   containerd://1.6.28
worker1   Ready    <none>          75m   v1.26.0   172.16.16.101   <none>        Ubuntu 22.04.1 LTS   5.15.0-58-generic   containerd://1.6.28
worker2   Ready    <none>          75m   v1.26.0   172.16.16.102   <none>        Ubuntu 22.04.1 LTS   5.15.0-58-generic   containerd://1.6.28

Install OpenELB Using Helm

Log in to the Kubernetes cluster over SSH and run the following commands

root@master:~# helm repo add kubesphere-stable https://charts.kubesphere.io/stable
"kubesphere-stable" has been added to your repositories

root@master:~# helm repo update

root@master:~# kubectl create ns openelb-system
namespace/openelb-system created

root@master:~# helm install openelb kubesphere-stable/openelb -n openelb-system
NAME: openelb
LAST DEPLOYED: Mon Feb 26 10:15:28 2024
NAMESPACE: openelb-system
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
The OpenELB has been installed.

Run the following command to check whether the status of openelb-manager is READY: 1/1 and STATUS: Running. If yes, OpenELB has been installed successfully.

root@master:~# kubectl get po -n openelb-system
NAME                              READY   STATUS      RESTARTS   AGE
openelb-admission-create-b9h89    0/1     Completed   0          113s
openelb-admission-patch-d77jx     0/1     Completed   0          113s
openelb-keepalive-vip-5k5pp       1/1     Running     0          75s
openelb-keepalive-vip-6p6zq       1/1     Running     0          75s
openelb-manager-8c8b9b89c-kdn2l   1/1     Running     0          113s

root@kmaster:~# kubectl get validatingwebhookconfiguration
NAME                WEBHOOKS   AGE
openelb-admission   1          7m47s

root@kmaster:~# kubectl get mutatingwebhookconfigurations
NAME                WEBHOOKS   AGE
openelb-admission   1          7m55s

root@kmaster:~# kubectl get crd |grep kubesphere
bgpconfs.network.kubesphere.io                        2024-02-26T07:02:48Z
bgppeers.network.kubesphere.io                        2024-02-26T07:02:48Z
eips.network.kubesphere.io                            2024-02-26T07:02:48Z

Configuration

We have different use cases to configure OpenELB:

- Configure IP Address Pools Using Eip

- Configure OpenELB in BGP Mode

- Configure OpenELB for Multi-Router Clusters

- Configure Multiple OpenELB Replicas

I am using Eip to configure OpenELB on the cluster.

Configure IP Address Pools Using Eip

Currently, OpenELB supports only IPv4 and will soon support IPv6.

For Layer 2 mode, kube-proxy must have strictARP enabled. Edit the kube-proxy ConfigMap and restart its DaemonSet:

root@master:~# kubectl edit configmap kube-proxy -n kube-system
......
ipvs:
  strictARP: true
......

root@master:~# kubectl rollout restart daemonset kube-proxy -n kube-system
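
You can quickly confirm that the change took effect before continuing; the grep should return strictARP: true:

root@master:~# kubectl get configmap kube-proxy -n kube-system -o yaml | grep strictARP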

root@master:~# cat eip.yaml
apiVersion: network.kubesphere.io/v1alpha2
kind: Eip
metadata:
  name: eip-pool
spec:
  address: 172.16.16.190-172.16.16.240
  protocol: layer2
  disable: false
  interface: eth0

root@kmaster:~# kubectl apply -f eip.yaml
eip.network.kubesphere.io/eip-pool configured

Verify the Pool status

root@kmaster:~# kubectl get eip
NAME       CIDR                          USAGE   TOTAL
eip-pool   172.16.16.190-172.16.16.240   1       51

root@kmaster:~# kubectl get eip eip-pool -oyaml
apiVersion: network.kubesphere.io/v1alpha2
kind: Eip
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"network.kubesphere.io/v1alpha2","kind":"Eip","metadata":{"annotations":{},"name":"eip-pool"},"spec":{"address":"172.16.16.190-172.16.16.240","disable":false,"interface":"eth0","protocol":"layer2"}}
  creationTimestamp: "2024-02-26T07:05:46Z"
  finalizers:
  - finalizer.ipam.kubesphere.io/v1alpha1
  generation: 3
  name: eip-pool
  resourceVersion: "4809"
  uid: d2b70a8a-04ae-4b1c-b8bd-d317d9ea67f1
spec:
  address: 172.16.16.190-172.16.16.240
  disable: false
  interface: eth0
  protocol: layer2
status:
  firstIP: 172.16.16.190
  lastIP: 172.16.16.240
  poolSize: 51
  ready: true
  v4: true

Let's deploy an application and expose it via a LoadBalancer Service

root@kmaster:~# cat nginx-deploy.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:latest
          ports:
            - containerPort: 80

root@kmaster:~# cat svc.yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx
  annotations:
    lb.kubesphere.io/v1alpha1: openelb  #to specify that the Service uses OpenELB
    protocol.openelb.kubesphere.io/v1alpha1: layer2 #specifies that OpenELB is used in Layer2 mode
    eip.openelb.kubesphere.io/v1alpha2: eip-pool #specifies the Eip object used by OpenELB
spec:
  selector:
    app: nginx
  type: LoadBalancer
  ports:
    - name: http
      port: 80
      targetPort: 80

root@kmaster:~# kubectl apply -f nginx-deploy.yaml

root@kmaster:~# kubectl apply -f svc.yaml

root@kmaster:~# kubectl get po,svc
NAME                         READY   STATUS    RESTARTS   AGE
pod/nginx-7f456874f4-knvht   1/1     Running   0          16m

NAME                 TYPE           CLUSTER-IP       EXTERNAL-IP     PORT(S)        AGE
service/nginx        LoadBalancer   10.104.174.233   172.16.16.190   80:31282/TCP   15m

Verify that the application is accessible on the assigned LoadBalancer IP:

root@kmaster:~# curl 172.16.16.190
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
html { color-scheme: light dark; }
body { width: 35em; margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif; }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>

OpenELB vs MetalLB

MetalLB and OpenELB are both solutions for providing load balancing functionality in Kubernetes environments, but they have different approaches and features. Let's compare them based on various aspects:

Architecture:

MetalLB: MetalLB is a Layer 2 and Layer 3 load balancer implementation for Kubernetes clusters. It uses standard network protocols (ARP in Layer 2 mode, BGP in Layer 3 mode) to direct traffic to services within the cluster. MetalLB operates as a control plane and a set of data plane components that interact with the network infrastructure to manage IP address allocation and load balancing.

OpenELB: OpenELB, or Open Elastic Load Balancer, is a software-defined load balancer designed specifically for Kubernetes environments. It operates as a Kubernetes-native load balancing solution, leveraging Custom Resource Definitions (CRDs) and controllers to manage load balancer resources within the cluster. OpenELB provides advanced load balancing features such as Layer 4 and Layer 7 load balancing, service discovery, and dynamic configuration updates.

Installation and Configuration:

MetalLB: MetalLB can be installed as a Kubernetes controller and speaker components using YAML manifests or Helm charts. It requires configuring address pools and routing protocols to allocate IP addresses and route traffic effectively.

OpenELB: OpenELB can be installed as a Kubernetes controller and speaker components using YAML manifests or Helm charts as well. It allows for configuring IP address pools and load balancing policies using Custom Resource Definitions (CRDs) and YAML configurations.

Features:

MetalLB: MetalLB primarily focuses on providing simple, network-level load balancing capabilities for Kubernetes services. It supports both Layer 2 and Layer 3 load balancing modes and allows for the allocation of IP addresses from specific address pools.

OpenELB: OpenELB offers a more comprehensive set of load balancing features tailored specifically for Kubernetes environments. It supports Layer 4 and Layer 7 load balancing, service discovery, dynamic configuration updates, integration with Kubernetes network policies, and more advanced traffic management capabilities.

Community and Adoption:

MetalLB: MetalLB has gained significant adoption within the Kubernetes community and is widely used for providing load balancing functionality in on-premises and bare-metal Kubernetes deployments.

OpenELB: OpenELB is a newer project compared to MetalLB but has been gaining traction as a Kubernetes-native load balancing solution. It is supported by KubeSphere, an open-source Kubernetes platform, and has been integrated into various Kubernetes distributions and platforms.

Flexibility and Customization:

MetalLB: MetalLB provides basic load balancing functionality with support for customizing IP address allocation and routing configurations. It is suitable for simple use cases and environments where network infrastructure integration is straightforward.

OpenELB: OpenELB offers more flexibility and customization options, allowing users to define complex load balancing policies, integrate with Kubernetes network policies, and implement advanced traffic management features.

In summary, MetalLB and OpenELB are both effective solutions for providing load balancing in Kubernetes environments, but they cater to different use cases and offer varying levels of features and functionality. The choice between MetalLB and OpenELB depends on your specific requirements, preferences, and the complexity of your Kubernetes deployment.