Automation Using GitLab Issue Templates with approval process


Introduction

GitLab Issue Templates allow teams to standardize and automate issue creation, ensuring that all necessary information is collected upfront and that processes are consistently followed. Automating issue creation using these templates can lead to better collaboration, streamlined workflows, and faster resolution of issues.

In this post, we will automate two operations through issue templates: 1) Kubernetes Pod Restart 2) Kubernetes Deployment Image Patch

What Are GitLab Issue Templates?

GitLab issue templates are pre-defined formats for issues that provide a structured way to report bugs, suggest features, or handle tasks. They help ensure that team members include all relevant information when creating issues, reducing miscommunication and the need for follow-up.

Why Use GitLab Issue Templates?

Consistency: Ensure that all issues follow a uniform structure.

Speed: Quickly generate issues by filling in pre-set fields.

Clarity: Include necessary details, such as steps to reproduce bugs, acceptance criteria, and links to relevant documentation.

Automation: Automate the issue creation process by embedding workflows into the template.

Automating Template to Restart Kubernetes Pod and Patch Deployment

Setting Up a Template in Your Repository

Issue templates are stored in your repository and can be reused across multiple issues. To create a template, follow these steps:

Navigate to your GitLab repository. Go to the `.gitlab/issue_templates` folder in the root directory. If the folder doesn't exist, create it and add a new Markdown file (.md) for each issue template. For example, you can name it

k8s_deployment_patch.md

### Environment
environment=

### Namespace
namespace=

### Deployment Name
deployment=

### Container Name
container_name=

### Image Name
image=

### Justification

/label ~"type::issue" ~"action::deployment-patch"

k8s_deployments_restart.md

### Environment
environment=

### Namespace
namespace=

### Deployment Name
deployment=

### Justification

/label ~"type::issue" ~"action::pod-restart"

Merge the Code And Verify

Project --> Issues --> Create New Issue


Let's Create the Automation Code (Using Python)

1) Code to List and Process Created Issues (issuelist.py)

import gitlab
import re
import os
import subprocess
from common import check_namespace_exists, check_deployment_exists, close_issue, approval_verify,extract_info,approver_list



def init_gitlab_connection(private_token):
    return gitlab.Gitlab('https://gitlab.com', private_token=private_token)

def process_issues(issues, project):
    for issue in issues:
        labels = set(issue.labels)
        comment = f"🤖 This is an automated note. Waiting for approval from @{', '.join(approver_list)}"
        notes = issue.notes.list(get_all=True)
        if not notes:  
          issue.notes.create({'body': comment})
        if 'action::pod-restart' in labels:
            namespace, deployment, env, img, cn = extract_info(issue.description)   
            if approval_verify(issue):
                result = subprocess.run(['python', 'deployment_restart.py', namespace, deployment, env], capture_output=True, text=True)
                res = result.returncode
                if res == 0:
                    comment = f"Deployment '{deployment}' in Namespace '{namespace}' has been restarted 🚀"
                else:
                    comment = f"Deployment '{deployment}' or Namespace '{namespace}' not found 😔"
                close_issue(issue, comment, res)

        elif 'action::deployment-patch' in labels:
            namespace, deployment, env, img, cn = extract_info(issue.description)
            if approval_verify(issue):
                result = subprocess.run(['python', 'deployment_patch.py', namespace, deployment, env, img, cn], capture_output=True, text=True)
                res = result.returncode
                if res == 0:
                    comment = f"Deployment '{deployment}' in Namespace '{namespace}' patched successfully with {img} 🚀"
                else:
                    comment = f"Deployment '{deployment}' or Namespace '{namespace}' not found 😔"
                close_issue(issue, comment, res)


def main():
    private_token = os.getenv('gitlab_token')
    gitlab_project = os.getenv('gitlab_project')
    gl = init_gitlab_connection(private_token)
    project = gl.projects.get(gitlab_project)
    issues = project.issues.list(state='opened')
    if not issues:
        print("No issues found.")
    else:
        process_issues(issues, project)

if __name__ == '__main__':
    main()


2) Code for the Common Functions (common.py)

from kubernetes import client, config
import argparse
import datetime
import sys
import re
approver_list = ['champ25']

config.load_kube_config()

def close_issue(issue,comment,res):
    notes = issue.notes.list(get_all=True)
    issue.notes.create({'body': comment})
    if res == 0:
       issue.state_event = 'close'
       issue.save()

def approval_verify(issue):
    notes = issue.notes.list(get_all=True)
    for note in notes:
        if 'approved' in note.body.lower() and note.author['username'] in approver_list:
            new_comment = f"🤖 Approval has been noted by @{note.author.get('username')}. 🤖"
            issue.notes.create({'body': new_comment})
            return True
    return False

def extract_info(description):
    namespace_pattern = re.compile(r'namespace=\s*(\S+)')
    deployment_pattern = re.compile(r'deployment=\s*(\S+)')
    env_pattern = re.compile(r'environment=\s*(\S+)')
    cn_pattern = re.compile(r'container_name=\s*(\S+)')
    img_pattern = re.compile(r'image=\s*(\S+)')

    namespace_match = namespace_pattern.search(description)
    deployment_match = deployment_pattern.search(description)
    env_match = env_pattern.search(description)
    cn_match = cn_pattern.search(description)
    img_match = img_pattern.search(description)

    namespace = namespace_match.group(1) if namespace_match else None
    deployment = deployment_match.group(1) if deployment_match else None
    env = env_match.group(1) if env_match else None
    cn = cn_match.group(1) if cn_match else None
    img = img_match.group(1) if img_match else None
    return namespace, deployment, env, img, cn

def check_namespace_exists(namespace_name):
    print(namespace_name)
    v1 = client.CoreV1Api()
    try:
        v1.read_namespace(name=namespace_name)
        return True
    except client.exceptions.ApiException as e:
        if e.status == 404:
            print(f"Namespace '{namespace_name}' does not exist.")
            sys.exit(1)
        else:
            print(f"Error occurred: {e}")
            sys.exit(1)
        return False

def check_deployment_exists(deployment_name, namespace_name):
    apps_v1 = client.AppsV1Api()
    try:
        apps_v1.read_namespaced_deployment(name=deployment_name, namespace=namespace_name)
        return True
    except client.exceptions.ApiException as e:
        if e.status == 404:
            print(f"Deployment '{deployment_name}' does not exist in namespace '{namespace_name}'.")
            sys.exit(1) 
        else:
            print(f"Error occurred: {e}")
            sys.exit(1) 
        return False

3) Code for Restarting Pods (deployment_restart.py)

from kubernetes import client, config
import argparse
import datetime
import sys
from common import check_namespace_exists, check_deployment_exists, close_issue, approval_verify,extract_info
def restart_deployment(namespace, deployment_name, env):
    if check_namespace_exists(namespace):
        if check_deployment_exists(deployment_name, namespace):
            api_instance = client.AppsV1Api()
            # Trigger a rolling restart by updating the restartedAt annotation,
            # the same mechanism used by `kubectl rollout restart`.
            patch = {
                "spec": {
                    "template": {
                        "metadata": {
                            "annotations": {
                                "kubectl.kubernetes.io/restartedAt": datetime.datetime.utcnow().isoformat()
                            }
                        }
                    }
                }
            }
            try:
                # Apply the updated deployment
                api_instance.patch_namespaced_deployment(deployment_name, namespace, body=patch)
            except client.exceptions.ApiException as e:
                print(f"Exception when calling AppsV1Api->patch_namespaced_deployment: {e}")
def main():
    # Define the namespace and deployment name
    config.load_kube_config()
    parser = argparse.ArgumentParser()
    parser.add_argument('namespace', type=str, help='Namespace of the deployment')
    parser.add_argument('deployment', type=str, help='Name of the deployment')
    parser.add_argument('env', type=str, help='Cluster Environment')
    args = parser.parse_args()

    # Restart the deployment
    comment = restart_deployment(args.namespace, args.deployment, args.env)

if __name__ == '__main__':
    main()

4) Code for Patching the Deployment (deployment_patch.py)

from kubernetes import client, config
import argparse
import datetime
import sys
from common import check_namespace_exists, check_deployment_exists, close_issue, approval_verify,extract_info
def patch_deployment(namespace, deployment_name, env, img, cn):
    if check_namespace_exists(namespace):
        if check_deployment_exists(deployment_name, namespace):
            print("Validation successful")
            api_instance = client.AppsV1Api()
            # Patch only the target container's image; other containers are untouched.
            patch = {
                "spec": {
                    "template": {
                        "spec": {
                            "containers": [
                                {
                                    "name": cn,
                                    "image": img
                                }
                            ]
                        }
                    }
                }
            }
            try:
                # Apply the updated deployment
                api_instance.patch_namespaced_deployment(deployment_name, namespace, body=patch)
            except client.exceptions.ApiException as e:
                print(f"Exception when calling AppsV1Api->patch_namespaced_deployment: {e}")
def main():
    # Define the namespace and deployment name
    config.load_kube_config()
    parser = argparse.ArgumentParser()
    parser.add_argument('namespace',help='Namespace of the deployment')
    parser.add_argument('deployment', help='Name of the deployment')
    parser.add_argument('env', help='Cluster Environment')
    parser.add_argument('img', help='Image to be patched')
    parser.add_argument('cn', help='Container Name to be patched')
    args = parser.parse_args()

    # Patch the deployment
    patch_deployment(args.namespace, args.deployment, args.env, args.img, args.cn)

if __name__ == '__main__':
    main()

5) Create a Container Image to Run the Above Code (Dockerfile)

# Use the official Python base image
FROM python:3.9-slim

# Set environment variables
ENV PYTHONDONTWRITEBYTECODE=1 \
    PYTHONUNBUFFERED=1 \
    KUBECONFIG=/src/.kube/config

# Create necessary directories
RUN mkdir -p /src/.kube

# Set the working directory
WORKDIR /src

# Copy Kubernetes config and requirements
COPY config /src/.kube/
COPY requirements.txt /src/

# Debug step: List files to ensure `requirements.txt` is copied
RUN ls -l /src/

# Install Python dependencies
RUN pip install --upgrade pip && \
    pip install --no-cache-dir -r /src/requirements.txt || \
    (echo "Error occurred during pip install" && exit 1)

# Install Kubernetes client (if not already in requirements.txt)
RUN pip install --no-cache-dir kubernetes

The requirements.txt file:

kubernetes
python-gitlab
requests

Build the image and push it to your registry.
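
For example, assuming the image name referenced in the CI file below (replace the registry and tag with your own):

docker build -t omvedi25/devops-tool:v1.1 .
docker push omvedi25/devops-tool:v1.1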

Create the .gitlab-ci.yml file

######################## Default image #######################

default:
    image: omvedi25/devops-tool:v1.1
############################# Stages ######################################

stages:
 - devops-automation

 ########################### Templates ######################################
devops-automation:
    stage: devops-automation
    script:
      - cd src 
      - python issuelist.py


Now Let's See It Working

Create a new issue to restart the pod


Go to Pipelines and run the pipeline.

Go to Issue and Verify

The automation now waits for approval from a valid approver. Log in as a valid approver and add "Approved" as a comment on the issue.

Let's see the status of the pods.

Run the pipeline again.

Verify the pod status.

Let's Patch the Deployment

Create a new issue.

Verify the existing image assigned to the deployment.

Trigger the pipeline.

Verify the pipeline.

Approve the request with an "Approved" comment on the issue as a valid approver.

Run the pipeline again and verify the issue.

Verify the deployment.

Note: A pipeline schedule can be created to run this pipeline every 5 minutes so that new issues are picked up automatically.
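
Pipeline schedules themselves are created under CI/CD > Schedules in the GitLab UI. If the automation job should run only on those scheduled pipelines, a rules clause along these lines can be added to the job from the .gitlab-ci.yml above (a sketch, not the only way to do it):

devops-automation:
    stage: devops-automation
    rules:
      - if: '$CI_PIPELINE_SOURCE == "schedule"'
    script:
      - cd src
      - python issuelist.py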

Conclusion

GitLab Issue Templates are a powerful tool for standardizing and automating issue creation across your team. By incorporating these templates into your workflow and integrating them with automation tools like GitLab CI/CD or monitoring systems, you can improve consistency, save time, and ensure critical issues are addressed promptly.

Connecting Azure DevOps to Azure with Federated Accounts using Managed Identity


Introduction

In today's cloud-centric world, securely managing identities and access control is crucial. Azure DevOps is a powerful tool that helps automate deployment pipelines, but managing credentials securely can be challenging. One way to improve security and simplify management is by using Managed Identity for Azure resources. This blog will guide you through setting up a federated account to connect Azure DevOps to Azure using Service Connections via Managed Identity.

What is Managed Identity?

Managed Identity is a feature of Azure Active Directory (Azure AD) that provides Azure services with an automatically managed identity. This identity can be used to authenticate to any service that supports Azure AD authentication, eliminating the need for managing credentials.

Why Use Managed Identity with Azure DevOps?

Security: Eliminates the need to store secrets in Azure DevOps, reducing the risk of credential leakage.

Simplification: Managed identities are automatically managed, so there's no need to rotate or manage keys.

Scalability: Easily manage access across multiple environments and projects without hardcoding credentials.

Step 1: Create a Managed Identity in Azure

Assign a Managed Identity to your Azure resource:

Navigate to the Azure portal. https://portal.azure.com

Go to the Managed Identities service, where you will create the identity.

Click on Create New.

Select the resource group, assign a name, then review and create.

Click on the Managed Identity, then click on Access Control (IAM), click Add, and assign a role.

Review and create.

Review the access; here the managed identity has been assigned the Owner role.
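
For reference, roughly the same managed identity and role assignment can be done from the Azure CLI; this is a minimal sketch where the identity name (devops-identity), resource group (my-rg), role, and subscription scope are placeholders to adjust for your environment:

az identity create --name devops-identity --resource-group my-rg
PRINCIPAL_ID=$(az identity show --name devops-identity --resource-group my-rg --query principalId -o tsv)
az role assignment create --assignee "$PRINCIPAL_ID" --role Owner --scope "/subscriptions/<subscription-id>"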

Log in to Azure DevOps and go to your project to create a service connection.

Click on Project Settings and select Service Connections.

Click on New Service Connection.

Give the connection a name and description.

Click Next, collect the issuer and subject identifier details, and keep them in a notepad.

Provide the subscription details for the created managed identity.

Log back in to the Azure portal and select App Registrations.

Create a new app registration and register it.

Select Certificates & Secrets and select Federated Credentials.


Select Add Credentials


Provide the issuer and subject identifier collected from the Azure DevOps service connection.

Select the created Managed Identity, select Federated Credentials, and click Add Credentials.

Select the federated credential scenario as Other, provide the issuer and subject identifier, and in the credential details use the same federated credential name created in the app registration.

Go back to the Azure DevOps portal, open the service connection, and click Verify and Save.

If the configuration is correct, the connection will be saved.

To verify the connection, let's create an Azure pipeline in the project:

trigger:
- main

pool:
  acloudguru

steps:
- task: AzureCLI@2
  displayName: 'devops-connection'
  inputs:
    azureSubscription: 'devops-connection'  # Use the correct service connection
    scriptType: 'bash'
    scriptLocation: 'inlineScript'
    inlineScript: |
      echo "Assigning role......."
      az group create --name myResourceGroup --location eastus

Run the pipeline to deploy resources securely using the managed identity.

Conclusion

Using Managed Identity to connect Azure DevOps to Azure resources provides a secure, scalable, and easy-to-manage way of handling authentication. This approach eliminates the need to store and manage secrets, which enhances security and simplifies operations. By leveraging federated identity credentials, you can further streamline and secure your CI/CD pipelines, making your deployments more robust and secure.

Configuring NGINX Ingress Controller in Bare Metal Kubernetes using MetalLB


In this guide, we'll walk through the process of setting up the NGINX Ingress Controller in a bare metal Kubernetes cluster using MetalLB. MetalLB provides a load-balancer implementation for environments that do not natively support one, such as bare metal clusters.

Prerequisites

Before you begin, ensure you have the following:

  • A Kubernetes cluster up and running on bare metal nodes.
  • kubectl configured to interact with your cluster.
  • Helm (package manager for Kubernetes) installed.

Step 1: Install MetalLB

MetalLB is a load balancer implementation for bare metal Kubernetes clusters. We'll start by installing MetalLB.

root@master:~# kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.12.1/manifests/namespace.yaml
root@master:~# kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.12.1/manifests/metallb.yaml

root@master:~# kubectl get pods -n metallb-system
NAME                                  READY   STATUS    RESTARTS       AGE
metallb-controller-665d96757f-kqlx4   1/1     Running   11 (23h ago)   26h
metallb-speaker-5z4cc                 4/4     Running   18 (23h ago)   4d2h
metallb-speaker-qjrlt                 4/4     Running   22 (23h ago)   4d2h
metallb-speaker-rmnln                 4/4     Running   17 (24h ago)   4d2h

Create an IPAddressPool

root@master:~# cat metallb-system.yaml
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: cheap
  namespace: metallb-system
spec:
  addresses:
  - 172.16.16.150-172.16.16.200

Apply the above in the metallb-system namespace:

root@master:~# kubectl apply -f metallb-system.yaml

root@master:~# kubectl get IPAddressPool -n metallb-system
NAME    AUTO ASSIGN   AVOID BUGGY IPS   ADDRESSES
cheap   true          false             ["172.16.16.150-172.16.16.200"]
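
With CRD-based MetalLB configuration (v0.13 and newer), the address pool also has to be announced on the local network. A minimal L2Advertisement sketch, assuming Layer 2 mode and the pool name created above; adjust or skip this if your MetalLB version announces pools differently:

apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: cheap-l2
  namespace: metallb-system
spec:
  ipAddressPools:
  - cheap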

Step 2: Install NGINX Ingress Controller

1. Add the NGINX Helm repository:

root@master:/home/chicco# helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
"ingress-nginx" already exists with the same configuration, skipping

root@master:/home/chicco# helm search repo nginx
NAME                            CHART VERSION   APP VERSION     DESCRIPTION
ingress-nginx/ingress-nginx     4.10.1          1.10.1          Ingress controller for Kubernetes using NGINX a...

2. Install the NGINX Ingress Controller:

root@master:/home/chicco# kubectl create ns ingress-nginx

root@master:/home/chicco# helm install ingress-nginx ingress-nginx/ingress-nginx -n ingress-nginx 
NAME: ingress-nginx
LAST DEPLOYED: Thu Jun 27 12:46:33 2024
NAMESPACE: ingress-nginx
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
The ingress-nginx controller has been installed.
It may take a few minutes for the load balancer IP to be available.
You can watch the status by running 'kubectl get service --namespace ingress-nginx ingress-nginx-controller --output wide --watch'

An example Ingress that makes use of the controller:
  apiVersion: networking.k8s.io/v1
  kind: Ingress
  metadata:
    name: example
    namespace: foo
  spec:
    ingressClassName: nginx
    rules:
      - host: www.example.com
        http:
          paths:
            - pathType: Prefix
              backend:
                service:
                  name: exampleService
                  port:
                    number: 80
              path: /
    # This section is only required if TLS is to be enabled for the Ingress
    tls:
      - hosts:
        - www.example.com
        secretName: example-tls

If TLS is enabled for the Ingress, a Secret containing the certificate and key must also be provided:

  apiVersion: v1
  kind: Secret
  metadata:
    name: example-tls
    namespace: foo
  data:
    tls.crt: <base64 encoded cert>
    tls.key: <base64 encoded key>
  type: kubernetes.io/tls

3. Check the NGINX Ingress Controller in the ingress-nginx namespace:

root@master:/home/chicco# kubectl get all -n ingress-nginx
NAME                                           READY   STATUS    RESTARTS   AGE
pod/ingress-nginx-controller-c8f499cfc-q25jq   1/1     Running   0          39s

NAME                                         TYPE           CLUSTER-IP      EXTERNAL-IP     PORT(S)                      AGE
service/ingress-nginx-controller             LoadBalancer   10.103.214.79   172.16.16.150   80:31311/TCP,443:32197/TCP   40s
service/ingress-nginx-controller-admission   ClusterIP      10.97.68.133    <none>          443/TCP                      40s

NAME                                       READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/ingress-nginx-controller   1/1     1            1           39s

NAME                                                 DESIRED   CURRENT   READY   AGE
replicaset.apps/ingress-nginx-controller-c8f499cfc   1         1         1       39s

We can see a LoadBalancer IP has been allocated to the ingress controller via MetalLB.

Next, create an image pull secret for the test applications (referenced as imagePullSecrets in the deployments below) and define the three deployments in nginx-deploy-blue.yaml, nginx-deploy-green.yaml, and nginx-deploy-red.yaml:

root@master:/opt/kubernetes# kubectl create secret generic my-docker-secret  --from-file=.dockerconfigjson=$HOME/.docker/config.json   --type=kubernetes.io/dockerconfigjson     -n ingress-test
secret/my-docker-secret created

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    run: blue-app
  name: blue-app
spec:
  replicas: 1
  selector:
    matchLabels:
      run: blue-app
  template:
    metadata:
      labels:
        run: blue-app
    spec:
      containers:
      - image: omvedi25/main-blue:v1
        name: blue-app
      imagePullSecrets:
      - name: my-docker-secret
-----------------------------------
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    run: green-app
  name: green-app
spec:
  replicas: 1
  selector:
    matchLabels:
      run: green-app
  template:
    metadata:
      labels:
        run: green-app
    spec:
      containers:
      - image: omvedi25/main-green:v1
        name: green-app
      imagePullSecrets:
      - name: my-docker-secret
-------------------------------------
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    run: red-app
  name: red-app
spec:
  replicas: 1
  selector:
    matchLabels:
      run: red-app
  template:
    metadata:
      labels:
        run: red-app
    spec:
      containers:
      - image: omvedi25/main-red:v1
        name: red-app
      imagePullSecrets:
      - name: my-docker-secret

root@master:/opt/kubernetes# kubectl create ns ingress-test
root@master:/opt/kubernetes# kubectl apply -f nginx-deploy-red.yaml -n ingress-test
deployment.apps/red-app created

root@master:/opt/kubernetes# kubectl apply -f nginx-deploy-blue.yaml -n ingress-test
deployment.apps/blue-app created

root@master:/opt/kubernetes# kubectl apply -f nginx-deploy-green.yaml -n ingress-test
deployment.apps/green-app created

Expose the Services

root@master:/opt/kubernetes/# kubectl expose deploy green-app --port 80 -n ingress-test

root@master:/opt/kubernetes/# kubectl expose deploy red-app --port 80 -n ingress-test

root@master:/opt/kubernetes/# kubectl expose deploy blue-app --port 80 -n ingress-test

Verify The Service

root@master:/opt/kubernetes# kubectl get svc -n ingress-test
NAME                 TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)   AGE
blue-app         ClusterIP   10.109.208.98    <none>        80/TCP    25h
green-app        ClusterIP   10.110.140.247   <none>        80/TCP    25h
red-app          ClusterIP   10.98.179.2      <none>        80/TCP    25h

Deploy The Ingress Resource

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
  name: ingress-resource
spec:
  ingressClassName: nginx
  rules:
  - host: nginx.arobyte.tech
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: red-app
            port:
              number: 80
      - path: /blue
        pathType: Prefix
        backend:
          service:
            name: blue-app
            port:
              number: 80
      - path: /green
        pathType: Prefix
        backend:
          service:
            name: green-app
            port:
              number: 80

root@master:/opt/kubernetes# kubectl apply -f ingress-resource.yaml -n ingress-test

root@master:~# kubectl get ingress -n ingress-test
NAME                 CLASS   HOSTS                ADDRESS         PORTS   AGE
ingress-resource   nginx   nginx.arobyte.tech   172.16.16.150   80      25h
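
If nginx.arobyte.tech does not already resolve in your environment, you can point it at the ingress LoadBalancer IP with a hosts entry on the client (a lab-only shortcut; use real DNS otherwise):

echo "172.16.16.150 nginx.arobyte.tech" >> /etc/hosts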

Now verify that it's working as expected:

root@master:~# curl http://nginx.arobyte.tech
<h1><font color=red>Welcome to Ingress Controller</font></h1>

root@master:~# curl http://nginx.arobyte.tech/blue
<h1><font color=blue>Welcome to Ingress Controller</font></h1>

root@master:~# curl http://nginx.arobyte.tech/green
<h1><font color=green>Welcome to Ingress Controller</font></h1>

Conclusion

We have successfully configured the NGINX Ingress Controller in your bare metal Kubernetes cluster using MetalLB. This setup allows you to manage and expose your Kubernetes services with ease, providing a robust load balancing solution in a bare metal environment. Feel free to explore more advanced configurations and features of MetalLB and NGINX Ingress Controller to suit your specific needs. Happy Kubernetes-ing!

Enhancing MySQL Security: Implementing HashiCorp Vault for Secure Authentication


Introduction

Securing database credentials is a critical aspect of protecting your data infrastructure. HashiCorp Vault offers a sophisticated solution for managing secrets and sensitive information, providing dynamic access controls to databases. This guide will demonstrate how to integrate HashiCorp Vault with a MySQL database to enhance security through dynamic credential management.

Prerequisites

Before proceeding, ensure you have the following:

i) A MySQL database instance up and running.
ii) A HashiCorp Vault instance installed and running.
iii) Basic familiarity with MySQL and Vault operations.
iv) Appropriate permissions to configure both Vault and MySQL.

Install and Configure HashiCorp Vault

root@06432e7f921c:~# apt update;apt install snap;snap install vault
Hit:1 http://ap-south-1.ec2.archive.ubuntu.com/ubuntu focal InRelease
Hit:2 http://ap-south-1.ec2.archive.ubuntu.com/ubuntu focal-updates InRelease
Hit:3 http://ap-south-1.ec2.archive.ubuntu.com/ubuntu focal-backports InRelease
Hit:4 http://security.ubuntu.com/ubuntu focal-security InRelease

Start Vault Server and Login to Vault

root@06432e7f921c:~# vault server -dev -dev-listen-address=0.0.0.0:8200 &

root@06432e7f921c:~# export VAULT_ADDR='http://0.0.0.0:8200'
root@06432e7f921c:~# vault login
Token (will be hidden): 
Success! You are now authenticated. The token information displayed below
is already stored in the token helper. You do NOT need to run "vault login"
again. Future Vault requests will automatically use this token.

Key                  Value
---                  -----
token                hvs.sxxxxxxxxxxxxxxxxxxxxxxxx
token_accessor       2xxxxxxxxxxxxxxxxxxxxxxx
token_duration       ∞
token_renewable      false
token_policies       ["root"]
identity_policies    []
policies             ["root"]

Install MariaDB Database

root@06432e7f922c:/home/cloud_user# apt install mariadb-server mariadb-client
Setting up libconfig-inifiles-perl (3.000002-1) ...
Setting up libcgi-pm-perl (4.46-1) ...
Setting up libhtml-template-perl (2.97-1) ...
Setting up libsnappy1v5:amd64 (1.1.8-1build1) ...
Setting up socat (1.7.3.3-2) ...
Setting up mariadb-server-core-10.3 (1:10.3.39-0ubuntu0.20.04.2) ...
Setting up galera-3 (25.3.29-1) ...
Setting up mariadb-client-core-10.3 (1:10.3.39-0ubuntu0.20.04.2) ...
Setting up libfcgi-perl (0.79-1) ...
Setting up libterm-readkey-perl (2.38-1build1) ...
Setting up libdbi-perl:amd64 (1.643-1ubuntu0.1) ...
Setting up libcgi-fast-perl (1:2.15-1) ...
Setting up mariadb-client-10.3 (1:10.3.39-0ubuntu0.20.04.2) ...
Setting up libdbd-mysql-perl:amd64 (4.050-3ubuntu0.2) ...
Setting up mariadb-server-10.3 (1:10.3.39-0ubuntu0.20.04.2) ...
Created symlink /etc/systemd/system/mysql.service → /lib/systemd/system/mariadb.service.
Created symlink /etc/systemd/system/mysqld.service → /lib/systemd/system/mariadb.service.
Created symlink /etc/systemd/system/multi-user.target.wants/mariadb.service → /lib/systemd/system/mariadb.service.
Setting up mariadb-client (1:10.3.39-0ubuntu0.20.04.2) ...
Setting up mariadb-server (1:10.3.39-0ubuntu0.20.04.2) ...
Processing triggers for systemd (245.4-4ubuntu3.23) ...
Processing triggers for man-db (2.9.1-1) ...
Processing triggers for libc-bin (2.31-0ubuntu9.16) ...

root@06432e7f922c:/home/cloud_user# systemctl start mariadb
root@06432e7f922c:/home/cloud_user# systemctl status mariadb
● mariadb.service - MariaDB 10.3.39 database server
     Loaded: loaded (/lib/systemd/system/mariadb.service; enabled; vendor preset: enabled)
     Active: active (running) since Mon 2024-06-03 17:35:59 UTC; 4min 24s ago
       Docs: man:mysqld(8)
             https://mariadb.com/kb/en/library/systemd/
   Main PID: 34046 (mysqld)
     Status: "Taking your SQL requests now..."
      Tasks: 31 (limit: 2299)
     Memory: 63.1M
     CGroup: /system.slice/mariadb.service
             └─34046 /usr/sbin/mysqld

Jun 03 17:35:59 06432e7f922c.mylabserver.com systemd[1]: Starting MariaDB 10.3.39 database server...
Jun 03 17:35:59 06432e7f922c.mylabserver.com systemd[1]: Started MariaDB 10.3.39 database server.
Jun 03 17:35:59 06432e7f922c.mylabserver.com /etc/mysql/debian-start[34081]: Upgrading MySQL tables if necessary.
Jun 03 17:35:59 06432e7f922c.mylabserver.com /etc/mysql/debian-start[34084]: Looking for 'mysql' as: /usr/bin/mysql
Jun 03 17:35:59 06432e7f922c.mylabserver.com /etc/mysql/debian-start[34084]: Looking for 'mysqlcheck' as: /usr/bin/mysqlcheck
Jun 03 17:35:59 06432e7f922c.mylabserver.com /etc/mysql/debian-start[34084]: This installation of MariaDB is already upgraded to 10.3.39-MariaDB.
Jun 03 17:35:59 06432e7f922c.mylabserver.com /etc/mysql/debian-start[34084]: There is no need to run mysql_upgrade again for 10.3.39-MariaDB.
Jun 03 17:35:59 06432e7f922c.mylabserver.com /etc/mysql/debian-start[34084]: You can use --force if you still want to run mysql_upgrade
Jun 03 17:35:59 06432e7f922c.mylabserver.com /etc/mysql/debian-start[34092]: Checking for insecure root accounts.
Jun 03 17:35:59 06432e7f922c.mylabserver.com /etc/mysql/debian-start[34097]: Triggering myisam-recover for all MyISAM tables and aria-recover for all Aria tables

Secure the MariaDB Installation

root@06432e7f922c:/home/cloud_user# mysql_secure_installation 

NOTE: RUNNING ALL PARTS OF THIS SCRIPT IS RECOMMENDED FOR ALL MariaDB
      SERVERS IN PRODUCTION USE!  PLEASE READ EACH STEP CAREFULLY!

In order to log into MariaDB to secure it, we'll need the current
password for the root user.  If you've just installed MariaDB, and
you haven't set the root password yet, the password will be blank,
so you should just press enter here.

Enter current password for root (enter for none): 
By default, a MariaDB installation has an anonymous user, allowing anyone
to log into MariaDB without having to have a user account created for
them.  This is intended only for testing, and to make the installation
go a bit smoother.  You should remove them before moving into a
production environment.

Remove anonymous users? [Y/n] Y
 ... Success!

Normally, root should only be allowed to connect from 'localhost'.  This
ensures that someone cannot guess at the root password from the network.

Disallow root login remotely? [Y/n] Y
 ... Success!

By default, MariaDB comes with a database named 'test' that anyone can
access.  This is also intended only for testing, and should be removed
before moving into a production environment.

Remove test database and access to it? [Y/n] Y
 - Dropping test database...
 ... Success!
 - Removing privileges on test database...
 ... Success!

Reloading the privilege tables will ensure that all changes made so far
will take effect immediately.

Reload privilege tables now? [Y/n] Y
 ... Success!

Cleaning up...

All done!  If you've completed all of the above steps, your MariaDB
installation should now be secure.

Thanks for using MariaDB!

Testing the connection

root@06432e7f922c:/home/cloud_user# mysql -u root -p
Enter password: 
Welcome to the MariaDB monitor.  Commands end with ; or \g.
Your MariaDB connection id is 44
Server version: 10.3.39-MariaDB-0ubuntu0.20.04.2 Ubuntu 20.04

Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

MariaDB [(none)]> 

To allow remote connections (so that Vault and clients on other hosts can reach the database), bind-address is set to 0.0.0.0 in /etc/mysql/mariadb.conf.d/50-server.cnf, after which MariaDB is restarted:

root@06432e7f922c:/home/cloud_user# cat /etc/mysql/mariadb.conf.d/50-server.cnf | grep bind
bind-address            = 0.0.0.0

root@06432e7f922c:/home/cloud_user# systemctl restart mariadb
root@06432e7f922c:/home/cloud_user# systemctl status mariadb
● mariadb.service - MariaDB 10.3.39 database server
     Loaded: loaded (/lib/systemd/system/mariadb.service; enabled; vendor preset: enabled)
     Active: active (running) since Mon 2024-06-03 17:56:26 UTC; 6s ago
       Docs: man:mysqld(8)
             https://mariadb.com/kb/en/library/systemd/
    Process: 34815 ExecStartPre=/usr/bin/install -m 755 -o mysql -g root -d /var/run/mysqld (code=exited, status=0/SUCCESS)
    Process: 34816 ExecStartPre=/bin/sh -c systemctl unset-environment _WSREP_START_POSITION (code=exited, status=0/SUCCESS)
    Process: 34818 ExecStartPre=/bin/sh -c [ ! -e /usr/bin/galera_recovery ] && VAR= ||   VAR=`cd /usr/bin/..; /usr/bin/galera_recovery`; [ $? -eq 0 ]   && systemctl set-e>
    Process: 34897 ExecStartPost=/bin/sh -c systemctl unset-environment _WSREP_START_POSITION (code=exited, status=0/SUCCESS)
    Process: 34899 ExecStartPost=/etc/mysql/debian-start (code=exited, status=0/SUCCESS)
   Main PID: 34866 (mysqld)
     Status: "Taking your SQL requests now..."
      Tasks: 31 (limit: 2299)
     Memory: 62.9M
     CGroup: /system.slice/mariadb.service
             └─34866 /usr/sbin/mysqld

Jun 03 17:56:26 06432e7f922c.mylabserver.com systemd[1]: Starting MariaDB 10.3.39 database server...
Jun 03 17:56:26 06432e7f922c.mylabserver.com systemd[1]: Started MariaDB 10.3.39 database server.
Jun 03 17:56:26 06432e7f922c.mylabserver.com /etc/mysql/debian-start[34901]: Upgrading MySQL tables if necessary.
Jun 03 17:56:26 06432e7f922c.mylabserver.com /etc/mysql/debian-start[34904]: Looking for 'mysql' as: /usr/bin/mysql
Jun 03 17:56:26 06432e7f922c.mylabserver.com /etc/mysql/debian-start[34904]: Looking for 'mysqlcheck' as: /usr/bin/mysqlcheck
Jun 03 17:56:26 06432e7f922c.mylabserver.com /etc/mysql/debian-start[34904]: This installation of MariaDB is already upgraded to 10.3.39-MariaDB.
Jun 03 17:56:26 06432e7f922c.mylabserver.com /etc/mysql/debian-start[34904]: There is no need to run mysql_upgrade again for 10.3.39-MariaDB.
Jun 03 17:56:26 06432e7f922c.mylabserver.com /etc/mysql/debian-start[34904]: You can use --force if you still want to run mysql_upgrade

root@06432e7f922c:/home/cloud_user# netstat -pant | grep 3306
tcp        0      0 0.0.0.0:3306            0.0.0.0:*               LISTEN      34866/mysqld

Create a User and a Database, and Grant Permissions

root@06432e7f922c:/home/cloud_user# mysql -u root -p
Enter password: 
Welcome to the MariaDB monitor.  Commands end with ; or \g.
Your MariaDB connection id is 36
Server version: 10.3.39-MariaDB-0ubuntu0.20.04.2 Ubuntu 20.04

Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

MariaDB [(none)]> create database book_info;
Query OK, 1 row affected (0.000 sec)

MariaDB [(none)]> create user 'vault'@'%' identified by '<your-password>';
Query OK, 0 rows affected (0.000 sec)

MariaDB [(none)]> grant all privileges on book_info.* to 'vault'@'%' with grant option; 
Query OK, 0 rows affected (0.000 sec)

MariaDB [(none)]> grant create user on *.* to 'vault'@'%';
Query OK, 0 rows affected (0.000 sec)

MariaDB [(none)]> FLUSH PRIVILEGES;
Query OK, 0 rows affected (0.000 sec)

Create a Table in the book_info Database and Insert Some Data

MariaDB [(none)]> use book_info
Database changed
MariaDB [book_info]> CREATE TABLE book_info (
    ->     id INT AUTO_INCREMENT PRIMARY KEY,
    ->     bookname VARCHAR(255) NOT NULL,
    ->     authorname VARCHAR(255) NOT NULL
    -> );
Query OK, 0 rows affected (0.078 sec)

MariaDB [book_info]> INSERT INTO book_info (bookname, authorname) VALUES
    -> ('To Kill a Mockingbird', 'Harper Lee'),
    -> ('1984', 'George Orwell'),
    -> ('Pride and Prejudice', 'Jane Austen'),
    -> ('The Great Gatsby', 'F. Scott Fitzgerald'),
    -> ('Moby Dick', 'Herman Melville');
Query OK, 5 rows affected (0.016 sec)
Records: 5  Duplicates: 0  Warnings: 0

MariaDB [book_info]> SELECT * FROM book_info;
+----+-----------------------+---------------------+
| id | bookname              | authorname          |
+----+-----------------------+---------------------+
|  1 | To Kill a Mockingbird | Harper Lee          |
|  2 | 1984                  | George Orwell       |
|  3 | Pride and Prejudice   | Jane Austen         |
|  4 | The Great Gatsby      | F. Scott Fitzgerald |
|  5 | Moby Dick             | Herman Melville     |
+----+-----------------------+---------------------+
5 rows in set (0.000 sec)

Enable the Database Secrets Engine in Vault

root@06432e7f921c:~# vault secrets enable database
2024-06-03T18:17:42.947Z [INFO]  secrets.database.database_b9224b4f: initializing database rotation queue
2024-06-03T18:17:42.949Z [INFO]  core: successful mount: namespace="" path=database/ type=database version="v1.16.2+builtin.vault"
Success! Enabled the database secrets engine at: database/
root@06432e7f921c:~# 2024-06-03T18:17:42.965Z [INFO]  secrets.database.database_b9224b4f: populating role rotation queue
2024-06-03T18:17:42.965Z [INFO]  secrets.database.database_b9224b4f: starting periodic ticker

Create Role in Vault

root@06432e7f921c:~# vault write database/roles/my-role   db_name=book_info   creation_statements="CREATE USER '{{name}}'@'%' IDENTIFIED BY '{{password}}'; GRANT SELECT ON book_info.book_info TO '{{name}}'@'%';"   default_ttl="1h"   max_ttl="24h"
Success! Data written to: database/roles/my-role

Store the Database Credentials in Vault and Attach the Role

root@06432e7f921c:~# vault write database/config/book_info   plugin_name=mysql-database-plugin   connection_url="{{username}}:{{password}}@tcp(172.31.115.254:3306)/"   allowed_roles="my-role"   username="vault" password='<password>'
Success! Data written to: database/config/book_info
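
Optionally, once Vault manages this connection, the password of the 'vault' MySQL user can be rotated so that only Vault knows it, using the database secrets engine's built-in rotate-root endpoint (shown here as a hedged example):

vault write -force database/rotate-root/book_info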

Generate and Read Dynamic Credentials

root@06432e7f921c:~# vault read database/creds/my-role
Key                Value
---                -----
lease_id           database/creds/my-role/qmJjwAzsLOuCitMCHT7G1cJW
lease_duration     1h
lease_renewable    true
password           jJiYrzYY8W5Rf-oRq7Bx
username           v-root-my-role-RVu8N2m0O61mxty3O
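
These credentials are valid for the role's default_ttl (1 hour here). They can also be revoked early by lease ID, for example using the lease shown above:

vault lease revoke database/creds/my-role/qmJjwAzsLOuCitMCHT7G1cJW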

Create a Policy File and Apply It

root@06432e7f921c:~# cat databasepolicy.hcl 
path "database/creds/book_info" {
  capabilities = ["read"]
}

root@06432e7f921c:~# vault policy write dbpolicy databasepolicy.hcl
Success! Uploaded policy: dbpolicy

root@06432e7f921c:~# vault token create -policy=dbpolicy
Key                  Value
---                  -----
token                hvs.CAESILo8RAJSNiiJAigxRihFXMc9huhYL1zYdkgBl-Pgqo7XGh4KHGh2cy51cHRGY0xhWTBkN2tEMkZ3Y2R5VWw3S0k
token_accessor       HOT77qOXRXUBM9PrLVAcmRCW
token_duration       768h
token_renewable      true
token_policies       ["dbpolicy" "default"]
identity_policies    []
policies             ["dbpolicy" "default"]

Access DB From Client

root@06432e7f923c:/home/cloud_user# mysql -u v-root-my-role-r4yGJElGGzndCcb6N -pmZSQLnhLpy4r-BWcn5-r -h 172.31.115.254
Welcome to the MariaDB monitor.  Commands end with ; or \g.
Your MariaDB connection id is 4
MariaDB [(none)]> 

We are able to log in to the database using Vault-generated credentials.

Strengthening User Authentication: A Guide to Implementing HashiCorp Vault To Authenticate Users On Your Website


In today's digital age, securing user authentication on websites is paramount. With cyber threats looming large, businesses need robust solutions to protect sensitive user data. HashiCorp Vault emerges as a potent tool in this realm, offering a secure and scalable platform for managing secrets and protecting cryptographic assets. In this blog, we'll delve into the significance of user authentication and explore how HashiCorp Vault can fortify your website's security infrastructure.

Understanding the Importance of User Authentication

User authentication serves as the first line of defense against unauthorized access to web resources. Whether it's accessing confidential information, making transactions, or simply logging into an account, users entrust websites with their credentials. Consequently, any compromise in authentication mechanisms can lead to severe repercussions, including data breaches, financial losses, and damage to reputation.

Introducing HashiCorp Vault

HashiCorp Vault stands out as a comprehensive solution for secret management and data protection. It offers a centralized repository for securely storing and accessing sensitive information such as API keys, passwords, and encryption keys. Vault employs robust encryption and access control mechanisms to safeguard secrets from unauthorized access, ensuring compliance with stringent security standards.

Implementing Authentication with Vault

Secure Secret Storage: Configure Vault to securely store user authentication secrets using its Key-Value and transit secrets engines. Employ encryption to safeguard data in transit and at rest.

Dynamic Secrets: Leverage Vault's dynamic secrets to generate short-lived, dynamically created credentials for user authentication. Implement RBAC to enforce granular access control.

Tokenization and Authentication Methods: Utilize Vault's authentication methods, like LDAP, OAuth, or JWT, to authenticate users against external identity providers. Integrate MFA for added security.

Auditing and Monitoring: Enable Vault's auditing and monitoring features to track authentication activities. Log authentication requests and access attempts for proactive threat detection.

High Availability and Disaster Recovery: Ensure Vault's high availability across multiple data centers or cloud regions. Implement automated backups and disaster recovery procedures for uninterrupted access.

In this blog, we'll explore the process of authenticating users from a website to HashiCorp Vault. Securing user authentication is crucial for safeguarding sensitive data, and HashiCorp Vault offers a robust solution for managing secrets securely. We'll delve into the significance of user authentication and how implementing HashiCorp Vault can bolster your website's security infrastructure.

Install Vault On Your Server

root@master:~# wget -O- https://apt.releases.hashicorp.com/gpg | sudo gpg --dearmor -o /usr/share/keyrings/hashicorp-archive-keyring.gpg
echo "deb [signed-by=/usr/share/keyrings/hashicorp-archive-keyring.gpg] https://apt.releases.hashicorp.com $(lsb_release -cs) main" | sudo tee /etc/apt/sources.list.d/hashicorp.list
sudo apt update && sudo apt install vault
--2024-04-19 12:52:47--  https://apt.releases.hashicorp.com/gpg
Resolving apt.releases.hashicorp.com (apt.releases.hashicorp.com)... 18.164.144.19, 18.164.144.67, 18.164.144.105, ...
Connecting to apt.releases.hashicorp.com (apt.releases.hashicorp.com)|18.164.144.19|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 3980 (3.9K) [binary/octet-stream]
Saving to: ‘STDOUT’

-                                                           100%[========================================================================================================================================>]   3.89K

2024-04-19 12:52:52 (1.54 MB/s) - written to stdout [3980/3980]

deb [signed-by=/usr/share/keyrings/hashicorp-archive-keyring.gpg] https://apt.releases.hashicorp.com mantic main
Get:1 https://download.docker.com/linux/ubuntu mantic InRelease [48.8 kB]
Get:2 https://apt.releases.hashicorp.com mantic InRelease [12.9 kB]
Ign:4 https://aquasecurity.github.io/trivy-repo/deb mantic InRelease
Get:5 https://download.docker.com/linux/ubuntu mantic/stable amd64 Packages [11.5 kB]

Start Vault Server

root@master:~# vault server -dev -dev-listen-address=0.0.0.0:8200 &
[1] 7382
root@master:~# ==> Vault server configuration:

             Api Address: http://0.0.0.0:8200
                     Cgo: disabled
         Cluster Address: https://0.0.0.0:8201
              Listener 1: tcp (addr: "0.0.0.0:8200", cluster address: "0.0.0.0:8201", max_request_duration: "1m30s", max_request_size: "33554432", tls: "disabled")
               Log Level: info
                   Mlock: supported: true, enabled: false
                 Storage: inmem
                 Version: Vault v1.2.3

WARNING! dev mode is enabled! In this mode, Vault runs entirely in-memory
and starts unsealed with a single unseal key. The root token is already
authenticated to the CLI, so you can immediately begin using Vault.

You may need to set the following environment variable:

    $ export VAULT_ADDR='http://0.0.0.0:8200'

The unseal key and root token are displayed below in case you want to
seal/unseal the Vault or re-authenticate.

Unseal Key: xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
Root Token: xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx

Development mode should NOT be used in production installations!

Login to Vault

root@master:~# export VAULT_ADDR='http://0.0.0.0:8200'
root@master:~# vault login
Token (will be hidden):
WARNING! The VAULT_TOKEN environment variable is set! This takes precedence
over the value set by this command. To use the value set by this command,
unset the VAULT_TOKEN environment variable or set it to the token displayed
below.

Success! You are now authenticated. The token information displayed below
is already stored in the token helper. You do NOT need to run "vault login"
again. Future Vault requests will automatically use this token.

Key                  Value
---                  -----
token                xxxxxxxxxxxxxxxxxxxxxxxxx
token_accessor       jXl034q6KU0r4fuWQR32MEcu
token_duration       ∞
token_renewable      false
token_policies       ["root"]
identity_policies    []
policies             ["root"]

Create Secret

root@master:~# vault secrets enable -path=web-auth kv
2024-04-19T14:17:44.069+0530 [INFO]  core: successful mount: namespace= path=web-auth/ type=kv
Success! Enabled the kv secrets engine at: web-auth/

Create a Password Hash for the User

root@master:~# echo "amit@test.com" | sha256sum -

Push the User and Credential Hash to Vault and Verify

root@master:~# curl -H "X-Vault-Token: xxxxxxxxxxxxxxx" --request POST -d '{"amit@test.com":"f9ac24d8c6d410619017ccca98ca4544b05a74ef48c29335a958ad25dd642ad6"}' http://192.168.0.114:8200/v1/web-auth/creds

root@master:~# curl -H "X-Vault-Token: xxxxxxxxxxxxxxxxxxx" http://192.168.0.114:8200/v1/web-auth/creds | jq
{
  "request_id": "ff9d26fc-6abd-b585-b908-2535c4525c1c",
  "lease_id": "",
  "renewable": false,
  "lease_duration": 2764800,
  "data": {
    "amit@test.com": "f9ac24d8c6d410619017ccca98ca4544b05a74ef48c29335a958ad25dd642ad6"
  },
  "wrap_info": null,
  "warnings": null,
  "auth": null
}

Create and apply a policy

root@master:~# cat policy.hcl
path "web-auth/creds" {
   capabilities = ["read"]
}

root@master:~# vault policy write web-policy policy.hcl
Success! Uploaded policy: web-policy

Create Token against policy and use token to access credentials

root@master:~# vault token create -policy="web-policy" -format=json | jq -r ".auth.client_token" > read_token
root@master:~# cat read_token
s.QGR5JsrF7toZcOGj4rKClcvT

root@master:~# curl -H "X-Vault-Token: s.QGR5JsrF7toZcOGj4rKClcvT" http://192.168.0.114:8200/v1/web-auth/creds | jq
{
  "request_id": "4761279f-d3e8-3451-64d3-b17e5b5b8be7",
  "lease_id": "",
  "renewable": false,
  "lease_duration": 2764800,
  "data": {
    "amit@test.com": "f9ac24d8c6d410619017ccca98ca4544b05a74ef48c29335a958ad25dd642ad6"
  },
  "wrap_info": null,
  "warnings": null,
  "auth": null
}

Now Let's Create a Sample Web Application Using Flask

root@master:~# pip install flask
root@master:~# mkdir -p website/templates;cd website
root@master:~/website# cat view.py
from flask import Flask, render_template, request, jsonify
import hashlib
import requests
import json

app = Flask(__name__)

def get_hashed_vault_creds():
    url = "http://0.0.0.0:8200/v1/web-auth/creds"
    headers = { 'X-Vault-Token' : 's.QGR5JsrF7toZcOGj4rKClcvT' }

    response = requests.get(url, headers=headers)
    my_json = response.json()
    creds = []
    for key, value in my_json['data'].items():
        creds.append(key)
        creds.append(value)
    return creds

def acg_login_view(email, password):
    print("login view called")
    credsHash = hashlib.sha256(password.encode()+ b'\n').hexdigest()
    vault_hash = get_hashed_vault_creds()[1]
    if vault_hash == credsHash:
        return True
    else:
        return False

@app.route('/', methods=['GET', 'POST'])
def login():
    if request.method == 'POST':
        email = request.form['email']
        password = request.form['password']
        if acg_login_view(email, password):
            return jsonify({'message': 'Login successful'}), 200
        else:
            return jsonify({'message': 'Login failed'}), 401

    return render_template('login.html')

if __name__ == '__main__':
    app.run(debug=True)


Create the HTML Page

root@master:~/website# cat templates/login.html
<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="UTF-8">
    <meta name="viewport" content="width=device-width, initial-scale=1.0">
    <title>Login</title>
    <style>
        body {
            font-family: Arial, sans-serif;
            background-color: #f7f7f7;
            margin: 0;
            padding: 0;
            display: flex;
            justify-content: center;
            align-items: center;
            height: 100vh;
        }

        .login-container {
            background-color: #fff;
            padding: 20px;
            border-radius: 10px;
            box-shadow: 0 0 10px rgba(0, 0, 0, 0.1);
            width: 300px;
        }

        .login-container h2 {
            margin-top: 0;
            text-align: center;
            color: #333;
        }

        .login-form label {
            font-weight: bold;
            color: #555;
        }

        .login-form input[type="email"],
        .login-form input[type="password"] {
            width: 100%;
            padding: 10px;
            margin-bottom: 15px;
            border: 1px solid #ccc;
            border-radius: 5px;
            box-sizing: border-box;
            font-size: 16px;
        }

        .login-form input[type="submit"] {
            width: 100%;
            padding: 10px;
            border: none;
            border-radius: 5px;
            background-color: #007bff;
            color: #fff;
            cursor: pointer;
            font-size: 16px;
            transition: background-color 0.3s ease;
        }

        .login-form input[type="submit"]:hover {
            background-color: #0056b3;
        }
    </style>
</head>
<body>
    <div class="login-container">
        <h2>Login</h2>
        <form class="login-form" method="POST">
            <label for="email">Email:</label><br>
            <input type="email" id="email" name="email" required><br><br>
            <label for="password">Password:</label><br>
            <input type="password" id="password" name="password" required><br><br>
            <input type="submit" value="Login">
        </form>
    </div>
</body>
</html>

Start the Server

root@master:~/website# flask run --host=0.0.0.0 --port=5000
 * Serving Flask app 'view.py'
 * Debug mode: off
WARNING: This is a development server. Do not use it in a production deployment. Use a production WSGI server instead.
 * Running on all addresses (0.0.0.0)
 * Running on http://127.0.0.1:5000
 * Running on http://192.168.0.114:5000
Press CTRL+C to quit
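
The login endpoint can also be exercised from the command line. Assuming the credential stored in Vault earlier (where the password string hashed above is "amit@test.com"), this should return the "Login successful" JSON response:

curl -s -X POST -d 'email=amit@test.com' -d 'password=amit@test.com' http://127.0.0.1:5000/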

Access the Page


Let's try to log in with a wrong password.

Let's try to log in with the correct password.

Conclusion

The steps above show one way to integrate Vault securely into a web service for authentication, ensuring that sensitive credentials are protected and authentication is performed reliably.

Automating Kubernetes Installation with Ansible


In today's world of containerized applications, Kubernetes has emerged as the de facto standard for container orchestration. Deploying and managing Kubernetes clusters manually can be complex and time-consuming. Thankfully, automation tools like Ansible provide a powerful way to automate the installation and configuration of Kubernetes clusters.

In this blog post, we'll explore how to use Ansible to automate the installation of a Kubernetes cluster. We'll walk through the steps involved in setting up Ansible, creating playbooks, and deploying Kubernetes on a set of target servers.

SETUP

Ansible Node: ansible.mylabserver.com

Master Node: master.mylabserver.com

Worker Nodes: worker1.mylabserver.com, worker2.mylabserver.com

Prerequisites

1) Ansible must be installed on the Ansible node (the managed nodes only need Python and SSH access)

2) Set up passwordless SSH authentication from the Ansible node to the master and worker nodes, as sketched below
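
If passwordless SSH is not yet configured, it can be set up roughly as follows. This is a minimal sketch run from the Ansible node; it assumes root SSH access and the hostnames used in the inventory later in this post.

# Generate a key pair on the Ansible node (skip if one already exists)
ssh-keygen -t rsa -b 4096 -N "" -f ~/.ssh/id_rsa

# Copy the public key to every managed node
for host in master.mylabserver.com worker1.mylabserver.com worker2.mylabserver.com; do
  ssh-copy-id root@$host
done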

Let's Create the Playbooks

Create ansible.cfg

root@06432e7f921c:~#mkdir k8s-install
root@06432e7f921c:~#cd k8s-install
root@06432e7f921c:~/k8s-install#cat ansible.cfg
[defaults]
inventory=./inventory
deprecation_warnings=False

Create Inventory File

root@06432e7f921c:~/k8s-install# cat inventory
[master]
master.mylabserver.com

[worker]
worker1.mylabserver.com
worker2.mylabserver.com

Verify that the nodes are accessible from the Ansible node

root@06432e7f921c:~/k8s-install# ansible -i inventory -m ping all
worker1.mylabserver.com | SUCCESS => {
    "ansible_facts": {
        "discovered_interpreter_python": "/usr/bin/python"
    },
    "changed": false,
    "ping": "pong"
}
worker2.mylabserver.com | SUCCESS => {
    "ansible_facts": {
        "discovered_interpreter_python": "/usr/bin/python"
    },
    "changed": false,
    "ping": "pong"
}
master.mylabserver.com | SUCCESS => {
    "ansible_facts": {
        "discovered_interpreter_python": "/usr/bin/python"
    },
    "changed": false,
    "ping": "pong"
}

Create main.yaml file

root@06432e7f921c:~/k8s-install# cat main.yaml
---
- hosts: [master,worker]
  gather_facts: false
  roles:
     - role: k8s-config
       tags: common

- hosts: [master]
  gather_facts: false
  roles:
   - role: master-config
     tags: master

- hosts: [worker]
  gather_facts: false
  roles:
    - role: worker-config
      tags: worker

Create Role

root@06432e7f921c:~/k8s-install# ansible-galaxy init k8s-config
root@06432e7f921c:~/k8s-install# ansible-galaxy init master-config
root@06432e7f921c:~/k8s-install# ansible-galaxy init worker-config

Create Playbook for Common Installation

root@06432e7f921c:~/k8s-install# cat roles/k8s-config/tasks/main.yml
---
- name: Disable swap
  command: swapoff -a
  ignore_errors: yes

- name: Remove swap entry from /etc/fstab
  lineinfile:
    path: /etc/fstab
    state: absent
    regexp: '^.*swap.*$'

- name: Stop firewalld service
  service:
    name: ufw
    state: stopped
    enabled: no

- name: Ensure overlay and br_netfilter are present in containerd.conf
  lineinfile:
    path: /etc/modules-load.d/containerd.conf
    line: "{{ item }}"
    create: yes
  loop:
    - overlay
    - br_netfilter

- name: Load Kernel modules
  modprobe:
     name: "{{ item }}"
  loop:
     - overlay
     - br_netfilter

- name: Ensure kernel settings are present in kubernetes.conf
  lineinfile:
    path: /etc/sysctl.d/kubernetes.conf
    line: "{{ item }}"
    create: yes
  loop:
    - "net.bridge.bridge-nf-call-ip6tables = 1"
    - "net.bridge.bridge-nf-call-iptables = 1"
    - "net.ipv4.ip_forward = 1"

- name: Apply kernel settings
  command: sysctl --system

- name: Set DEBIAN_FRONTEND environment variable
  shell:
    cmd: export DEBIAN_FRONTEND=noninteractive
  # Note: an export inside a shell task only affects that task's own shell;
  # to apply it to the later apt tasks, set it per task via 'environment:'.

- name: Add Kubernetes GPG key
  apt_key:
     url: "https://packages.cloud.google.com/apt/doc/apt-key.gpg"
     state: present

- name: Update apt repositories
  apt:
   update_cache: yes

- name: Install required packages
  apt:
     name: "{{ item }}"
     state: present
  loop:
    - apt-transport-https
    - ca-certificates
    - curl
    - gnupg
    - lsb-release

- name: Ensure /etc/apt/keyrings directory exists
  file:
    path: /etc/apt/keyrings
    state: directory
    mode: '0755'

- name: Check if Docker repository file exists
  stat:
    path: /etc/apt/sources.list.d/docker.list
  register: docker_repo_file

- name: Add Docker repository using echo
  # Assumes the Docker GPG key has already been placed at /etc/apt/keyrings/docker.gpg
  # (otherwise the signed-by reference will cause apt update to fail for this repo).
  shell:
    cmd: 'echo "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null'
  when: docker_repo_file.stat.exists == false

- name: Run apt update
  apt:
    update_cache: yes


- name: Install containerd.io package
  apt:
    name: containerd.io
    state: present


- name: Generate default containerd configuration
  # 'shell' (not 'command') is required here because the task relies on
  # output redirection to write the file.
  shell:
     cmd: containerd config default > /etc/containerd/config.toml

- name: Set SystemdCgroup to true in containerd configuration
  replace:
     path: /etc/containerd/config.toml
     regexp: 'SystemdCgroup\s*=\s*false'
     replace: 'SystemdCgroup = true'

- name: Restart containerd service
  service:
    name: containerd
    state: restarted

- name: Enable containerd service
  service:
     name: containerd
     enabled: yes

- name: Download Kubernetes GPG key
  shell:
    cmd: curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.29/deb/Release.key | gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg --yes

- name: Add Kubernetes repository
  copy:
     content: |
        deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.29/deb/ /
     dest: /etc/apt/sources.list.d/kubernetes.list

- name: Update apt repositories
  apt:
    update_cache: yes

- name: Install Kubernetes components
  apt:
    name: "{{ item }}"
    state: present
  loop:
      - kubeadm
      - kubelet
      - kubectl

Create Playbook to Create Cluster

root@06432e7f921c:~/k8s-install# cat roles/master-config/tasks/main.yml
---
- name: Initialize Kubernetes Cluster
  shell:
     cmd: "kubeadm init --apiserver-advertise-address=172.31.120.93 --pod-network-cidr=192.168.0.0/16 >> /root/kubeinit.log 2>/dev/null"
     creates: /root/.kube/config

- name: Deploy Calico network
  shell:
    cmd: "kubectl --kubeconfig=/etc/kubernetes/admin.conf create -f https://raw.githubusercontent.com/projectcalico/calico/v3.27.0/manifests/tigera-operator.yaml && kubectl --kubeconfig=/etc/kubernetes/admin.conf create -f https://raw.githubusercontent.com/projectcalico/calico/v3.27.0/manifests/custom-resources.yaml"
    creates: /etc/kubernetes/admin.conf

- name: Generate and save cluster join command to /joincluster.sh
  shell:
    cmd: "kubeadm token create --print-join-command > /joincluster.sh"
    creates: /joincluster.sh

Create Playbook for worker node

root@06432e7f921c:~/k8s-install# cat roles/worker-config/tasks/main.yml
---
- name: Copy joincluster script to worker nodes
  copy:
    src: /joincluster.sh
    dest: /joincluster.sh
    mode: 0755

- name: Execute joincluster.sh script
  shell:
     cmd: /joincluster.sh
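
Note that the copy module reads src from the Ansible control node, so the task above assumes /joincluster.sh is available there. If the control node and the master are separate machines, one option (a sketch, not part of the original roles) is to fetch the script from the master first and then push it to the workers:

- hosts: [master]
  gather_facts: false
  tasks:
    - name: Fetch joincluster.sh from the master
      fetch:
        src: /joincluster.sh
        dest: /tmp/joincluster.sh
        flat: yes

- hosts: [worker]
  gather_facts: false
  tasks:
    - name: Copy joincluster.sh to the worker nodes
      copy:
        src: /tmp/joincluster.sh
        dest: /joincluster.sh
        mode: 0755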

Install the common packages on all three nodes

root@06432e7f921c:~/k8s-install# ansible-playbook main.yaml --tags common

PLAY [master,worker] ************************************************************************************************************************************************************************

TASK [k8s-config : Disable swap] ************************************************************************************************************************************************************
changed: [worker1.mylabserver.com]
changed: [master.mylabserver.com]
changed: [worker2.mylabserver.com]

TASK [k8s-config : Remove swap entry from /etc/fstab] ***************************************************************************************************************************************
ok: [worker1.mylabserver.com]
ok: [master.mylabserver.com]
ok: [worker2.mylabserver.com]

TASK [k8s-config : Stop firewalld service] **************************************************************************************************************************************************
ok: [worker1.mylabserver.com]
ok: [master.mylabserver.com]
ok: [worker2.mylabserver.com]

TASK [k8s-config : Ensure overlay and br_netfilter are present in containerd.conf] **********************************************************************************************************
ok: [worker1.mylabserver.com] => (item=overlay)
ok: [master.mylabserver.com] => (item=overlay)
ok: [worker2.mylabserver.com] => (item=overlay)
ok: [worker1.mylabserver.com] => (item=br_netfilter)
ok: [master.mylabserver.com] => (item=br_netfilter)
ok: [worker2.mylabserver.com] => (item=br_netfilter)

TASK [k8s-config : Load Kernel modules] *****************************************************************************************************************************************************
ok: [worker1.mylabserver.com] => (item=overlay)
ok: [master.mylabserver.com] => (item=overlay)
ok: [worker2.mylabserver.com] => (item=overlay)
ok: [worker1.mylabserver.com] => (item=br_netfilter)
ok: [master.mylabserver.com] => (item=br_netfilter)
ok: [worker2.mylabserver.com] => (item=br_netfilter)

TASK [k8s-config : Ensure kernel settings are present in kubernetes.conf] *******************************************************************************************************************
ok: [worker1.mylabserver.com] => (item=net.bridge.bridge-nf-call-ip6tables = 1)
ok: [worker2.mylabserver.com] => (item=net.bridge.bridge-nf-call-ip6tables = 1)
ok: [master.mylabserver.com] => (item=net.bridge.bridge-nf-call-ip6tables = 1)
ok: [worker1.mylabserver.com] => (item=net.bridge.bridge-nf-call-iptables = 1)
ok: [worker2.mylabserver.com] => (item=net.bridge.bridge-nf-call-iptables = 1)
ok: [master.mylabserver.com] => (item=net.bridge.bridge-nf-call-iptables = 1)
ok: [worker1.mylabserver.com] => (item=net.ipv4.ip_forward = 1)
ok: [worker2.mylabserver.com] => (item=net.ipv4.ip_forward = 1)
ok: [master.mylabserver.com] => (item=net.ipv4.ip_forward = 1)

TASK [k8s-config : Apply kernel settings] ***************************************************************************************************************************************************
changed: [worker1.mylabserver.com]
changed: [master.mylabserver.com]
changed: [worker2.mylabserver.com]

TASK [k8s-config : Set DEBIAN_FRONTEND environment variable] ********************************************************************************************************************************
changed: [worker1.mylabserver.com]
changed: [master.mylabserver.com]
changed: [worker2.mylabserver.com]

TASK [k8s-config : Add Kubernetes GPG key] **************************************************************************************************************************************************
ok: [worker1.mylabserver.com]
ok: [master.mylabserver.com]
ok: [worker2.mylabserver.com]

TASK [k8s-config : Update apt repositories] *************************************************************************************************************************************************
changed: [worker1.mylabserver.com]
changed: [master.mylabserver.com]
changed: [worker2.mylabserver.com]

TASK [k8s-config : Install required packages] ***********************************************************************************************************************************************
ok: [worker1.mylabserver.com] => (item=apt-transport-https)
ok: [master.mylabserver.com] => (item=apt-transport-https)
ok: [worker2.mylabserver.com] => (item=apt-transport-https)
ok: [worker1.mylabserver.com] => (item=ca-certificates)
ok: [master.mylabserver.com] => (item=ca-certificates)
ok: [worker2.mylabserver.com] => (item=ca-certificates)
ok: [worker1.mylabserver.com] => (item=curl)
ok: [master.mylabserver.com] => (item=curl)
ok: [worker1.mylabserver.com] => (item=gnupg)
ok: [worker2.mylabserver.com] => (item=curl)
ok: [master.mylabserver.com] => (item=gnupg)
ok: [worker1.mylabserver.com] => (item=lsb-release)
ok: [worker2.mylabserver.com] => (item=gnupg)
ok: [master.mylabserver.com] => (item=lsb-release)
ok: [worker2.mylabserver.com] => (item=lsb-release)

TASK [k8s-config : Ensure /etc/apt/keyrings directory exists] *******************************************************************************************************************************
ok: [worker1.mylabserver.com]
ok: [worker2.mylabserver.com]
ok: [master.mylabserver.com]

TASK [k8s-config : Check if Docker repository file exists] **********************************************************************************************************************************
ok: [worker1.mylabserver.com]
ok: [master.mylabserver.com]
ok: [worker2.mylabserver.com]

TASK [k8s-config : Add Docker repository using echo] ****************************************************************************************************************************************
skipping: [master.mylabserver.com]
skipping: [worker2.mylabserver.com]
skipping: [worker1.mylabserver.com]

TASK [k8s-config : Run apt update] **********************************************************************************************************************************************************
changed: [worker1.mylabserver.com]
changed: [worker2.mylabserver.com]
changed: [master.mylabserver.com]

TASK [k8s-config : Install containerd.io package] *******************************************************************************************************************************************
changed: [worker1.mylabserver.com]
changed: [worker2.mylabserver.com]
changed: [master.mylabserver.com]

TASK [k8s-config : Generate default containerd configuration] *******************************************************************************************************************************
changed: [worker1.mylabserver.com]
changed: [master.mylabserver.com]
changed: [worker2.mylabserver.com]

TASK [k8s-config : Set SystemdCgroup to true in containerd configuration] *******************************************************************************************************************
ok: [worker1.mylabserver.com]
ok: [worker2.mylabserver.com]
ok: [master.mylabserver.com]

TASK [k8s-config : Restart containerd service] **********************************************************************************************************************************************
changed: [worker1.mylabserver.com]
changed: [master.mylabserver.com]
changed: [worker2.mylabserver.com]

TASK [k8s-config : Enable containerd service] ***********************************************************************************************************************************************
ok: [worker1.mylabserver.com]
ok: [master.mylabserver.com]
ok: [worker2.mylabserver.com]

TASK [k8s-config : Download Kubernetes GPG key] *********************************************************************************************************************************************
[WARNING]: Consider using the get_url or uri module rather than running 'curl'.  If you need to use command because get_url or uri is insufficient you can add 'warn: false' to this command
task or set 'command_warnings=False' in ansible.cfg to get rid of this message.
changed: [master.mylabserver.com]
changed: [worker2.mylabserver.com]
changed: [worker1.mylabserver.com]

TASK [k8s-config : Add Kubernetes repository] ***********************************************************************************************************************************************
ok: [worker1.mylabserver.com]
ok: [master.mylabserver.com]
ok: [worker2.mylabserver.com]

TASK [k8s-config : Update apt repositories] *************************************************************************************************************************************************
changed: [worker1.mylabserver.com]
changed: [worker2.mylabserver.com]
changed: [master.mylabserver.com]

TASK [k8s-config : Install Kubernetes components] *******************************************************************************************************************************************
ok: [worker1.mylabserver.com] => (item=kubeadm)
ok: [master.mylabserver.com] => (item=kubeadm)
ok: [worker2.mylabserver.com] => (item=kubeadm)
ok: [worker1.mylabserver.com] => (item=kubelet)
ok: [master.mylabserver.com] => (item=kubelet)
ok: [worker2.mylabserver.com] => (item=kubelet)
ok: [worker1.mylabserver.com] => (item=kubectl)
ok: [worker2.mylabserver.com] => (item=kubectl)
ok: [master.mylabserver.com] => (item=kubectl)

Initialize the Cluster on the Master Node

root@06432e7f921c:~/k8s-install# ansible-playbook main.yaml --tags master
PLAY [master] *******************************************************************************************************************************************************************************

TASK [master-config : Initialize Kubernetes Cluster] ****************************************************************************************************************************************
changed: [master.mylabserver.com]

TASK [master-config : Deploy Calico network] ************************************************************************************************************************************************
changed: [master.mylabserver.com]

TASK [master-config : Generate and save cluster join command to /joincluster.sh] ************************************************************************************************************
changed: [master.mylabserver.com]

PLAY RECAP **********************************************************************************************************************************************************************************
master.mylabserver.com     : ok=26   changed=12   unreachable=0    failed=0    skipped=1    rescued=0    ignored=0

Join the Worker Nodes to the Cluster

root@06432e7f921c:~/k8s-install# ansible-playbook main.yaml --tags worker

PLAY [worker] *******************************************************************************************************************************************************************************

TASK [worker-config : Copy joincluster script to worker nodes] ******************************************************************************************************************************
ok: [worker1.mylabserver.com]
ok: [worker2.mylabserver.com]

TASK [worker-config : Execute joincluster.sh script] ****************************************************************************************************************************************
changed: [worker1.mylabserver.com]
changed: [worker2.mylabserver.com]

PLAY RECAP **********************************************************************************************************************************************************************************
worker1.mylabserver.com    : ok=2    changed=1    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0
worker2.mylabserver.com    : ok=2    changed=1    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0

Create the .kube directory and copy admin.conf into it

root@06432e7f921c:~/k8s-install# mkdir /root/.kube
root@06432e7f921c:~/k8s-install# cp /etc/kubernetes/admin.conf /root/.kube/config

Let's verify that the cluster is configured properly

root@06432e7f921c:~/k8s-install# kubectl get nodes
NAME                      STATUS   ROLES           AGE   VERSION
master.mylabserver.com    Ready    control-plane   54m   v1.29.3
worker1.mylabserver.com   Ready    <none>          43m   v1.29.3
worker2.mylabserver.com   Ready    <none>          43m   v1.29.3

Conclusion Automating the installation of Kubernetes using Ansible can significantly simplify the process and reduce the chance of errors. With Ansible playbooks, we can quickly provision and configure Kubernetes clusters on any infrastructure, whether it's on-premises or in the cloud.

In this blog post, we've only scratched the surface of what's possible with Ansible. As you become more familiar with Ansible and Kubernetes, you can further customize your playbooks to meet your specific requirements and integrate additional automation tasks.

Happy automating!

Understanding Kubernetes Security with KubeArmor: Enhancing Container Protection

Understanding Kubernetes Security with KubeArmor: Enhancing Container Protection

Introduction to KubeArmor KubeArmor is an open-source runtime security solution designed specifically for Kubernetes environments and hosted as a CNCF sandbox project. Originally developed by AccuKnox, KubeArmor provides fine-grained, container-aware security policies to enforce access control, system-call filtering, and network policies.

Architecture Overview

enter image description here

The Need for Kubernetes Security As organizations increasingly adopt Kubernetes for their containerized workloads, ensuring the security of these environments becomes crucial. Traditional security measures are often inadequate in containerized environments due to their dynamic nature and the large attack surface they present. KubeArmor addresses these challenges by providing granular security controls tailored to Kubernetes deployments.

Key Features of KubeArmor

Container-Aware Security Policies: KubeArmor allows administrators to define security policies at the container level, ensuring that each workload operates within its designated security context.

System Call Filtering: By intercepting and filtering system calls made by containers, KubeArmor can prevent unauthorized actions and enforce security policies in real-time.

Network Policies: KubeArmor enables administrators to define network policies to control inbound and outbound traffic between containers, helping to prevent lateral movement and unauthorized access.

Audit Logging: KubeArmor logs all security-related events, providing administrators with visibility into container activities and potential security threats.

How KubeArmor Works KubeArmor operates by deploying an agent as a DaemonSet within the Kubernetes cluster. This agent intercepts system calls made by containers and enforces security policies defined by the administrator. By leveraging the Linux Security Module (LSM) framework, KubeArmor integrates seamlessly with the underlying operating system, ensuring minimal performance overhead.
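
Before applying policies, it can be useful to confirm what KubeArmor detects on each node. The karmor CLI installed in the next step includes a probe subcommand for this (the exact output depends on your cluster and client version):

# Reports the detected LSM enforcer, container runtime, and armored pods
karmor probe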

Install kubearmor

root@master:~# curl -sfL http://get.kubearmor.io/ | sudo sh -s -- -b /usr/local/bin
kubearmor/kubearmor-client info checking GitHub for latest tag
kubearmor/kubearmor-client info found version: 1.2.1 for v1.2.1/linux/amd64
kubearmor/kubearmor-client info installed /usr/local/bin/karmor
root@master:~# karmor install
๐Ÿ›ก       Installed helm release : kubearmor-operator
๐Ÿ˜„      KubeArmorConfig created
โŒš๏ธ      This may take a couple of minutes
โ„น๏ธ       Waiting for KubeArmor Snitch to run: โ€”

Verify the Installation

root@master:~# kubectl get all -n kubearmor
NAME                                            READY   STATUS    RESTARTS   AGE
pod/kubearmor-apparmor-containerd-98c2c-6k2mc   1/1     Running   0          44m
pod/kubearmor-apparmor-containerd-98c2c-hhgww   1/1     Running   0          3m19s
pod/kubearmor-apparmor-containerd-98c2c-m2974   1/1     Running   0          45m
pod/kubearmor-controller-575b4b46c5-jnppw       2/2     Running   0          3m10s
pod/kubearmor-operator-5bcfb76b4f-8kkwp         1/1     Running   0          47m
pod/kubearmor-relay-6b59fbf77f-wqp7r            1/1     Running   0          46m

NAME                                           TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)     AGE
service/kubearmor                              ClusterIP   10.97.110.240    <none>        32767/TCP   46m
service/kubearmor-controller-metrics-service   ClusterIP   10.103.151.164   <none>        8443/TCP    46m
service/kubearmor-controller-webhook-service   ClusterIP   10.111.41.204    <none>        443/TCP     46m

NAME                                                 DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR                                                                                                                                                                            AGE
daemonset.apps/kubearmor-apparmor-containerd-98c2c   3         3         3       3            3           kubearmor.io/btf=yes,kubearmor.io/enforcer=apparmor,kubearmor.io/runtime=containerd,kubearmor.io/seccomp=yes,kubearmor.io/socket=run_containerd_containerd.sock,kubernetes.io/os=linux   45m

NAME                                   READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/kubearmor-controller   1/1     1            1           46m
deployment.apps/kubearmor-operator     1/1     1            1           47m
deployment.apps/kubearmor-relay        1/1     1            1           46m

NAME                                              DESIRED   CURRENT   READY   AGE
replicaset.apps/kubearmor-controller-575b4b46c5   1         1         1       3m10s
replicaset.apps/kubearmor-controller-64b5b9d54b   0         0         0       46m
replicaset.apps/kubearmor-operator-5bcfb76b4f     1         1         1       47m
replicaset.apps/kubearmor-relay-6b59fbf77f        1         1         1       46m

Deploy nginx app in default namespace

root@master:~# kubectl create deployment nginx --image=nginx
deployment.apps/nginx created

root@master:~# kubectl get pods
NAME                     READY   STATUS    RESTARTS   AGE
nginx-748c667d99-qjjxn   1/1     Running   0          34s
root@master:~# POD=$(kubectl get pod -l app=nginx -o name)
root@master:~# echo $POD
pod/nginx-748c667d99-qjjxn

Use Case: Deny Installation of Package Management Tools

Create a policy that blocks execution of apt and apt-get inside the nginx pods:

root@master:~# cat <<EOF | kubectl apply -f -
apiVersion: security.kubearmor.com/v1
kind: KubeArmorPolicy
metadata:
  name: block-pkg-mgmt-tools-exec
spec:
  selector:
    matchLabels:
      app: nginx
  process:
    matchPaths:
    - path: /usr/bin/apt
    - path: /usr/bin/apt-get
  action:
    Block
EOF
kubearmorpolicy.security.kubearmor.com/block-pkg-mgmt-tools-exec created

Let's verify by trying to install a package

root@master:~# kubectl exec -it $POD -- bash -c "apt update && apt install masscan"
bash: line 1: /usr/bin/apt: Permission denied
command terminated with exit code 126
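
The denied execution also surfaces as an alert. One way to watch KubeArmor alerts while reproducing the denial is the karmor CLI (flag names may vary slightly between client versions):

# Stream policy-violation alerts for the default namespace; press Ctrl+C to stop
karmor logs -n default --json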

Use Case: Deny Access to the Service Account Token

Default service account tokens are mounted into every pod even when the application does not use them. To mitigate the risk this poses, you can block access to the token mount. Policy to restrict access:

root@master:~# cat <<EOF | kubectl apply -f -
apiVersion: security.kubearmor.com/v1
kind: KubeArmorPolicy
metadata:
  name: block-service-access-token-access
spec:
  selector:
    matchLabels:
      app: nginx
  file:
    matchDirectories:
    - dir: /run/secrets/kubernetes.io/serviceaccount/
      recursive: true
  action:
    Block
EOF
kubearmorpolicy.security.kubearmor.com/block-service-access-token-access created

Verify the service token access

root@master:~# kubectl exec -it $POD -- bash
root@nginx-748c667d99-qjjxn:/# curl https://$KUBERNETES_PORT_443_TCP_ADDR/api --insecure --header "Authorization: Bearer $(cat /run/secrets/kubernetes.io/serviceaccount/token)"
cat: /run/secrets/kubernetes.io/serviceaccount/token: Permission denied
{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {},
  "status": "Failure",
  "message": "forbidden: User \"system:anonymous\" cannot get path \"/api\"",
  "reason": "Forbidden",
  "details": {},
  "code": 403

Use Case: Least Permission Policy

root@master:~# kubectl annotate ns default kubearmor-file-posture=block --overwrite
namespace/default annotated
root@master:~# cat <<EOF | kubectl apply -f -
apiVersion: security.kubearmor.com/v1
kind: KubeArmorPolicy
metadata:
  name: only-allow-nginx-exec
spec:
  selector:
    matchLabels:
      app: nginx
  file:
    matchDirectories:
    - dir: /
      recursive: true
  process:
    matchPaths:
    - path: /usr/sbin/nginx
    - path: /bin/bash
  action:
    Allow
EOF
kubearmorpolicy.security.kubearmor.com/only-allow-nginx-exec created

Verify the Access

root@master:~# kubectl exec -it $POD -- bash -c "chroot"
bash: line 1: /usr/sbin/chroot: Permission denied
command terminated with exit code 126

Benefits of Using KubeArmor

Enhanced Security Posture: By enforcing fine-grained security policies at the container level, KubeArmor reduces the attack surface and mitigates the risk of unauthorized access and data breaches.

Compliance and Auditing: KubeArmor's audit logging capabilities help organizations demonstrate compliance with regulatory requirements and industry standards by providing detailed records of security-related events.

Operational Efficiency: With KubeArmor, administrators can centrally manage security policies across their Kubernetes clusters, streamlining security operations and reducing manual overhead.

Conclusion In today's threat landscape, securing Kubernetes environments is paramount for organizations seeking to harness the benefits of containerization without compromising on security. KubeArmor offers a robust solution for enhancing Kubernetes security by providing fine-grained security controls tailored to containerized workloads. By leveraging KubeArmor's capabilities, organizations can mitigate security risks, achieve compliance, and ensure the integrity of their Kubernetes deployments.

Simplifying TLS Certificate Management in Kubernetes with Cert-manager and Vault

Simplifying TLS Certificate Management in Kubernetes with Cert-manager and Vault

Introduction

Cert-manager creates TLS certificates for workloads in your Kubernetes or OpenShift cluster and renews the certificates before they expire. Cert-manager can obtain certificates from a variety of certificate authorities, including: Let's Encrypt, HashiCorp Vault, Venafi and private PKI.

In this blog, I am using HashiCorp Vault as the certificate issuer with cert-manager.

Log in to Vault (I am running Vault in my cluster)

root@master:~/vault# kubectl exec -it vault-0 -- /bin/sh

Enable the PKI secrets engine at its default path.

/ $ vault secrets enable pki
Success! Enabled the pki secrets engine at: pki/

Configure the max lease time-to-live

/ $ vault secrets tune -max-lease-ttl=8760h pki
Success! Tuned the secrets engine at: pki/

Generate a self-signed root CA certificate

/ $ vault write pki/root/generate/internal \
>     common_name=arobyte.tech \
>     ttl=8760h
Key              Value
---              -----
certificate      -----BEGIN CERTIFICATE-----
MIIDODCCAiCgAwIBAgIUekLUNWVLV3am8DTRk33Y9KX0t8kwDQYJKoZIhvcNAQEL
BQAwFzEVMBMGA1UEAxMMYXJvYnl0ZS50ZWNoMB4XDTI0MDMxMzE1NDU0NVoXDTI1
MDMxMzE1NDYxNVowFzEVMBMGA1UEAxMMYXJvYnl0ZS50ZWNoMIIBIjANBgkqhkiG

-----END CERTIFICATE-----
expiration       1741880775
issuing_ca       -----BEGIN CERTIFICATE-----
MIIDODCCAiCgAwIBAgIUekLUNWVLV3am8DTRk33Y9KX0t8kwDQYJKoZIhvcNAQEL
BQAwFzEVMBMGA1UEAxMMYXJvYnl0ZS50ZWNoMB4XDTI0MDMxMzE1NDU0NVoXDTI1
zw4bj+X2hQyMqu5QHdFF4n58s9I9M5oq9IIBlMqxQqQdN79UirJc/LTk71roOKi7
PD1A3HmuNnWt04+0f8maI9txbUToWq15t8d5zBoM85sF2AGc04OmQmXvL+cGqImJ
9+RIo+iKIJnLiAMt
-----END CERTIFICATE-----
serial_number    7a:42:d4:35:65:4b:57:76:a6:f0:34:d1:93:7d:d8:f4:a5:f4:b7:c9

Configure the PKI secrets engine certificate issuing and certificate revocation list (CRL) endpoints to use the Vault service in the vault namespace.

/ $ vault write pki/config/urls \
>     issuing_certificates="http://vault.vault.svc.cluster.local:8200/v1/pki/ca" \
>     crl_distribution_points="http://vault.vault.svc.cluster.local:8200/v1/pki/crl"
Success! Data written to: pki/config/urls

Configure a role named arobyte-role-tech that enables the creation of certificates for the arobyte.tech domain and any of its subdomains.

/ $ vault write pki/roles/arobyte-role-tech \
>     allowed_domains=arobyte.tech \
>     allow_subdomains=true \
>     max_ttl=72h
Success! Data written to: pki/roles/arobyte-role-tech
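
Before wiring this into cert-manager, the role can be exercised directly from the Vault CLI to confirm it issues certificates for the domain (a quick sanity check; the common name below is just an example):

vault write pki/issue/arobyte-role-tech \
    common_name=test.arobyte.tech \
    ttl=24h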

Create a policy named pki that enables read access to the PKI secrets engine paths.

/ $ vault policy write pki - <<EOF
> path "pki*"                          { capabilities = ["read", "list"] }
> path "pki/sign/arobyte-role-tech"    { capabilities = ["create", "update"] }
> path "pki/issue/arobyte-role-tech"   { capabilities = ["create"] }
> EOF
Success! Uploaded policy: pki

Enable the Kubernetes authentication method.

/ $ vault auth enable kubernetes
Success! Enabled kubernetes auth method at: kubernetes/

Configure the Kubernetes authentication method to use the location of the Kubernetes API.

/ $ vault write auth/kubernetes/config \
>     kubernetes_host="https://$KUBERNETES_PORT_443_TCP_ADDR:443"
Success! Data written to: auth/kubernetes/config

Create a Kubernetes authentication role named issuer that binds the pki policy to a Kubernetes service account named issuer.

/ $ vault write auth/kubernetes/role/issuer \
>     bound_service_account_names=issuer \
>     bound_service_account_namespaces=default \
>     policies=pki \
>     ttl=20m
Success! Data written to: auth/kubernetes/role/issuer

Let's Install cert-manager

root@master:~# kubectl create namespace cert-manager

Install the cert-manager v1.12.3 CRDs and add the Jetstack Helm repository.

root@master:~# kubectl apply --validate=false -f https://github.com/jetstack/cert-manager/releases/download/v1.12.3/cert-manager.crds.yaml
customresourcedefinition.apiextensions.k8s.io/certificaterequests.cert-manager.io created
customresourcedefinition.apiextensions.k8s.io/certificates.cert-manager.io created
customresourcedefinition.apiextensions.k8s.io/challenges.acme.cert-manager.io created
customresourcedefinition.apiextensions.k8s.io/clusterissuers.cert-manager.io created
customresourcedefinition.apiextensions.k8s.io/issuers.cert-manager.io created
customresourcedefinition.apiextensions.k8s.io/orders.acme.cert-manager.io created
root@master:~# helm repo add jetstack https://charts.jetstack.io
"jetstack" has been added to your repositories
root@master:~# helm repo update
...Successfully got an update from the "jetstack" chart repository
Update Complete. โŽˆHappy Helming!โŽˆ

Install the cert-manager

root@master:~# helm install cert-manager \
    --namespace cert-manager \
    --version v1.12.3 \
  jetstack/cert-manager

NAME: cert-manager
LAST DEPLOYED: Wed Mar 13 21:26:19 2024
NAMESPACE: cert-manager
STATUS: deployed

Check the status

root@master:~# kubectl get pods --namespace cert-manager
NAME                                       READY   STATUS    RESTARTS   AGE
cert-manager-65dfbdf7d6-qp5fk              1/1     Running   0          3m44s
cert-manager-cainjector-79f5dbffcf-lh4d6   1/1     Running   0          3m44s
cert-manager-webhook-77b984cc67-8nxhh      1/1     Running   0          3m44s

Create a service account named issuer within the default namespace.

root@master:~# kubectl create serviceaccount issuer
serviceaccount/issuer created

Create a secret definition

root@master:~# cat >> issuer-secret.yaml <<EOF
apiVersion: v1
kind: Secret
metadata:
  name: issuer-token
  annotations:
    kubernetes.io/service-account.name: issuer
type: kubernetes.io/service-account-token
EOF
root@master:~# kubectl apply -f issuer-secret.yaml
secret/issuer-token created
root@master:~# kubectl get secrets
NAME                 TYPE                                  DATA   AGE
issuer-token   kubernetes.io/service-account-token   3      7s

Define an Issuer, named vault-issuer, that sets Vault as a certificate issuer.

root@master:~# cat > vault-issuer.yaml <<EOF
apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
  name: vault-issuer
  namespace: default
spec:
  vault:
    server: http://vault.vault.svc.cluster.local:8200
    path: pki/sign/arobyte-role-tech
    auth:
      kubernetes:
        mountPath: /v1/auth/kubernetes
        role: issuer
        secretRef:
          name: issuer-token
          key: token
EOF
root@master:~# kubectl apply --filename vault-issuer.yaml
issuer.cert-manager.io/vault-issuer created

Create the arobyte-tech certificate

root@master:~# cat arobyte-tech-cert.yaml
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: arobyte-tech
  namespace: default
spec:
  secretName: arobyte-role-tech
  issuerRef:
    name: vault-issuer
  commonName: www.arobyte.tech
  dnsNames:
  - www.arobyte.tech
root@master:~# kubectl apply --filename arobyte-tech-cert.yaml

View the details of the arobyte-tech certificate

root@master:~# kubectl describe certificate.cert-manager arobyte-tech -n default
Name:         arobyte-tech
Namespace:    default
Labels:       <none>
Annotations:  <none>
API Version:  cert-manager.io/v1
Kind:         Certificate


Events:
  Type    Reason     Age    From                                       Message
  ----    ------     ----   ----                                       -------
  Normal  Issuing    2m55s  cert-manager-certificates-trigger          Issuing certificate as Secret does not exist
  Normal  Generated  2m55s  cert-manager-certificates-key-manager      Stored new private key in temporary Secret resource "arobyte-tech-gwkjp"
  Normal  Requested  2m54s  cert-manager-certificates-request-manager  Created new CertificateRequest resource "arobyte-tech-fdvjr"
  Normal  Issuing    27s    cert-manager-certificates-issuing          The certificate has been successfully issued


The certificate reports that it has been issued successfully.

Verify the Certificates

root@master:~# kubectl get certificate
NAME           READY   SECRET              AGE
arobyte-tech   True    arobyte-role-tech   95m
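
To inspect the issued certificate itself, the TLS secret can be decoded and examined with openssl (a quick check using the secret created above):

kubectl get secret arobyte-role-tech -o jsonpath='{.data.tls\.crt}' \
  | base64 -d | openssl x509 -noout -subject -issuer -dates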


root@master:~# kubectl describe secrets arobyte-role-tech
Name:         arobyte-role-tech
Namespace:    default
Labels:       controller.cert-manager.io/fao=true
Annotations:  cert-manager.io/alt-names: www.arobyte.tech
              cert-manager.io/certificate-name: arobyte-tech
              cert-manager.io/common-name: www.arobyte.tech
              cert-manager.io/ip-sans:
              cert-manager.io/issuer-group:
              cert-manager.io/issuer-kind: Issuer
              cert-manager.io/issuer-name: vault-issuer
              cert-manager.io/uri-sans:

Type:  kubernetes.io/tls

Data
====
tls.key:  1675 bytes
ca.crt:   1176 bytes
tls.crt:  1419 bytes

root@master:~# kubectl describe certificate arobyte-tech
Name:         arobyte-tech
Namespace:    default
Labels:       <none>
Annotations:  <none>
API Version:  cert-manager.io/v1
Kind:         Certificate
Metadata:

Spec:
  Common Name:  www.arobyte.tech
  Dns Names:
    www.arobyte.tech
  Issuer Ref:
    Name:       vault-issuer
  Secret Name:  arobyte-role-tech
Status:
  Conditions:
    Last Transition Time:  2024-03-13T17:43:48Z
    Message:               Certificate is up to date and has not expired
    Observed Generation:   1
    Reason:                Ready
    Status:                True
    Type:                  Ready
  Not After:               2024-03-16T17:43:47Z
  Not Before:              2024-03-13T17:43:17Z
  Renewal Time:            2024-03-15T17:43:37Z
  Revision:                1
Events:                    <none>
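
With the certificate stored in the arobyte-role-tech secret, it can be consumed like any other TLS secret, for example from an Ingress. The snippet below is a hypothetical example: the Ingress name, ingress class, and backend service are placeholders, not part of this setup.

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: arobyte-web                   # hypothetical Ingress name
  namespace: default
spec:
  ingressClassName: nginx             # placeholder ingress class
  tls:
  - hosts:
    - www.arobyte.tech
    secretName: arobyte-role-tech     # secret issued by cert-manager above
  rules:
  - host: www.arobyte.tech
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: web                 # placeholder backend service
            port:
              number: 80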


Streamlining Kubernetes Configuration: A GitLab CI/CD Guide with Config-lint Validation

Streamlining Kubernetes Configuration: A GitLab CI/CD Guide with Config-lint Validation

Introduction

Config-lint is a powerful command-line tool designed to streamline the validation of Kubernetes configuration files. By leveraging rules specified in YAML, Config-lint ensures adherence to best practices, security standards, and custom policies. This tool is indispensable for integrating into Continuous Integration and Continuous Deployment (CI/CD) pipelines, allowing seamless validation of configuration changes before deployment.

Integrating Config-lint into CI/CD Pipelines

One of the key benefits of Config-lint is its seamless integration into CI/CD pipelines. By incorporating Config-lint as a step in your pipeline, you can automatically validate Kubernetes configuration files before deployment. This ensures that only compliant configurations are promoted to production environments, reducing the risk of misconfigurations and potential downtime.

Custom Rules with YAML

Config-lint allows users to define custom rules using YAML configuration files. This flexibility enables organizations to enforce specific standards and policies tailored to their environment. Whether it's enforcing naming conventions, resource limits, or security policies, Config-lint's YAML-based rules empower teams to maintain consistency and compliance across Kubernetes configurations.

Validating Helm Charts

In addition to standalone configuration files, Config-lint can also validate Helm charts. Helm is a popular package manager for Kubernetes, and ensuring the integrity of Helm charts is crucial for smooth deployments. With Config-lint, teams can validate Helm charts against predefined rules, ensuring that charts adhere to best practices and organizational standards.

Config-lint simplifies Kubernetes configuration validation by providing a flexible and intuitive toolset. By integrating Config-lint into CI/CD pipelines and leveraging custom YAML rules, organizations can ensure the reliability, security, and compliance of their Kubernetes deployments. With support for Helm charts validation, Config-lint offers a comprehensive solution for maintaining consistency and best practices across Kubernetes environments. Start using Config-lint today to streamline your Kubernetes configuration validation process and elevate your CI/CD workflows to the next level of efficiency and reliability.

Integrating Config-lint into GitLab CI/CD

1) Dockerfile for creating the image that will be used in the pipeline

root@master:~# cat Dockerfile
FROM ubuntu:latest
LABEL maintainer="omvedi25@gmail.com"
ADD config-lint /usr/local/bin/
ADD helm /usr/local/bin/

2) Build the image and push it to the registry

root@master:~# docker build -t omvedi25/config-lint:v1.1 .

root@master:~# docker push omvedi25/config-lint:v1.1

3) Create a .gitlab-ci.yml pipeline (a minimal lint job is sketched below the screenshot)

enter image description here
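
The pipeline in the screenshot wires Config-lint into the flow. As an illustration, a lint job along these lines could be used; this is a sketch only, and the stage name, the assumption that the job runs from the chart root, and the helm template arguments are mine, not taken from the screenshot:

config_lint:
  stage: validation
  image: omvedi25/config-lint:v1.1
  script:
    # Render the chart into plain Kubernetes manifests, then validate them
    # against the custom rules; config-lint exits non-zero on FAILURE rules.
    - helm template . > rendered.yaml
    - config-lint -rules rules.yaml rendered.yaml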

4) Create a project for the Helm chart

enter image description here

5) Let's create a rules.yaml file containing the rules that will be validated before the chart is pushed to ChartMuseum.

version: 1
description: Rules for Kubernetes spec files
type: Kubernetes
files:
  - "*.yaml"
rules:
  - id: POD_RESOURCE_REQUESTS_LIMITS
    severity: FAILURE
    message: Containers in Pod must specify both resource requests and limits
    resource: Pod
    assertions:
      - key: spec.containers[*].resources.requests
        op: notPresent
      - key: spec.containers[*].resources.limits
        op: notPresent
    match: any
    tags:
      - pod

  - id: DEPLOYMENT_RESOURCE_REQUESTS_LIMITS
    severity: FAILURE
    message: Containers in Deployment must specify both resource requests and limits
    resource: Deployment
    assertions:
      - key: spec.template.spec.containers[*].resources.requests
        op: notPresent
      - key: spec.template.spec.containers[*].resources.limits
        op: notPresent
    match: all
    tags:
      - deployment

The rule above checks whether resource requests and limits are specified for Deployments in the chart.
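
For reference, a container spec along these lines would satisfy the rule (an illustrative sketch, not taken from the chart in the screenshots):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: sample-app          # illustrative name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: sample-app
  template:
    metadata:
      labels:
        app: sample-app
    spec:
      containers:
      - name: sample-app
        image: nginx:latest
        resources:
          requests:         # both requests and limits are present,
            cpu: 100m       # so DEPLOYMENT_RESOURCE_REQUESTS_LIMITS passes
            memory: 128Mi
          limits:
            cpu: 250m
            memory: 256Mi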

6) Create the .gitlab-ci.yml to run the validation on the charts

---
include:
  - project: 'guilds1/cloud-native-guild/helm/tooling/helm-pipelines'
    file: '/.config-lint.yaml'
    ref: main
  - project: 'guilds1/cloud-native-guild/helm/tooling/helm-pipelines'
    file: '/.helm.yaml'
  - project: 'guilds1/cloud-native-guild/helm/tooling/helm-pipelines'
    file: '/.kind.yaml'
    ref: main

variables:
  CHART: ${CI_PROJECT_NAME}
  IMAGE_HELM_CHART_LINT: "quay.io/helmpack/chart-testing:v3.3.1"
  IMAGE_TEST_DOCS: "renaultdigital/helm-docs:v1.5.0"

stages:
  - pretest
  - validation
  - lint
  - test
  - build
  - make_release
  - publish
  - integration

7) Let's run the pipeline and validate the rules.

enter image description here

**We can see the rules are working as expected. Custom rules can be written as needed to validate that charts include the mandatory options.**

Other Examples

# wget https://raw.githubusercontent.com/stelligent/config-lint/master/example-files/rules/kubernetes.yml

# cat kubernetes.yml

Maximizing CI/CD Efficiency: A Guide to GitLab Runner Setup as a Self-hosted Runner on Kubernetes

Maximizing CI/CD Efficiency: A Guide to GitLab Runner Setup as a Self-hosted Runner on Kubernetes

Introduction

enter image description here

GitLab Runner is an open-source project that works in conjunction with GitLab CI/CD pipelines to automate the process of building, testing, and deploying software. It can run various types of jobs, including builds, tests, and deployments, based on the instructions provided in the .gitlab-ci.yml configuration file.

Why Use GitLab Runner on Kubernetes?

Scalability: Kubernetes allows for easy scaling of resources. GitLab Runner deployments can dynamically scale based on workload demands, ensuring optimal resource utilization.

Isolation: Kubernetes provides container orchestration, allowing GitLab Runner jobs to run in isolated environments (pods). This isolation ensures that jobs do not interfere with each other and provides security benefits.

Resource Efficiency: GitLab Runner on Kubernetes can efficiently utilize cluster resources by scheduling jobs on available nodes, thereby maximizing resource utilization and minimizing idle capacity.

Consistency: Running GitLab Runners on Kubernetes ensures consistency across different environments, whether it's development, testing, or production. The same Kubernetes environment can be used to run CI/CD pipelines consistently.

Key Components

GitLab Runner: The agent responsible for executing CI/CD jobs defined in .gitlab-ci.yml files. It interacts with GitLab CI/CD and Kubernetes API to schedule and run jobs in Kubernetes pods.

Kubernetes: An open-source container orchestration platform that automates the deployment, scaling, and management of containerized applications. GitLab Runner utilizes Kubernetes to manage the lifecycle of CI/CD job pods.

Helm: Helm is a package manager for Kubernetes that allows you to define, install, and manage applications on Kubernetes. GitLab Runner can be deployed on Kubernetes using Helm charts provided by GitLab.

Install GitLab Runner on Kubernetes

1- Add the helm repo

root@master:~# helm repo add gitlab https://charts.gitlab.io
"gitlab" already exists with the same configuration, skipping

2- Update the repo

root@master:~# helm repo update gitlab
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "gitlab" chart repository
Update Complete. โŽˆHappy Helming!โŽˆ

3- Create a values.yaml file

root@master:~# cat values.yaml
gitlabUrl: https://gitlab.com/

runnerRegistrationToken: "gitlab-runner-token"

concurrent: 10

checkInterval: 30

rbac:
  create: true
  rules:
    - apiGroups: [""]
      resources: ["pods"]
      verbs: ["list", "get", "watch", "create", "delete"]
    - apiGroups: [""]
      resources: ["pods/exec"]
      verbs: ["create"]
    - apiGroups: [""]
      resources: ["pods/log"]
      verbs: ["get"]
    - apiGroups: [""]
      resources: ["pods/attach"]
      verbs: ["list", "get", "create", "delete", "update"]
    - apiGroups: [""]
      resources: ["secrets"]
      verbs: ["list", "get", "create", "delete", "update"]
    - apiGroups: [""]
      resources: ["configmaps"]
      verbs: ["list", "get", "create", "delete", "update"]

runners:
  privileged: true

  config: |
    [[runners]]
      [runners.kubernetes]
        namespace = "gitlab-runner"
        tls_verify = false
        image = "docker:19"
        privileged = false

4- Create the namespace and deploy the helm

root@master:~# kubectl create ns gitlab-runner

root@master:~# helm install gitlab-runner gitlab/gitlab-runner -f values.yaml --namespace gitlab-runner
NAME: gitlab-runner
LAST DEPLOYED: Mon Mar  4 22:09:02 2024
NAMESPACE: gitlab-runner
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
Your GitLab Runner should now be registered against the GitLab instance reachable at: "https://gitlab.com/"

5- Verify the runner

root@master:~# kubectl -n gitlab-runner get pods
NAME                             READY   STATUS    RESTARTS   AGE
gitlab-runner-7b8ff76bff-mptdc   1/1     Running   0          39s

Log in to https://gitlab.com and verify the runner registration

enter image description here

Create a project in GitLab

enter image description here

Create a .gitlab-ci.yml and use kubernetes as the tag so the pipeline runs on the Kubernetes runner (a minimal sketch follows the screenshot)

enter image description here
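
The .gitlab-ci.yml in the screenshot is a simple job. A minimal sketch along these lines routes the job to the Kubernetes runner; the job name, image, and script are placeholders, and the tag must match the tag the runner was registered with:

test-job:                # placeholder job name
  image: alpine:latest   # placeholder image
  tags:
    - kubernetes         # must match the runner's registered tag
  script:
    - echo "Running on the self-hosted Kubernetes runner"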

Run the Pipeline

Run the following command to verify that a new runner pod starts in the gitlab-runner namespace

root@master:~# kubectl -n gitlab-runner get pods
NAME                                                      READY   STATUS     RESTARTS   AGE
gitlab-runner-7b8ff76bff-mptdc                            1/1     Running    0          18m
runner-kea6jzghg-project-45006412-concurrent-0-1heswzap   0/2     Init:0/1   0          56s

Verify the pipeline in the GitLab UI

enter image description here

We are now able to run our pipeline jobs on our self-hosted runner in Kubernetes.