Strengthening User Authentication: A Guide to Implementing HashiCorp Vault To Authenticate Users On Your Website

In today's digital age, securing user authentication on websites is paramount. With cyber threats looming large, businesses need robust solutions to protect sensitive user data. HashiCorp Vault emerges as a potent tool in this realm, offering a secure and scalable platform for managing secrets and protecting cryptographic assets. In this blog, we'll delve into the significance of user authentication and explore how HashiCorp Vault can fortify your website's security infrastructure.

Understanding the Importance of User Authentication

User authentication serves as the first line of defense against unauthorized access to web resources. Whether it's accessing confidential information, making transactions, or simply logging into an account, users entrust websites with their credentials. Consequently, any compromise in authentication mechanisms can lead to severe repercussions, including data breaches, financial losses, and damage to reputation.

Introducing HashiCorp Vault

HashiCorp Vault stands out as a comprehensive solution for secret management and data protection. It offers a centralized repository for securely storing and accessing sensitive information such as API keys, passwords, and encryption keys. Vault employs robust encryption and access control mechanisms to safeguard secrets from unauthorized access, ensuring compliance with stringent security standards.

Implementing Authentication with Vault

Secure Secret Storage: Configure Vault to securely store user authentication secrets using its Key-Value and transit secrets engines. Employ encryption to safeguard data in transit and at rest.

Dynamic Secrets: Leverage Vault's dynamic secrets to generate short-lived, dynamically created credentials for user authentication. Implement RBAC to enforce granular access control.

Tokenization and Authentication Methods: Utilize Vault's authentication methods, like LDAP, OAuth, or JWT, to authenticate users against external identity providers. Integrate MFA for added security.

Auditing and Monitoring: Enable Vault's auditing and monitoring features to track authentication activities. Log authentication requests and access attempts for proactive threat detection.

High Availability and Disaster Recovery: Ensure Vault's high availability across multiple data centers or cloud regions. Implement automated backups and disaster recovery procedures for uninterrupted access.

Install Vault On Your Server

root@master:~# wget -O- https://apt.releases.hashicorp.com/gpg | sudo gpg --dearmor -o /usr/share/keyrings/hashicorp-archive-keyring.gpg
echo "deb [signed-by=/usr/share/keyrings/hashicorp-archive-keyring.gpg] https://apt.releases.hashicorp.com $(lsb_release -cs) main" | sudo tee /etc/apt/sources.list.d/hashicorp.list
sudo apt update && sudo apt install vault
--2024-04-19 12:52:47--  https://apt.releases.hashicorp.com/gpg
Resolving apt.releases.hashicorp.com (apt.releases.hashicorp.com)... 18.164.144.19, 18.164.144.67, 18.164.144.105, ...
Connecting to apt.releases.hashicorp.com (apt.releases.hashicorp.com)|18.164.144.19|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 3980 (3.9K) [binary/octet-stream]
Saving to: ‘STDOUT’

-                                                           100%[========================================================================================================================================>]   3.89K

2024-04-19 12:52:52 (1.54 MB/s) - written to stdout [3980/3980]

deb [signed-by=/usr/share/keyrings/hashicorp-archive-keyring.gpg] https://apt.releases.hashicorp.com mantic main
Get:1 https://download.docker.com/linux/ubuntu mantic InRelease [48.8 kB]
Get:2 https://apt.releases.hashicorp.com mantic InRelease [12.9 kB]
Ign:4 https://aquasecurity.github.io/trivy-repo/deb mantic InRelease
Get:5 https://download.docker.com/linux/ubuntu mantic/stable amd64 Packages [11.5 kB]

Start Vault Server

root@master:~# vault server -dev -dev-listen-address=0.0.0.0:8200 &
[1] 7382
root@master:~# ==> Vault server configuration:

             Api Address: http://0.0.0.0:8200
                     Cgo: disabled
         Cluster Address: https://0.0.0.0:8201
              Listener 1: tcp (addr: "0.0.0.0:8200", cluster address: "0.0.0.0:8201", max_request_duration: "1m30s", max_request_size: "33554432", tls: "disabled")
               Log Level: info
                   Mlock: supported: true, enabled: false
                 Storage: inmem
                 Version: Vault v1.2.3

WARNING! dev mode is enabled! In this mode, Vault runs entirely in-memory
and starts unsealed with a single unseal key. The root token is already
authenticated to the CLI, so you can immediately begin using Vault.

You may need to set the following environment variable:

    $ export VAULT_ADDR='http://0.0.0.0:8200'

The unseal key and root token are displayed below in case you want to
seal/unseal the Vault or re-authenticate.

Unseal Key: xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
Root Token: xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx

Development mode should NOT be used in production installations!

Log In to Vault

root@master:~# export VAULT_ADDR='http://0.0.0.0:8200'
root@master:~# vault login
Token (will be hidden):
WARNING! The VAULT_TOKEN environment variable is set! This takes precedence
over the value set by this command. To use the value set by this command,
unset the VAULT_TOKEN environment variable or set it to the token displayed
below.

Success! You are now authenticated. The token information displayed below
is already stored in the token helper. You do NOT need to run "vault login"
again. Future Vault requests will automatically use this token.

Key                  Value
---                  -----
token                xxxxxxxxxxxxxxxxxxxxxxxxx
token_accessor       jXl034q6KU0r4fuWQR32MEcu
token_duration       ∞
token_renewable      false
token_policies       ["root"]
identity_policies    []
policies             ["root"]

Enable a KV Secrets Engine

root@master:~# vault secrets enable -path=web-auth kv
2024-04-19T14:17:44.069+0530 [INFO]  core: successful mount: namespace= path=web-auth/ type=kv
Success! Enabled the kv secrets engine at: web-auth/

Create a Password Hash for the User

root@master:~# echo "amit@test.com" | sha256sum -
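Note that `echo` appends a trailing newline, so `sha256sum` hashes the password string plus `\n`; the Flask code later has to reproduce exactly that. A minimal Python sketch of the equivalent computation (in this demo the password happens to be the same string as the email):

```python
import hashlib

password = "amit@test.com"  # demo password; same string as the email in this walkthrough

# echo "amit@test.com" | sha256sum hashes the string PLUS a trailing newline
with_newline = hashlib.sha256(password.encode() + b"\n").hexdigest()
without_newline = hashlib.sha256(password.encode()).hexdigest()

print(with_newline)  # same digest that `echo ... | sha256sum` prints
print(with_newline != without_newline)  # the newline changes the digest entirely
```

Forgetting the newline on either side is the most common reason the login comparison later fails.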

Push the User and Credentials to Vault and Verify

root@master:~# curl -H "X-Vault-Token: xxxxxxxxxxxxxxx" --request POST -d '{"amit@test.com":"f9ac24d8c6d410619017ccca98ca4544b05a74ef48c29335a958ad25dd642ad6"}' http://192.168.0.114:8200/v1/web-auth/creds

root@master:~# curl -H "X-Vault-Token: xxxxxxxxxxxxxxxxxxx" http://192.168.0.114:8200/v1/web-auth/creds | jq
{
  "request_id": "ff9d26fc-6abd-b585-b908-2535c4525c1c",
  "lease_id": "",
  "renewable": false,
  "lease_duration": 2764800,
  "data": {
    "amit@test.com": "f9ac24d8c6d410619017ccca98ca4544b05a74ef48c29335a958ad25dd642ad6"
  },
  "wrap_info": null,
  "warnings": null,
  "auth": null
}

Create and apply policy

root@master:~# cat policy.hcl
path "web-auth/creds" {
   capabilities = ["read"]
}

root@master:~# vault policy write web-policy policy.hcl
Success! Uploaded policy: web-policy

Create a Token Against the Policy and Use It to Access Credentials

root@master:~# vault token create -policy="web-policy" -format=json | jq -r ".auth.client_token" > read_token
root@master:~# cat read_token
s.QGR5JsrF7toZcOGj4rKClcvT

root@master:~# curl -H "X-Vault-Token: s.QGR5JsrF7toZcOGj4rKClcvT" http://192.168.0.114:8200/v1/web-auth/creds | jq
{
  "request_id": "4761279f-d3e8-3451-64d3-b17e5b5b8be7",
  "lease_id": "",
  "renewable": false,
  "lease_duration": 2764800,
  "data": {
    "amit@test.com": "f9ac24d8c6d410619017ccca98ca4544b05a74ef48c29335a958ad25dd642ad6"
  },
  "wrap_info": null,
  "warnings": null,
  "auth": null
}
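The web application only needs the `data` field of this response; everything else is lease and wrapping metadata. A minimal sketch of pulling the email/hash pair out of the KV response body shown above:

```python
import json

# KV v1 read responses wrap the stored key/value pairs in a "data" field
response_body = '''
{
  "request_id": "4761279f-d3e8-3451-64d3-b17e5b5b8be7",
  "lease_id": "",
  "renewable": false,
  "lease_duration": 2764800,
  "data": {
    "amit@test.com": "f9ac24d8c6d410619017ccca98ca4544b05a74ef48c29335a958ad25dd642ad6"
  },
  "wrap_info": null,
  "warnings": null,
  "auth": null
}
'''

creds = json.loads(response_body)["data"]
for email, password_hash in creds.items():
    print(email, password_hash)
```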

Now Let's Create a Sample Web Application Using Flask

root@master:~# pip install flask
root@master:~# mkdir -p website/templates;cd website
root@master:~/website# cat view.py
from flask import Flask, render_template, request, jsonify
import hashlib
import requests

app = Flask(__name__)

def get_hashed_vault_creds():
    # Read the stored email/hash pair from the Vault KV path web-auth/creds
    url = "http://0.0.0.0:8200/v1/web-auth/creds"
    headers = { 'X-Vault-Token' : 's.QGR5JsrF7toZcOGj4rKClcvT' }

    response = requests.get(url, headers=headers)
    my_json = response.json()
    creds = []
    for key, value in my_json['data'].items():
        creds.append(key)    # email
        creds.append(value)  # SHA-256 hash of the password
    return creds

def acg_login_view(email, password):
    print("login view called")
    # echo appended a trailing newline when the hash was generated, so add b'\n'
    credsHash = hashlib.sha256(password.encode() + b'\n').hexdigest()
    vault_email, vault_hash = get_hashed_vault_creds()[:2]
    # Check both the email and the password hash against what Vault holds
    return email == vault_email and vault_hash == credsHash

@app.route('/', methods=['GET', 'POST'])
def login():
    if request.method == 'POST':
        email = request.form['email']
        password = request.form['password']
        if acg_login_view(email, password):
            return jsonify({'message': 'Login successful'}), 200
        else:
            return jsonify({'message': 'Login failed'}), 401

    return render_template('login.html')

if __name__ == '__main__':
    app.run(debug=True)
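One hardening note (my suggestion, not part of the original post): a plain `==` on hash strings can leak timing information, because the comparison returns early at the first differing character. A minimal sketch of a timing-safe check using `hmac.compare_digest`:

```python
import hashlib
import hmac

def verify_password(stored_hash, password):
    """Compare a submitted password against the SHA-256 hash stored in Vault.

    hmac.compare_digest takes time independent of how many leading
    characters match, unlike a plain string ==.
    """
    # Reproduce the trailing newline that `echo | sha256sum` included
    candidate = hashlib.sha256(password.encode() + b"\n").hexdigest()
    return hmac.compare_digest(stored_hash, candidate)

stored = hashlib.sha256(b"amit@test.com\n").hexdigest()
print(verify_password(stored, "amit@test.com"))   # True
print(verify_password(stored, "wrong-password"))  # False
```

Dropping this into `acg_login_view` in place of the `==` comparison would be a one-line change.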


Create the HTML Page

root@master:~/website# cat templates/login.html
<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="UTF-8">
    <meta name="viewport" content="width=device-width, initial-scale=1.0">
    <title>Login</title>
    <style>
        body {
            font-family: Arial, sans-serif;
            background-color: #f7f7f7;
            margin: 0;
            padding: 0;
            display: flex;
            justify-content: center;
            align-items: center;
            height: 100vh;
        }

        .login-container {
            background-color: #fff;
            padding: 20px;
            border-radius: 10px;
            box-shadow: 0 0 10px rgba(0, 0, 0, 0.1);
            width: 300px;
        }

        .login-container h2 {
            margin-top: 0;
            text-align: center;
            color: #333;
        }

        .login-form label {
            font-weight: bold;
            color: #555;
        }

        .login-form input[type="email"],
        .login-form input[type="password"] {
            width: 100%;
            padding: 10px;
            margin-bottom: 15px;
            border: 1px solid #ccc;
            border-radius: 5px;
            box-sizing: border-box;
            font-size: 16px;
        }

        .login-form input[type="submit"] {
            width: 100%;
            padding: 10px;
            border: none;
            border-radius: 5px;
            background-color: #007bff;
            color: #fff;
            cursor: pointer;
            font-size: 16px;
            transition: background-color 0.3s ease;
        }

        .login-form input[type="submit"]:hover {
            background-color: #0056b3;
        }
    </style>
</head>
<body>
    <div class="login-container">
        <h2>Login</h2>
        <form class="login-form" method="POST">
            <label for="email">Email:</label><br>
            <input type="email" id="email" name="email" required><br><br>
            <label for="password">Password:</label><br>
            <input type="password" id="password" name="password" required><br><br>
            <input type="submit" value="Login">
        </form>
    </div>
</body>
</html>

Start the Server

root@master:~/website# flask run --host=0.0.0.0 --port=5000
 * Serving Flask app 'view.py'
 * Debug mode: off
WARNING: This is a development server. Do not use it in a production deployment. Use a production WSGI server instead.
 * Running on all addresses (0.0.0.0)
 * Running on http://127.0.0.1:5000
 * Running on http://192.168.0.114:5000
Press CTRL+C to quit

Access the Page

Let's Try To Log In With the Wrong Password

Let's Try To Log In With the Correct Password

Conclusion

The steps above show one way to integrate Vault securely into a web service for authentication, ensuring that sensitive credentials are protected and that authentication is performed reliably.

Automating Kubernetes Installation with Ansible

In today's world of containerized applications, Kubernetes has emerged as the de facto standard for container orchestration. Deploying and managing Kubernetes clusters manually can be complex and time-consuming. Thankfully, automation tools like Ansible provide a powerful way to automate the installation and configuration of Kubernetes clusters.

In this blog post, we'll explore how to use Ansible to automate the installation of a Kubernetes cluster. We'll walk through the steps involved in setting up Ansible, creating playbooks, and deploying Kubernetes on a set of target servers.

SETUP

Ansible Node: ansible.mylabserver.com

Master Node: master.mylabserver.com

Worker Nodes: worker1.mylabserver.com, worker2.mylabserver.com

Prerequisite

1) Ansible must be installed on the Ansible (control) node; the managed nodes only need Python

2) Set up passwordless SSH authentication from the Ansible node to the master and worker nodes

Let's Create the Playbooks

Create ansible.cfg

root@06432e7f921c:~#mkdir k8s-install
root@06432e7f921c:~#cd k8s-install
root@06432e7f921c:~/k8s-install#cat ansible.cfg
[defaults]
inventory=./inventory
deprecation_warnings=False

Create Inventory File

root@06432e7f921c:~/k8s-install# cat inventory
[master]
master.mylabserver.com

[worker]
worker1.mylabserver.com
worker2.mylabserver.com

Verify the nodes are accessible from the Ansible node

root@06432e7f921c:~/k8s-install# ansible -i inventory -m ping all
worker1.mylabserver.com | SUCCESS => {
    "ansible_facts": {
        "discovered_interpreter_python": "/usr/bin/python"
    },
    "changed": false,
    "ping": "pong"
}
worker2.mylabserver.com | SUCCESS => {
    "ansible_facts": {
        "discovered_interpreter_python": "/usr/bin/python"
    },
    "changed": false,
    "ping": "pong"
}
master.mylabserver.com | SUCCESS => {
    "ansible_facts": {
        "discovered_interpreter_python": "/usr/bin/python"
    },
    "changed": false,
    "ping": "pong"
}

Create main.yaml file

root@06432e7f921c:~/k8s-install# cat main.yaml
---
- hosts: [master,worker]
  gather_facts: false
  roles:
     - role: k8s-config
       tags: common

- hosts: [master]
  gather_facts: false
  roles:
   - role: master-config
     tags: master

- hosts: [worker]
  gather_facts: false
  roles:
    - role: worker-config
      tags: worker

Create the Roles

root@06432e7f921c:~/k8s-install# ansible-galaxy init k8s-config
root@06432e7f921c:~/k8s-install# ansible-galaxy init master-config
root@06432e7f921c:~/k8s-install# ansible-galaxy init worker-config

Create Playbook for Common Installation

root@06432e7f921c:~/k8s-install# cat roles/k8s-config/tasks/main.yml
---
- name: Disable swap
  command: swapoff -a
  ignore_errors: yes

- name: Remove swap entry from /etc/fstab
  lineinfile:
    path: /etc/fstab
    state: absent
    regexp: '^.*swap.*$'

- name: Stop firewalld service
  service:
    name: ufw
    state: stopped
    enabled: no

- name: Ensure overlay and br_netfilter are present in containerd.conf
  lineinfile:
    path: /etc/modules-load.d/containerd.conf
    line: "{{ item }}"
    create: yes
  loop:
    - overlay
    - br_netfilter

- name: Load Kernel modules
  modprobe:
     name: "{{ item }}"
  loop:
     - overlay
     - br_netfilter

- name: Ensure kernel settings are present in kubernetes.conf
  lineinfile:
    path: /etc/sysctl.d/kubernetes.conf
    line: "{{ item }}"
    create: yes
  loop:
    - "net.bridge.bridge-nf-call-ip6tables = 1"
    - "net.bridge.bridge-nf-call-iptables = 1"
    - "net.ipv4.ip_forward = 1"

- name: Apply kernel settings
  command: sysctl --system

- name: Set DEBIAN_FRONTEND environment variable
  shell:
    cmd: export DEBIAN_FRONTEND=noninteractive

- name: Add Kubernetes GPG key
  apt_key:
     url: "https://packages.cloud.google.com/apt/doc/apt-key.gpg"
     state: present

- name: Update apt repositories
  apt:
   update_cache: yes

- name: Install required packages
  apt:
     name: "{{ item }}"
     state: present
  loop:
    - apt-transport-https
    - ca-certificates
    - curl
    - gnupg
    - lsb-release

- name: Ensure /etc/apt/keyrings directory exists
  file:
    path: /etc/apt/keyrings
    state: directory
    mode: '0755'

- name: Check if Docker repository file exists
  stat:
    path: /etc/apt/sources.list.d/docker.list
  register: docker_repo_file

- name: Add Docker repository using echo
  shell:
    cmd: 'echo "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null'
  when: docker_repo_file.stat.exists == false

- name: Run apt update
  apt:
    update_cache: yes


- name: Install containerd.io package
  apt:
    name: containerd.io
    state: present


# shell (not command) is needed here: the redirect '>' is interpreted by the shell
- name: Generate default containerd configuration
  shell:
     cmd: containerd config default > /etc/containerd/config.toml

- name: Set SystemdCgroup to true in containerd configuration
  replace:
     path: /etc/containerd/config.toml
     regexp: 'SystemdCgroup\s*=\s*false'
     replace: 'SystemdCgroup = true'

- name: Restart containerd service
  service:
    name: containerd
    state: restarted

- name: Enable containerd service
  service:
     name: containerd
     enabled: yes

- name: Download Kubernetes GPG key
  shell:
    cmd: curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.29/deb/Release.key | gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg --yes

- name: Add Kubernetes repository
  copy:
     content: |
        deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.29/deb/ /
     dest: /etc/apt/sources.list.d/kubernetes.list

- name: Update apt repositories
  apt:
    update_cache: yes

- name: Install Kubernetes components
  apt:
    name: "{{ item }}"
    state: present
  loop:
      - kubeadm
      - kubelet
      - kubectl

Create the Playbook to Initialize the Cluster

root@06432e7f921c:~/k8s-install# cat roles/master-config/tasks/main.yml
---
- name: Initialize Kubernetes Cluster
  shell:
     cmd: "kubeadm init --apiserver-advertise-address=172.31.120.93 --pod-network-cidr=192.168.0.0/16 >> /root/kubeinit.log 2>/dev/null"
     creates: /root/.kube/config

- name: Deploy Calico network
  shell:
    cmd: "kubectl --kubeconfig=/etc/kubernetes/admin.conf create -f https://raw.githubusercontent.com/projectcalico/calico/v3.27.0/manifests/tigera-operator.yaml && kubectl --kubeconfig=/etc/kubernetes/admin.conf create -f https://raw.githubusercontent.com/projectcalico/calico/v3.27.0/manifests/custom-resources.yaml"
    creates: /etc/kubernetes/admin.conf

- name: Generate and save cluster join command to /joincluster.sh
  shell:
    cmd: "kubeadm token create --print-join-command > /joincluster.sh"
    creates: /joincluster.sh

Create the Playbook for the Worker Nodes

root@06432e7f921c:~/k8s-install# cat roles/worker-config/tasks/main.yml
---
- name: Copy joincluster script to worker nodes
  copy:
    src: /joincluster.sh
    dest: /joincluster.sh
    mode: 0755

- name: Execute joincluster.sh script
  shell:
     cmd: /joincluster.sh
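A note on the `copy` task above: the `copy` module reads `src` from the Ansible control node, while /joincluster.sh is generated on the master. That works in this lab because the control node can reach the file; if your control node is a separate machine, the script would first have to be pulled over with `fetch`. A hedged sketch, assuming the flat destination layout:

```yaml
# Sketch only: run against the master before the worker play, so that
# /joincluster.sh exists on the control node for the copy task to read.
- hosts: master
  gather_facts: false
  tasks:
    - name: Fetch join script from the master to the control node
      fetch:
        src: /joincluster.sh
        dest: /joincluster.sh
        flat: yes
```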

Install Common Packages on All Three Nodes

root@06432e7f921c:~/k8s-install# ansible-playbook main.yaml --tags common

PLAY [master,worker] ************************************************************************************************************************************************************************

TASK [k8s-config : Disable swap] ************************************************************************************************************************************************************
changed: [worker1.mylabserver.com]
changed: [master.mylabserver.com]
changed: [worker2.mylabserver.com]

TASK [k8s-config : Remove swap entry from /etc/fstab] ***************************************************************************************************************************************
ok: [worker1.mylabserver.com]
ok: [master.mylabserver.com]
ok: [worker2.mylabserver.com]

TASK [k8s-config : Stop firewalld service] **************************************************************************************************************************************************
ok: [worker1.mylabserver.com]
ok: [master.mylabserver.com]
ok: [worker2.mylabserver.com]

TASK [k8s-config : Ensure overlay and br_netfilter are present in containerd.conf] **********************************************************************************************************
ok: [worker1.mylabserver.com] => (item=overlay)
ok: [master.mylabserver.com] => (item=overlay)
ok: [worker2.mylabserver.com] => (item=overlay)
ok: [worker1.mylabserver.com] => (item=br_netfilter)
ok: [master.mylabserver.com] => (item=br_netfilter)
ok: [worker2.mylabserver.com] => (item=br_netfilter)

TASK [k8s-config : Load Kernel modules] *****************************************************************************************************************************************************
ok: [worker1.mylabserver.com] => (item=overlay)
ok: [master.mylabserver.com] => (item=overlay)
ok: [worker2.mylabserver.com] => (item=overlay)
ok: [worker1.mylabserver.com] => (item=br_netfilter)
ok: [master.mylabserver.com] => (item=br_netfilter)
ok: [worker2.mylabserver.com] => (item=br_netfilter)

TASK [k8s-config : Ensure kernel settings are present in kubernetes.conf] *******************************************************************************************************************
ok: [worker1.mylabserver.com] => (item=net.bridge.bridge-nf-call-ip6tables = 1)
ok: [worker2.mylabserver.com] => (item=net.bridge.bridge-nf-call-ip6tables = 1)
ok: [master.mylabserver.com] => (item=net.bridge.bridge-nf-call-ip6tables = 1)
ok: [worker1.mylabserver.com] => (item=net.bridge.bridge-nf-call-iptables = 1)
ok: [worker2.mylabserver.com] => (item=net.bridge.bridge-nf-call-iptables = 1)
ok: [master.mylabserver.com] => (item=net.bridge.bridge-nf-call-iptables = 1)
ok: [worker1.mylabserver.com] => (item=net.ipv4.ip_forward = 1)
ok: [worker2.mylabserver.com] => (item=net.ipv4.ip_forward = 1)
ok: [master.mylabserver.com] => (item=net.ipv4.ip_forward = 1)

TASK [k8s-config : Apply kernel settings] ***************************************************************************************************************************************************
changed: [worker1.mylabserver.com]
changed: [master.mylabserver.com]
changed: [worker2.mylabserver.com]

TASK [k8s-config : Set DEBIAN_FRONTEND environment variable] ********************************************************************************************************************************
changed: [worker1.mylabserver.com]
changed: [master.mylabserver.com]
changed: [worker2.mylabserver.com]

TASK [k8s-config : Add Kubernetes GPG key] **************************************************************************************************************************************************
ok: [worker1.mylabserver.com]
ok: [master.mylabserver.com]
ok: [worker2.mylabserver.com]

TASK [k8s-config : Update apt repositories] *************************************************************************************************************************************************
changed: [worker1.mylabserver.com]
changed: [master.mylabserver.com]
changed: [worker2.mylabserver.com]

TASK [k8s-config : Install required packages] ***********************************************************************************************************************************************
ok: [worker1.mylabserver.com] => (item=apt-transport-https)
ok: [master.mylabserver.com] => (item=apt-transport-https)
ok: [worker2.mylabserver.com] => (item=apt-transport-https)
ok: [worker1.mylabserver.com] => (item=ca-certificates)
ok: [master.mylabserver.com] => (item=ca-certificates)
ok: [worker2.mylabserver.com] => (item=ca-certificates)
ok: [worker1.mylabserver.com] => (item=curl)
ok: [master.mylabserver.com] => (item=curl)
ok: [worker1.mylabserver.com] => (item=gnupg)
ok: [worker2.mylabserver.com] => (item=curl)
ok: [master.mylabserver.com] => (item=gnupg)
ok: [worker1.mylabserver.com] => (item=lsb-release)
ok: [worker2.mylabserver.com] => (item=gnupg)
ok: [master.mylabserver.com] => (item=lsb-release)
ok: [worker2.mylabserver.com] => (item=lsb-release)

TASK [k8s-config : Ensure /etc/apt/keyrings directory exists] *******************************************************************************************************************************
ok: [worker1.mylabserver.com]
ok: [worker2.mylabserver.com]
ok: [master.mylabserver.com]

TASK [k8s-config : Check if Docker repository file exists] **********************************************************************************************************************************
ok: [worker1.mylabserver.com]
ok: [master.mylabserver.com]
ok: [worker2.mylabserver.com]

TASK [k8s-config : Add Docker repository using echo] ****************************************************************************************************************************************
skipping: [master.mylabserver.com]
skipping: [worker2.mylabserver.com]
skipping: [worker1.mylabserver.com]

TASK [k8s-config : Run apt update] **********************************************************************************************************************************************************
changed: [worker1.mylabserver.com]
changed: [worker2.mylabserver.com]
changed: [master.mylabserver.com]

TASK [k8s-config : Install containerd.io package] *******************************************************************************************************************************************
changed: [worker1.mylabserver.com]
changed: [worker2.mylabserver.com]
changed: [master.mylabserver.com]

TASK [k8s-config : Generate default containerd configuration] *******************************************************************************************************************************
changed: [worker1.mylabserver.com]
changed: [master.mylabserver.com]
changed: [worker2.mylabserver.com]

TASK [k8s-config : Set SystemdCgroup to true in containerd configuration] *******************************************************************************************************************
ok: [worker1.mylabserver.com]
ok: [worker2.mylabserver.com]
ok: [master.mylabserver.com]

TASK [k8s-config : Restart containerd service] **********************************************************************************************************************************************
changed: [worker1.mylabserver.com]
changed: [master.mylabserver.com]
changed: [worker2.mylabserver.com]

TASK [k8s-config : Enable containerd service] ***********************************************************************************************************************************************
ok: [worker1.mylabserver.com]
ok: [master.mylabserver.com]
ok: [worker2.mylabserver.com]

TASK [k8s-config : Download Kubernetes GPG key] *********************************************************************************************************************************************
[WARNING]: Consider using the get_url or uri module rather than running 'curl'.  If you need to use command because get_url or uri is insufficient you can add 'warn: false' to this command
task or set 'command_warnings=False' in ansible.cfg to get rid of this message.
changed: [master.mylabserver.com]
changed: [worker2.mylabserver.com]
changed: [worker1.mylabserver.com]

TASK [k8s-config : Add Kubernetes repository] ***********************************************************************************************************************************************
ok: [worker1.mylabserver.com]
ok: [master.mylabserver.com]
ok: [worker2.mylabserver.com]

TASK [k8s-config : Update apt repositories] *************************************************************************************************************************************************
changed: [worker1.mylabserver.com]
changed: [worker2.mylabserver.com]
changed: [master.mylabserver.com]

TASK [k8s-config : Install Kubernetes components] *******************************************************************************************************************************************
ok: [worker1.mylabserver.com] => (item=kubeadm)
ok: [master.mylabserver.com] => (item=kubeadm)
ok: [worker2.mylabserver.com] => (item=kubeadm)
ok: [worker1.mylabserver.com] => (item=kubelet)
ok: [master.mylabserver.com] => (item=kubelet)
ok: [worker2.mylabserver.com] => (item=kubelet)
ok: [worker1.mylabserver.com] => (item=kubectl)
ok: [worker2.mylabserver.com] => (item=kubectl)
ok: [master.mylabserver.com] => (item=kubectl)

Initialize the Cluster on the Master Node

root@06432e7f921c:~/k8s-install# ansible-playbook main.yaml --tags master
PLAY [master] *******************************************************************************************************************************************************************************

TASK [master-config : Initialize Kubernetes Cluster] ****************************************************************************************************************************************
changed: [master.mylabserver.com]

TASK [master-config : Deploy Calico network] ************************************************************************************************************************************************
changed: [master.mylabserver.com]

TASK [master-config : Generate and save cluster join command to /joincluster.sh] ************************************************************************************************************
changed: [master.mylabserver.com]

PLAY RECAP **********************************************************************************************************************************************************************************
master.mylabserver.com     : ok=26   changed=12   unreachable=0    failed=0    skipped=1    rescued=0    ignored=0

Join the Worker Nodes to the Cluster

root@06432e7f921c:~/k8s-install# ansible-playbook main.yaml --tags worker

PLAY [worker] *******************************************************************************************************************************************************************************

TASK [worker-config : Copy joincluster script to worker nodes] ******************************************************************************************************************************
ok: [worker1.mylabserver.com]
ok: [worker2.mylabserver.com]

TASK [worker-config : Execute joincluster.sh script] ****************************************************************************************************************************************
changed: [worker1.mylabserver.com]
changed: [worker2.mylabserver.com]

PLAY RECAP **********************************************************************************************************************************************************************************
worker1.mylabserver.com    : ok=2    changed=1    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0
worker2.mylabserver.com    : ok=2    changed=1    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0

Create a .kube directory and copy the admin kubeconfig into it

root@06432e7f921c:~/k8s-install# mkdir /root/.kube
root@06432e7f921c:~/k8s-install# cp /etc/kubernetes/admin.conf /root/.kube/config

Let's verify the cluster is configured properly

root@06432e7f921c:~/k8s-install# kubectl get nodes
NAME                      STATUS   ROLES           AGE   VERSION
master.mylabserver.com    Ready    control-plane   54m   v1.29.3
worker1.mylabserver.com   Ready    <none>          43m   v1.29.3
worker2.mylabserver.com   Ready    <none>          43m   v1.29.3

Conclusion Automating the installation of Kubernetes using Ansible can significantly simplify the process and reduce the chance of errors. With Ansible playbooks, we can quickly provision and configure Kubernetes clusters on any infrastructure, whether it's on-premises or in the cloud.

In this blog post, we've only scratched the surface of what's possible with Ansible. As you become more familiar with Ansible and Kubernetes, you can further customize your playbooks to meet your specific requirements and integrate additional automation tasks.

Happy automating!

Understanding Kubernetes Security with KubeArmor: Enhancing Container Protection

Introduction to KubeArmor KubeArmor is an open-source security solution designed specifically for Kubernetes environments. Originally developed by AccuKnox and now a CNCF sandbox project, KubeArmor provides fine-grained, container-aware security policies to enforce access control, system call filtering, and network policies.

Architecture Overview

(Architecture diagram: the KubeArmor agent runs as a DaemonSet on each node and enforces policies through the kernel's Linux Security Modules.)

The Need for Kubernetes Security As organizations increasingly adopt Kubernetes for their containerized workloads, ensuring the security of these environments becomes crucial. Traditional security measures are often inadequate in containerized environments due to their dynamic nature and the large attack surface they present. KubeArmor addresses these challenges by providing granular security controls tailored to Kubernetes deployments.

Key Features of KubeArmor

Container-Aware Security Policies: KubeArmor allows administrators to define security policies at the container level, ensuring that each workload operates within its designated security context.

System Call Filtering: By intercepting and filtering system calls made by containers, KubeArmor can prevent unauthorized actions and enforce security policies in real time.

Network Policies: KubeArmor enables administrators to define network policies to control inbound and outbound traffic between containers, helping to prevent lateral movement and unauthorized access.

Audit Logging: KubeArmor logs all security-related events, providing administrators with visibility into container activities and potential security threats.
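As a concrete sketch of the audit capability, a KubeArmorPolicy can use the Audit action to log access without blocking it. The policy below is a hypothetical example (the policy name, the app label, and the watched directory are assumptions for illustration, not taken from this walkthrough):

```yaml
apiVersion: security.kubearmor.com/v1
kind: KubeArmorPolicy
metadata:
  name: audit-etc-access        # hypothetical policy name
spec:
  selector:
    matchLabels:
      app: nginx                # assumed workload label
  file:
    matchDirectories:
    - dir: /etc/                # watch file access under /etc/
      recursive: true
  action:
    Audit                       # log the event, do not block it
```

With Audit instead of Block, matching events show up in KubeArmor's telemetry while the workload keeps running, which is useful for dry-running a policy before enforcing it.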

How KubeArmor Works KubeArmor operates by deploying an agent as a DaemonSet within the Kubernetes cluster. This agent intercepts system calls made by containers and enforces security policies defined by the administrator. By leveraging the Linux Security Module (LSM) framework, KubeArmor integrates seamlessly with the underlying operating system, ensuring minimal performance overhead.
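KubeArmor selects an enforcer based on which LSM the node's kernel exposes (AppArmor, BPF-LSM, or SELinux). As a quick, hedged check you can run on any Linux node (the path is a standard kernel interface, though securityfs may not be mounted in every environment):

```shell
# List the LSMs active in the running kernel; KubeArmor looks for
# apparmor, bpf, or selinux in this list when choosing an enforcer
cat /sys/kernel/security/lsm 2>/dev/null || echo "lsm interface not mounted"
```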

Install KubeArmor

root@master:~# curl -sfL https://get.kubearmor.io/ | sudo sh -s -- -b /usr/local/bin
kubearmor/kubearmor-client info checking GitHub for latest tag
kubearmor/kubearmor-client info found version: 1.2.1 for v1.2.1/linux/amd64
kubearmor/kubearmor-client info installed /usr/local/bin/karmor
root@master:~# karmor install
🛡       Installed helm release : kubearmor-operator
😄      KubeArmorConfig created
⌚️      This may take a couple of minutes
ℹ️       Waiting for KubeArmor Snitch to run: —

Verify the Installation

root@master:~# kubectl get all -n kubearmor
NAME                                            READY   STATUS    RESTARTS   AGE
pod/kubearmor-apparmor-containerd-98c2c-6k2mc   1/1     Running   0          44m
pod/kubearmor-apparmor-containerd-98c2c-hhgww   1/1     Running   0          3m19s
pod/kubearmor-apparmor-containerd-98c2c-m2974   1/1     Running   0          45m
pod/kubearmor-controller-575b4b46c5-jnppw       2/2     Running   0          3m10s
pod/kubearmor-operator-5bcfb76b4f-8kkwp         1/1     Running   0          47m
pod/kubearmor-relay-6b59fbf77f-wqp7r            1/1     Running   0          46m

NAME                                           TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)     AGE
service/kubearmor                              ClusterIP   10.97.110.240    <none>        32767/TCP   46m
service/kubearmor-controller-metrics-service   ClusterIP   10.103.151.164   <none>        8443/TCP    46m
service/kubearmor-controller-webhook-service   ClusterIP   10.111.41.204    <none>        443/TCP     46m

NAME                                                 DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR                                                                                                                                                                            AGE
daemonset.apps/kubearmor-apparmor-containerd-98c2c   3         3         3       3            3           kubearmor.io/btf=yes,kubearmor.io/enforcer=apparmor,kubearmor.io/runtime=containerd,kubearmor.io/seccomp=yes,kubearmor.io/socket=run_containerd_containerd.sock,kubernetes.io/os=linux   45m

NAME                                   READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/kubearmor-controller   1/1     1            1           46m
deployment.apps/kubearmor-operator     1/1     1            1           47m
deployment.apps/kubearmor-relay        1/1     1            1           46m

NAME                                              DESIRED   CURRENT   READY   AGE
replicaset.apps/kubearmor-controller-575b4b46c5   1         1         1       3m10s
replicaset.apps/kubearmor-controller-64b5b9d54b   0         0         0       46m
replicaset.apps/kubearmor-operator-5bcfb76b4f     1         1         1       47m
replicaset.apps/kubearmor-relay-6b59fbf77f        1         1         1       46m

Deploy an nginx app in the default namespace

root@master:~# kubectl create deployment nginx --image=nginx
deployment.apps/nginx created

root@master:~# kubectl get pods
NAME                     READY   STATUS    RESTARTS   AGE
nginx-748c667d99-qjjxn   1/1     Running   0          34s
root@master:~# POD=$(kubectl get pod -l app=nginx -o name)
root@master:~# echo $POD
pod/nginx-748c667d99-qjjxn

Use Case: Deny Execution of Package Management Tools Create a policy that blocks execution of the package managers apt and apt-get inside the nginx pod.

root@master:~# cat <<EOF | kubectl apply -f -
apiVersion: security.kubearmor.com/v1
kind: KubeArmorPolicy
metadata:
  name: block-pkg-mgmt-tools-exec
spec:
  selector:
    matchLabels:
      app: nginx
  process:
    matchPaths:
    - path: /usr/bin/apt
    - path: /usr/bin/apt-get
  action:
    Block
EOF
kubearmorpolicy.security.kubearmor.com/block-pkg-mgmt-tools-exec created

Let's verify by attempting to install a package

root@master:~# kubectl exec -it $POD -- bash -c "apt update && apt install masscan"
bash: line 1: /usr/bin/apt: Permission denied
command terminated with exit code 126
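The exit code 126 above is the shell's standard "found but cannot execute" status; KubeArmor's in-kernel permission denial surfaces to the caller exactly like an ordinary permission error. A small local illustration (using a freshly created non-executable temp file, unrelated to the cluster):

```shell
# Running a file without execute permission yields the same
# "Permission denied" / exit code 126 the blocked apt call showed
tmp=$(mktemp)        # mktemp creates the file with mode 0600 (not executable)
sh -c "$tmp"
echo "exit code: $?" # prints: exit code: 126
rm -f "$tmp"
```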

Use Case: Deny Access to the Service Account Token Kubernetes mounts a default service account token into every pod, even when the application never uses it, so blocking access to the token directory mitigates the risk of credential theft. Policy to restrict access:

root@master:~# cat <<EOF | kubectl apply -f -
apiVersion: security.kubearmor.com/v1
kind: KubeArmorPolicy
metadata:
  name: block-service-access-token-access
spec:
  selector:
    matchLabels:
      app: nginx
  file:
    matchDirectories:
    - dir: /run/secrets/kubernetes.io/serviceaccount/
      recursive: true
  action:
    Block
EOF
kubearmorpolicy.security.kubearmor.com/block-service-access-token-access created

Verify that access to the service account token is blocked

root@master:~# kubectl exec -it $POD -- bash
root@nginx-748c667d99-qjjxn:/# curl https://$KUBERNETES_PORT_443_TCP_ADDR/api --insecure --header "Authorization: Bearer $(cat /run/secrets/kubernetes.io/serviceaccount/token)"
cat: /run/secrets/kubernetes.io/serviceaccount/token: Permission denied
{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {},
  "status": "Failure",
  "message": "forbidden: User \"system:anonymous\" cannot get path \"/api\"",
  "reason": "Forbidden",
  "details": {},
  "code": 403

Use Case: Least-Permission Policy Annotating the namespace with kubearmor-file-posture=block switches its default file posture to block, so the Allow policy below acts as an allowlist: only the listed files and processes remain accessible.

root@master:~# kubectl annotate ns default kubearmor-file-posture=block --overwrite
namespace/default annotated
root@master:~# cat <<EOF | kubectl apply -f -
apiVersion: security.kubearmor.com/v1
kind: KubeArmorPolicy
metadata:
  name: only-allow-nginx-exec
spec:
  selector:
    matchLabels:
      app: nginx
  file:
    matchDirectories:
    - dir: /
      recursive: true
  process:
    matchPaths:
    - path: /usr/sbin/nginx
    - path: /bin/bash
  action:
    Allow
EOF
kubearmorpolicy.security.kubearmor.com/only-allow-nginx-exec created

Verify the Access

root@master:~# kubectl exec -it $POD -- bash -c "chroot"
bash: line 1: /usr/sbin/chroot: Permission denied
command terminated with exit code 126
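The allow policy above grants full read/write access under /. As a hypothetical refinement (the readOnly flag is part of the KubeArmorPolicy file spec; the policy name here is invented), directories can be allowed read-only so processes may read but not modify them:

```yaml
apiVersion: security.kubearmor.com/v1
kind: KubeArmorPolicy
metadata:
  name: only-allow-nginx-exec-ro   # hypothetical variant of the policy above
spec:
  selector:
    matchLabels:
      app: nginx
  file:
    matchDirectories:
    - dir: /
      recursive: true
      readOnly: true               # allow reads, deny writes
  process:
    matchPaths:
    - path: /usr/sbin/nginx
    - path: /bin/bash
  action:
    Allow
```

In practice nginx still needs write access to a few paths (for example its log and temp directories), so a real deployment would add separate writable rules for those rather than applying read-only to the entire filesystem.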

Benefits of Using KubeArmor

Enhanced Security Posture: By enforcing fine-grained security policies at the container level, KubeArmor reduces the attack surface and mitigates the risk of unauthorized access and data breaches.

Compliance and Auditing: KubeArmor's audit logging capabilities help organizations demonstrate compliance with regulatory requirements and industry standards by providing detailed records of security-related events.

Operational Efficiency: With KubeArmor, administrators can centrally manage security policies across their Kubernetes clusters, streamlining security operations and reducing manual overhead.

Conclusion In today's threat landscape, securing Kubernetes environments is paramount for organizations seeking to harness the benefits of containerization without compromising on security. KubeArmor offers a robust solution for enhancing Kubernetes security by providing fine-grained security controls tailored to containerized workloads. By leveraging KubeArmor's capabilities, organizations can mitigate security risks, achieve compliance, and ensure the integrity of their Kubernetes deployments.