Simplifying TLS Certificate Management in Kubernetes with Cert-manager and Vault

Introduction

Cert-manager creates TLS certificates for workloads in your Kubernetes or OpenShift cluster and renews them before they expire. It can obtain certificates from a variety of certificate authorities, including Let's Encrypt, HashiCorp Vault, Venafi, and private PKI.

In this blog, I use HashiCorp Vault as the certificate issuer with cert-manager.

Log in to Vault (I am running Vault in my cluster):

root@master:~/vault# kubectl exec -it vault-0 -- /bin/sh

Enable the PKI secrets engine at its default path.

/ $ vault secrets enable pki
Success! Enabled the pki secrets engine at: pki/

Configure the max lease time-to-live

/ $ vault secrets tune -max-lease-ttl=8760h pki
Success! Tuned the secrets engine at: pki/

Generate a self-signed root certificate

/ $ vault write pki/root/generate/internal \
>     common_name=arobyte.tech \
>     ttl=8760h
Key              Value
---              -----
certificate      -----BEGIN CERTIFICATE-----
MIIDODCCAiCgAwIBAgIUekLUNWVLV3am8DTRk33Y9KX0t8kwDQYJKoZIhvcNAQEL
BQAwFzEVMBMGA1UEAxMMYXJvYnl0ZS50ZWNoMB4XDTI0MDMxMzE1NDU0NVoXDTI1
MDMxMzE1NDYxNVowFzEVMBMGA1UEAxMMYXJvYnl0ZS50ZWNoMIIBIjANBgkqhkiG

-----END CERTIFICATE-----
expiration       1741880775
issuing_ca       -----BEGIN CERTIFICATE-----
MIIDODCCAiCgAwIBAgIUekLUNWVLV3am8DTRk33Y9KX0t8kwDQYJKoZIhvcNAQEL
BQAwFzEVMBMGA1UEAxMMYXJvYnl0ZS50ZWNoMB4XDTI0MDMxMzE1NDU0NVoXDTI1
zw4bj+X2hQyMqu5QHdFF4n58s9I9M5oq9IIBlMqxQqQdN79UirJc/LTk71roOKi7
PD1A3HmuNnWt04+0f8maI9txbUToWq15t8d5zBoM85sF2AGc04OmQmXvL+cGqImJ
9+RIo+iKIJnLiAMt
-----END CERTIFICATE-----
serial_number    7a:42:d4:35:65:4b:57:76:a6:f0:34:d1:93:7d:d8:f4:a5:f4:b7:c9
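
The `expiration` value Vault returns is a Unix timestamp. As a quick sanity check, it can be converted to a readable date (GNU date syntax, as on the master node used here):

```shell
# Convert the expiration epoch reported by Vault into a readable date;
# it should land one year (8760h) after the root was generated
date -u -d @1741880775
```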

Configure the PKI secrets engine certificate issuing and certificate revocation list (CRL) endpoints to use the Vault service in the default namespace.

/ $ vault write pki/config/urls \
>     issuing_certificates="http://vault.vault.svc.cluster.local:8200/v1/pki/ca" \
>     crl_distribution_points="http://vault.vault.svc.cluster.local:8200/v1/pki/crl"
Success! Data written to: pki/config/urls

Configure a role named arobyte-role-tech that enables the creation of certificates for the arobyte.tech domain and any of its subdomains.

/ $ vault write pki/roles/arobyte-role-tech \
>     allowed_domains=arobyte.tech \
>     allow_subdomains=true \
>     max_ttl=72h
Success! Data written to: pki/roles/arobyte-role-tech

Create a policy named pki that grants the access needed to read the PKI secrets engine and to sign and issue certificates.

/ $ vault policy write pki - <<EOF
> path "pki*"                         { capabilities = ["read", "list"] }
> path "pki/sign/arobyte-role-tech"   { capabilities = ["create", "update"] }
> path "pki/issue/arobyte-role-tech"  { capabilities = ["create"] }
> EOF
Success! Uploaded policy: pki

Enable the Kubernetes authentication method.

/ $ vault auth enable kubernetes
Success! Enabled kubernetes auth method at: kubernetes/

Configure the Kubernetes authentication method to use the location of the Kubernetes API server.

/ $ vault write auth/kubernetes/config \
>     kubernetes_host="https://$KUBERNETES_PORT_443_TCP_ADDR:443"
Success! Data written to: auth/kubernetes/config
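
The `kubernetes_host` above is built from the standard in-cluster service environment variables that every pod (including vault-0) receives. A small illustration of the expansion, using a made-up in-cluster address:

```shell
# Illustrative value only; inside a real pod this variable is
# injected automatically by Kubernetes
KUBERNETES_PORT_443_TCP_ADDR=10.96.0.1
echo "https://$KUBERNETES_PORT_443_TCP_ADDR:443"
# -> https://10.96.0.1:443
```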

Create a Kubernetes authentication role named issuer that binds the pki policy to a Kubernetes service account named issuer.

/ $ vault write auth/kubernetes/role/issuer \
>     bound_service_account_names=issuer \
>     bound_service_account_namespaces=default \
>     policies=pki \
>     ttl=20m
Success! Data written to: auth/kubernetes/role/issuer

Let's install cert-manager

root@master:~# kubectl create namespace cert-manager

Install cert-manager's v1.12.3 CRD resources from Jetstack.

root@master:~# kubectl apply --validate=false -f https://github.com/jetstack/cert-manager/releases/download/v1.12.3/cert-manager.crds.yaml
customresourcedefinition.apiextensions.k8s.io/certificaterequests.cert-manager.io created
customresourcedefinition.apiextensions.k8s.io/certificates.cert-manager.io created
customresourcedefinition.apiextensions.k8s.io/challenges.acme.cert-manager.io created
customresourcedefinition.apiextensions.k8s.io/clusterissuers.cert-manager.io created
customresourcedefinition.apiextensions.k8s.io/issuers.cert-manager.io created
customresourcedefinition.apiextensions.k8s.io/orders.acme.cert-manager.io created
root@master:~# helm repo add jetstack https://charts.jetstack.io
"jetstack" has been added to your repositories
root@master:~# helm repo update
...Successfully got an update from the "jetstack" chart repository
Update Complete. ⎈Happy Helming!⎈

Install cert-manager

root@master:~# helm install cert-manager \
    --namespace cert-manager \
    --version v1.12.3 \
  jetstack/cert-manager

NAME: cert-manager
LAST DEPLOYED: Wed Mar 13 21:26:19 2024
NAMESPACE: cert-manager
STATUS: deployed

Check the status

root@master:~# kubectl get pods --namespace cert-manager
NAME                                       READY   STATUS    RESTARTS   AGE
cert-manager-65dfbdf7d6-qp5fk              1/1     Running   0          3m44s
cert-manager-cainjector-79f5dbffcf-lh4d6   1/1     Running   0          3m44s
cert-manager-webhook-77b984cc67-8nxhh      1/1     Running   0          3m44s

Create a service account named issuer within the default namespace.

root@master:~# kubectl create serviceaccount issuer
serviceaccount/issuer created

Create a secret definition

root@master:~# cat >> issuer-secret.yaml <<EOF
apiVersion: v1
kind: Secret
metadata:
  name: issuer-token
  annotations:
    kubernetes.io/service-account.name: issuer
type: kubernetes.io/service-account-token
EOF
root@master:~# kubectl apply -f issuer-secret.yaml
secret/issuer-token created
root@master:~# kubectl get secrets
NAME                 TYPE                                  DATA   AGE
issuer-token   kubernetes.io/service-account-token   3      7s
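
cert-manager will read the service account JWT from the `token` key of this Secret. Secret data is base64-encoded in the API, so with cluster access the token could be inspected with `kubectl get secret issuer-token -o jsonpath='{.data.token}' | base64 -d`; the encoding itself is a plain base64 round trip (the value below is a placeholder, not a real JWT):

```shell
# Secret values are stored base64-encoded; decode with base64 -d
echo -n 'example-jwt' | base64
# -> ZXhhbXBsZS1qd3Q=
echo -n 'ZXhhbXBsZS1qd3Q=' | base64 -d
# -> example-jwt
```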

Define an Issuer, named vault-issuer, that sets Vault as a certificate issuer.

root@master:~# cat > vault-issuer.yaml <<EOF
apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
  name: vault-issuer
  namespace: default
spec:
  vault:
    server: http://vault.vault.svc.cluster.local:8200
    path: pki/sign/arobyte-role-tech
    auth:
      kubernetes:
        mountPath: /v1/auth/kubernetes
        role: issuer
        secretRef:
          name: issuer-token
          key: token
EOF
root@master:~# kubectl apply --filename vault-issuer.yaml
issuer.cert-manager.io/vault-issuer created

Create the arobyte-tech certificate

root@master:~# cat arobyte-tech-cert.yaml
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: arobyte-tech
  namespace: default
spec:
  secretName: arobyte-role-tech
  issuerRef:
    name: vault-issuer
  commonName: www.arobyte.tech
  dnsNames:
  - www.arobyte.tech
root@master:~# kubectl apply --filename arobyte-tech-cert.yaml

View the details of the arobyte-tech certificate

root@master:~# kubectl describe certificate.cert-manager arobyte-tech -n default
Name:         arobyte-tech
Namespace:    default
Labels:       <none>
Annotations:  <none>
API Version:  cert-manager.io/v1
Kind:         Certificate


Events:
  Type    Reason     Age    From                                       Message
  ----    ------     ----   ----                                       -------
  Normal  Issuing    2m55s  cert-manager-certificates-trigger          Issuing certificate as Secret does not exist
  Normal  Generated  2m55s  cert-manager-certificates-key-manager      Stored new private key in temporary Secret resource "arobyte-tech-gwkjp"
  Normal  Requested  2m54s  cert-manager-certificates-request-manager  Created new CertificateRequest resource "arobyte-tech-fdvjr"
  Normal  Issuing    27s    cert-manager-certificates-issuing          The certificate has been successfully issued


The certificate reports that it has been issued successfully.

Verify the Certificates

root@master:~# kubectl get certificate
NAME           READY   SECRET              AGE
arobyte-tech   True    arobyte-role-tech   95m


root@master:~# kubectl describe secrets arobyte-role-tech
Name:         arobyte-role-tech
Namespace:    default
Labels:       controller.cert-manager.io/fao=true
Annotations:  cert-manager.io/alt-names: www.arobyte.tech
              cert-manager.io/certificate-name: arobyte-tech
              cert-manager.io/common-name: www.arobyte.tech
              cert-manager.io/ip-sans:
              cert-manager.io/issuer-group:
              cert-manager.io/issuer-kind: Issuer
              cert-manager.io/issuer-name: vault-issuer
              cert-manager.io/uri-sans:

Type:  kubernetes.io/tls

Data
====
tls.key:  1675 bytes
ca.crt:   1176 bytes
tls.crt:  1419 bytes
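
To inspect the issued leaf certificate itself, the `tls.crt` value can be decoded with openssl. The kubectl one-liner below needs access to this cluster, so it is shown as a comment; the openssl inspection works on any PEM certificate, demonstrated here with a throwaway self-signed one:

```shell
# With cluster access (commented, as it needs kubectl against this cluster):
#   kubectl get secret arobyte-role-tech -n default \
#     -o jsonpath='{.data.tls\.crt}' | base64 -d \
#     | openssl x509 -noout -subject -dates
# The same inspection on a locally generated throwaway certificate:
openssl req -x509 -newkey rsa:2048 -nodes -keyout /tmp/demo.key \
  -out /tmp/demo.crt -subj "/CN=www.arobyte.tech" -days 3 2>/dev/null
openssl x509 -in /tmp/demo.crt -noout -subject -dates
```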

root@master:~# kubectl describe certificate arobyte-tech
Name:         arobyte-tech
Namespace:    default
Labels:       <none>
Annotations:  <none>
API Version:  cert-manager.io/v1
Kind:         Certificate
Metadata:

Spec:
  Common Name:  www.arobyte.tech
  Dns Names:
    www.arobyte.tech
  Issuer Ref:
    Name:       vault-issuer
  Secret Name:  arobyte-role-tech
Status:
  Conditions:
    Last Transition Time:  2024-03-13T17:43:48Z
    Message:               Certificate is up to date and has not expired
    Observed Generation:   1
    Reason:                Ready
    Status:                True
    Type:                  Ready
  Not After:               2024-03-16T17:43:47Z
  Not Before:              2024-03-13T17:43:17Z
  Renewal Time:            2024-03-15T17:43:37Z
  Revision:                1
Events:                    <none>


Streamlining Kubernetes Configuration: A GitLab CI/CD Guide with Config-lint Validation

Introduction

Config-lint is a command-line tool that streamlines the validation of Kubernetes configuration files. Using rules specified in YAML, it checks adherence to best practices, security standards, and custom policies, which makes it a natural fit for Continuous Integration and Continuous Deployment (CI/CD) pipelines, where configuration changes can be validated before deployment.

Integrating Config-lint into CI/CD Pipelines

One of the key benefits of Config-lint is its seamless integration into CI/CD pipelines. By incorporating Config-lint as a step in your pipeline, you can automatically validate Kubernetes configuration files before deployment. This ensures that only compliant configurations are promoted to production environments, reducing the risk of misconfigurations and potential downtime.

Custom Rules with YAML

Config-lint allows users to define custom rules using YAML configuration files. This flexibility enables organizations to enforce specific standards and policies tailored to their environment. Whether it's enforcing naming conventions, resource limits, or security policies, Config-lint's YAML-based rules empower teams to maintain consistency and compliance across Kubernetes configurations.
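
As a sketch of what such a custom rule can look like, the file below enforces a naming convention on Deployments. The rule id, message, and regex are illustrative, not taken from a real policy; consult the config-lint rules reference for the exact operator names supported by your version:

```shell
# Write an illustrative custom rule file (naming convention on Deployments)
cat > naming-rule.yaml <<'EOF'
version: 1
description: Example custom rule (illustrative)
type: Kubernetes
files:
  - "*.yaml"
rules:
  - id: DEPLOYMENT_NAME_PREFIX
    severity: FAILURE
    message: Deployment names must start with "app-"
    resource: Deployment
    assertions:
      - key: metadata.name
        op: regex
        value: "^app-"
EOF
# config-lint -rules naming-rule.yaml manifests/   # would flag non-conforming names
```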

Validating Helm Charts

In addition to standalone configuration files, Config-lint can also validate Helm charts. Helm is a popular package manager for Kubernetes, and ensuring the integrity of Helm charts is crucial for smooth deployments. With Config-lint, teams can validate Helm charts against predefined rules, ensuring that charts adhere to best practices and organizational standards.

Config-lint simplifies Kubernetes configuration validation by providing a flexible and intuitive toolset. By integrating Config-lint into CI/CD pipelines and leveraging custom YAML rules, organizations can ensure the reliability, security, and compliance of their Kubernetes deployments. With support for Helm charts validation, Config-lint offers a comprehensive solution for maintaining consistency and best practices across Kubernetes environments. Start using Config-lint today to streamline your Kubernetes configuration validation process and elevate your CI/CD workflows to the next level of efficiency and reliability.

Integrating Config-lint into GitLab CI/CD

1- Dockerfile for the image I will use in the pipeline

root@master:~# cat Dockerfile
FROM ubuntu:latest
LABEL maintainer="omvedi25@gmail.com"
# config-lint and helm binaries are expected in the build context
COPY config-lint /usr/local/bin/
COPY helm /usr/local/bin/

2- Build the image and push it to the registry

root@master:~# docker build -t omvedi25/config-lint:v1.1 .

root@master:~# docker push omvedi25/config-lint:v1.1

3- Create a .gitlab-ci.yml pipeline

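Since the original pipeline file was shown only as a screenshot, here is a minimal sketch of what such a lint job could look like (job name, stage, and paths are illustrative):

```shell
# Write an illustrative lint job for the pipeline
cat > lint-job.yml <<'EOF'
lint-chart:
  stage: validation
  image: omvedi25/config-lint:v1.1
  script:
    - helm template . > rendered.yaml
    - config-lint -rules rules.yaml rendered.yaml
EOF
```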

4- Create a project for the Helm chart


5- Let's create a rules.yaml file containing rules that will be validated before the chart is pushed to ChartMuseum.

version: 1
description: Rules for Kubernetes spec files
type: Kubernetes
files:
  - "*.yaml"
rules:
  - id: POD_RESOURCE_REQUESTS_LIMITS
    severity: FAILURE
    message: Containers in Pod must specify both resource requests and limits
    resource: Pod
    assertions:
      - key: spec.containers[*].resources.requests
        op: notPresent
      - key: spec.containers[*].resources.limits
        op: notPresent
    match: any
    tags:
      - pod

  - id: DEPLOYMENT_RESOURCE_REQUESTS_LIMITS
    severity: FAILURE
    message: Containers in Deployment must specify both resource requests and limits
    resource: Deployment
    assertions:
      - key: spec.template.spec.containers[*].resources.requests
        op: notPresent
      - key: spec.template.spec.containers[*].resources.limits
        op: notPresent
    match: any
    tags:
      - deployment

The rules above check whether the chart's Pods and Deployments specify resource requests and limits.
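
As a quick check of the intent, a Deployment like the sketch below (no resources block at all) is the kind of manifest these rules are meant to flag:

```shell
# Write a Deployment manifest that omits resource requests/limits entirely
cat > bad-deployment.yaml <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo
spec:
  replicas: 1
  selector:
    matchLabels:
      app: demo
  template:
    metadata:
      labels:
        app: demo
    spec:
      containers:
      - name: demo
        image: nginx:1.25
EOF
# config-lint -rules rules.yaml bad-deployment.yaml   # would report FAILURE
```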

6- Create a .gitlab-ci.yml to run the validation on the charts

---
include:
  - project: 'guilds1/cloud-native-guild/helm/tooling/helm-pipelines'
    file: '/.config-lint.yaml'
    ref: main
  - project: 'guilds1/cloud-native-guild/helm/tooling/helm-pipelines'
    file: '/.helm.yaml'
  - project: 'guilds1/cloud-native-guild/helm/tooling/helm-pipelines'
    file: '/.kind.yaml'
    ref: main

variables:
  CHART: ${CI_PROJECT_NAME}
  IMAGE_HELM_CHART_LINT: "quay.io/helmpack/chart-testing:v3.3.1"
  IMAGE_TEST_DOCS: "renaultdigital/helm-docs:v1.5.0"

stages:
  - pretest
  - validation
  - lint
  - test
  - build
  - make_release
  - publish
  - integration

7- Let's run the pipeline and validate the rules.

**We can see the rules are working as expected. We can write custom rules as needed to validate that charts define the mandatory options.**

Other Examples

# wget https://raw.githubusercontent.com/stelligent/config-lint/master/example-files/rules/kubernetes.yml

# cat kubernetes.yml

Maximizing CI/CD Efficiency: A Guide to GitLab Runner Setup as a Self-hosted Runner on Kubernetes

Introduction


GitLab Runner is an open-source project that works in conjunction with GitLab CI/CD pipelines to automate the process of building, testing, and deploying software. It can run various types of jobs, including builds, tests, and deployments, based on the instructions provided in the .gitlab-ci.yml configuration file.

Why Use GitLab Runner on Kubernetes?

Scalability: Kubernetes allows for easy scaling of resources. GitLab Runner deployments can dynamically scale based on workload demands, ensuring optimal resource utilization.

Isolation: Kubernetes provides container orchestration, allowing GitLab Runner jobs to run in isolated environments (pods). This isolation ensures that jobs do not interfere with each other and provides security benefits.

Resource Efficiency: GitLab Runner on Kubernetes can efficiently utilize cluster resources by scheduling jobs on available nodes, thereby maximizing resource utilization and minimizing idle capacity.

Consistency: Running GitLab Runners on Kubernetes ensures consistency across different environments, whether it's development, testing, or production. The same Kubernetes environment can be used to run CI/CD pipelines consistently.

Key Components

GitLab Runner: The agent responsible for executing CI/CD jobs defined in .gitlab-ci.yml files. It interacts with GitLab CI/CD and Kubernetes API to schedule and run jobs in Kubernetes pods.

Kubernetes: An open-source container orchestration platform that automates the deployment, scaling, and management of containerized applications. GitLab Runner utilizes Kubernetes to manage the lifecycle of CI/CD job pods.

Helm: Helm is a package manager for Kubernetes that allows you to define, install, and manage applications on Kubernetes. GitLab Runner can be deployed on Kubernetes using Helm charts provided by GitLab.

Install GitLab Runner on Kubernetes

1- Add the Helm repo

root@master:~# helm repo add gitlab https://charts.gitlab.io
"gitlab" already exists with the same configuration, skipping

2- Update the repo

root@master:~# helm repo update gitlab
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "gitlab" chart repository
Update Complete. ⎈Happy Helming!⎈

3- Create a values.yaml file

root@master:~# cat values.yaml
gitlabUrl: https://gitlab.com/

runnerRegistrationToken: "gitlab-runner-token"

concurrent: 10

checkInterval: 30

rbac:
  create: true
  rules:
    - apiGroups: [""]
      resources: ["pods"]
      verbs: ["list", "get", "watch", "create", "delete"]
    - apiGroups: [""]
      resources: ["pods/exec"]
      verbs: ["create"]
    - apiGroups: [""]
      resources: ["pods/log"]
      verbs: ["get"]
    - apiGroups: [""]
      resources: ["pods/attach"]
      verbs: ["list", "get", "create", "delete", "update"]
    - apiGroups: [""]
      resources: ["secrets"]
      verbs: ["list", "get", "create", "delete", "update"]
    - apiGroups: [""]
      resources: ["configmaps"]
      verbs: ["list", "get", "create", "delete", "update"]

runners:
  privileged: true

  config: |
    [[runners]]
      [runners.kubernetes]
        namespace = "gitlab-runner"
        tls_verify = false
        image = "docker:19"
        privileged = false

4- Create the namespace and deploy the helm

root@master:~# kubectl create ns gitlab-runner

root@master:~# helm install gitlab-runner gitlab/gitlab-runner -f values.yaml --namespace gitlab-runner
NAME: gitlab-runner
LAST DEPLOYED: Mon Mar  4 22:09:02 2024
NAMESPACE: gitlab-runner
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
Your GitLab Runner should now be registered against the GitLab instance reachable at: "https://gitlab.com/"

5- Verify the runner

root@master:~# kubectl -n gitlab-runner get pods
NAME                             READY   STATUS    RESTARTS   AGE
gitlab-runner-7b8ff76bff-mptdc   1/1     Running   0          39s

Log in to https://gitlab.com and verify the runner registration.


Create a project in GitLab.

Create a .gitlab-ci.yml file and use the kubernetes tag to run the pipeline on Kubernetes.

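The file from the screenshot is not reproduced in the post, so here is a minimal sketch of a .gitlab-ci.yml that targets the self-hosted runner via the kubernetes tag (job name, image, and script are illustrative):

```shell
# Write a minimal pipeline that targets the Kubernetes runner via its tag
cat > .gitlab-ci.yml <<'EOF'
stages:
  - test

hello-job:
  stage: test
  tags:
    - kubernetes
  image: alpine:3.19
  script:
    - echo "running inside a runner-managed pod"
EOF
```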

Run the Pipeline

Run the following command to verify that a new runner pod starts in the gitlab-runner namespace.

root@master:~# kubectl -n gitlab-runner get pods
NAME                                                      READY   STATUS     RESTARTS   AGE
gitlab-runner-7b8ff76bff-mptdc                            1/1     Running    0          18m
runner-kea6jzghg-project-45006412-concurrent-0-1heswzap   0/2     Init:0/1   0          56s

Verify the pipeline in the GitLab UI.

We are now able to run our pipeline jobs on our self-hosted runner in Kubernetes.