Accelerating Kubernetes Operations with Kubernetes Go Client.

Introduction: Explore the powerful combination of Kubernetes and Go programming in this blog post. Learn how the Kubernetes Go client empowers developers to seamlessly interact with Kubernetes clusters, providing a flexible and efficient way to automate operations.

Prerequisites:

- A running Kubernetes cluster (minikube, AKS, EKS, or GKE)

- Go installed on your machine

- A basic understanding of Go programming

- The Kubernetes Go client repository (client-go)

API Resources, Kinds, and Objects

Resource Type: In Kubernetes, a "Resource Type" refers to a specific kind of object or entity that can be managed within the Kubernetes cluster. Kubernetes uses a declarative model where users define the desired state of their applications or infrastructure in the form of YAML or JSON manifests. These manifests describe the configuration of various resource types, and Kubernetes is responsible for ensuring that the actual state of the cluster matches the desired state.

API Group: In Kubernetes, an API group is a way to organize and categorize related sets of APIs. The Kubernetes API is designed to be extensible, and API groups help manage the complexity of the API surface by grouping related resources together. The structure of a Kubernetes API endpoint typically follows the pattern:

/apis/<API_GROUP>/<API_VERSION>/...

Here:

- API_GROUP: identifies the group to which a particular resource belongs.
- API_VERSION: specifies the version of the API.

Object: In Kubernetes, an "object" is a basic building block or unit of the system. Objects represent the state of the cluster and can be created, modified, or deleted to manage applications and other aspects of the system. Objects are defined in Kubernetes manifests, typically written in YAML or JSON, and are submitted to the Kubernetes API server for processing.

Kind: In Kubernetes, "Kind" is a top-level field of a Kubernetes object that specifies the type or kind of the object. It is a required field and defines the type of resource being created, modified, or interacted with in the cluster. The kind field indicates the object's role and how it should be handled by Kubernetes.

Module k8s.io/client-go

The "k8s.io/client-go" module is the official Go client library for Kubernetes. It provides a set of packages and utilities to interact with the Kubernetes API and perform operations such as creating, updating, and deleting resources, as well as watching for changes in the cluster. To use the Kubernetes Go client libraries in your Go application, you typically import specific packages from "k8s.io/client-go".

Module k8s.io/apimachinery

The k8s.io/apimachinery module is part of the Kubernetes Go client libraries and provides a set of packages for working with Kubernetes objects and their metadata. It includes functionality for handling object serialization, conversion, and various utility functions related to Kubernetes API objects. Key packages within k8s.io/apimachinery include:

metav1: This package provides types and functions for working with metadata in Kubernetes objects, such as labels, annotations, and timestamps.

runtime: The runtime package defines interfaces and functions for working with generic Kubernetes runtime objects. It includes serialization, encoding, and decoding functionality.

util: The util package contains utility functions for working with Kubernetes objects, such as conversion functions and label/selector matching.

schema: The schema package defines types for identifying Kubernetes API objects, such as GroupVersionKind and GroupVersionResource.

Here is a small example of how types from k8s.io/apimachinery might be used:

package main

import (
    "fmt"
    "time"

    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    // Example: working with metav1.ObjectMeta
    labels := map[string]string{"app": "example-app", "env": "production"}
    annotations := map[string]string{"description": "An example pod"}
    objectMeta := metav1.ObjectMeta{
        Name:        "example-pod",
        Namespace:   "default",
        Labels:      labels,
        Annotations: annotations,
    }
    fmt.Printf("ObjectMeta: %s/%s\n", objectMeta.Namespace, objectMeta.Name)

    // Example: using metav1.Time for timestamps
    creationTimestamp := metav1.Time{Time: time.Now()}
    fmt.Printf("Creation Timestamp: %v\n", creationTimestamp)
}

Ways of using the Go client to connect to Kubernetes:

- Authenticating inside the cluster
- Authenticating outside the cluster

In this blog I am using the "Authenticating outside the cluster" approach.

Demo

- Install Go

# wget https://go.dev/dl/go1.21.5.linux-amd64.tar.gz
# tar -C /usr/local -xzf go1.21.5.linux-amd64.tar.gz
# export PATH=$PATH:/usr/local/go/bin
# go version

- Create a project directory and initialize a Go module

 # mkdir client-go-example
 # cd client-go-example/
 # go mod init client-go-example

- **Let's use the Go client to connect to the cluster and list the nodes**

# vim main.go
package main
import (
        "context"
        "flag"
        "fmt"
        "path/filepath"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
        "k8s.io/client-go/util/homedir"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)
func main() {
        var kubeconfig *string
        if home := homedir.HomeDir(); home != "" {
                kubeconfig = flag.String("kubeconfig", filepath.Join(home, ".kube", "config"), "(optional) absolute path to the kubeconfig file")
        } else {
                kubeconfig = flag.String("kubeconfig", "", "absolute path to the kubeconfig file")
        }
        flag.Parse()
        // use the current context in kubeconfig
        config, err := clientcmd.BuildConfigFromFlags("", *kubeconfig)
        if err != nil {
                panic(err.Error())
        }

        // create the clientset
        clientset, err := kubernetes.NewForConfig(config)
        if err != nil {
                panic(err.Error())
        }
        nodeList, err := clientset.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
        if err != nil {
                panic(err.Error())
        }
        fmt.Println("Nodes in the cluster:")
        for _, node := range nodeList.Items {
                fmt.Printf("  %s\n", node.GetName())
        }
}

# go mod tidy
# go build
# go run main.go

- Let's connect to the cluster and list the deployed namespaces

# vim main.go
package main

import (
        "context"
        "flag"
        "fmt"
        "path/filepath"

        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
        "k8s.io/client-go/util/homedir"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func ListNameSpaces(coreClient kubernetes.Interface) {
        nsList, err := coreClient.CoreV1().
                Namespaces().
                List(context.Background(), metav1.ListOptions{})
        if err != nil {
                panic(err.Error())
        }

        for _, n := range nsList.Items {
                fmt.Printf("  %s\n", n.Name)
        }
}
func main() {
        var kubeconfig *string
        if home := homedir.HomeDir(); home != "" {
                kubeconfig = flag.String("kubeconfig", filepath.Join(home, ".kube", "config"), "(optional) absolute path to the kubeconfig file")
        } else {
                kubeconfig = flag.String("kubeconfig", "", "absolute path to the kubeconfig file")
        }
        flag.Parse()

        // use the current context in kubeconfig
        config, err := clientcmd.BuildConfigFromFlags("", *kubeconfig)
        if err != nil {
                panic(err.Error())
        }

        // create the clientset
        clientset, err := kubernetes.NewForConfig(config)
        if err != nil {
                panic(err.Error())
        }
        ListNameSpaces(clientset)
}

# go run main.go

- Let's connect to the cluster and list the pods in each namespace

package main

import (
        "context"
        "flag"
        "fmt"
        "path/filepath"

        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
        "k8s.io/client-go/util/homedir"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func ListNameSpaces(coreClient kubernetes.Interface) {
        nsList, err := coreClient.CoreV1().
                Namespaces().
                List(context.Background(), metav1.ListOptions{})
        if err != nil {
                panic(err.Error())
        }

        for _, n := range nsList.Items {
               ListPods(coreClient, n.Name)
        }
}

func ListPods(coreClient kubernetes.Interface, namespace string) {
        fmt.Printf("Pods in namespace: %s\n", namespace)
        pods, err := coreClient.CoreV1().Pods(namespace).List(context.TODO(), metav1.ListOptions{})
        if err != nil {
                fmt.Printf("Error getting pods in namespace %s: %v\n", namespace, err)
                return
        }
        for _, pod := range pods.Items {
                fmt.Printf("  %s\n", pod.Name)
        }
        fmt.Println()
}
func main() {
        var kubeconfig *string
        if home := homedir.HomeDir(); home != "" {
                kubeconfig = flag.String("kubeconfig", filepath.Join(home, ".kube", "config"), "(optional) absolute path to the kubeconfig file")
        } else {
                kubeconfig = flag.String("kubeconfig", "", "absolute path to the kubeconfig file")
        }
        flag.Parse()

        // use the current context in kubeconfig
        config, err := clientcmd.BuildConfigFromFlags("", *kubeconfig)
        if err != nil {
                panic(err.Error())
        }

        // create the clientset
        clientset, err := kubernetes.NewForConfig(config)
        if err != nil {
                panic(err.Error())
        }
        ListNameSpaces(clientset)
}

# go run main.go

These are some examples of how you can communicate with a Kubernetes cluster using the Go client. In the next blog, we will see how to mutate Kubernetes APIs.

Code your Concepts: A Guide to Diagrams As Code in DevOps World

Creating diagrams as code has become a popular practice, especially in the context of infrastructure as code (IaC) and documentation. In this blog, we'll explore some of the popular tools for creating diagrams as code and discuss how they can be used in different scenarios.

- [Diagrams](https://diagrams.mingrammer.com/docs/getting-started/installation): Using Diagrams, we can create diagrams for multiple providers such as AWS, Azure, GCP, and Kubernetes.

Requirements

Diagrams requires Python 3.6 or higher, so check your Python version first. It uses Graphviz to render the diagrams, so you need to install Graphviz as well. After installing Graphviz (or if you already have it), install diagrams:

(On Ubuntu)
# apt install graphviz

(On RHEL)
# yum install graphviz

# pip install diagrams

Create Diagram For Kubernetes

from diagrams import Cluster, Diagram
from diagrams.k8s.compute import Pod
from diagrams.k8s.compute import Deploy
from diagrams.k8s.network import Ing
from diagrams.k8s.group import NS
from diagrams.k8s.podconfig import Secret
from diagrams.k8s.storage import PVC
from diagrams.k8s.rbac import CRole
from diagrams.k8s.rbac import CRB

with Diagram("Kubernetes Cluster", show=False):
  with Cluster("Kubernetes"):
    with Cluster("Rbac"):
      rbac = CRB("")
      with Cluster("Role"):
        role = CRole("")
    with Cluster("App"):
      ns = NS("")
      with Cluster("Ingress"):
        ingress = Ing("")
        with Cluster("Secret"):
          secrets = Secret("")
        with Cluster("App"):
          deploy = Deploy("")
          with Cluster("Pods"):
            pod = Pod("")
        with Cluster("PVC"):
           pvc = PVC("")
  rbac >> role >> ns
  ns >> deploy >> pod >> pvc
  pod >> secrets
  deploy >> ingress

- Mermaid: Mermaid is a JavaScript-based diagramming and charting tool that allows users to create diagrams and flowcharts using a simple and human-readable text-based syntax. It is particularly popular for its integration with Markdown, making it easy to embed diagrams directly into documentation, README files, or other text-based formats.

graph LR;
 IpBlockS([IpBlock])-. Traffic Out From <br> The Cluster .->|![Ingress Image](images/ingress.png)| ingress;
 PodNetworkS([PodNetwork])-. Traffic From <br> PodNetwork  .->|![Ingress Image](images/ingress.png)| ingress;
 NameSpaceNetworkS([NameSpaceNetwork])-. Traffic From <br> NameSpaceNetwork  .->|![Ingress Image](images/ingress.png)| ingress;
 ingress .->|routing <br> rule|namespace[namespace];
 subgraph cluster
 ingress;
 namespace .->|routing <br> rule|egress[Egress];
 end
 egress[Egress]-. Traffic Out To <br> The Cluster  .->IpBlockD([IpBlock]);
 egress[Egress]-. Traffic To <br> PodNetwork  .->PodNetworkD([PodNetwork]);
 egress[Egress]-. Traffic To <br> NameSpaceNetwork  .->NameSpaceNetworkD([NameSpaceNetwork]);
 classDef plain fill:#ddd,stroke:#fff,stroke-width:4px,color:#000;
 classDef k8s fill:#326ce5,stroke:#fff,stroke-width:4px,color:#fff;
 classDef cluster fill:#fff,stroke:#bbb,stroke-width:2px,color:#326ce5;
 class ingress,namespace,egress k8s;
 class client plain;
 class cluster cluster;

- PlantUML: PlantUML is an open-source tool that allows users to create Unified Modeling Language (UML) diagrams using a simple and human-readable text-based syntax. UML diagrams are widely used in software development to visually represent different aspects of a system's architecture, design, and behavior. PlantUML makes it easy to express complex UML diagrams in a concise and maintainable manner.

@startyaml
!theme lightgray
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - containerPort: 80
@endyaml

We have a variety of tools at our disposal, but I've identified these specific ones for drawing diagrams. I've seamlessly integrated them into my GitLab CI/CD pipeline. I encourage you to give them a try; they prove to be highly effective for creating and managing our diagrams.

I hope you like this blog and start using these tools in your CI/CD pipelines.

Kubernetes Package Toolkit

kpt stands for Kubernetes Package Toolkit. It is a set of tools for working with and managing Kubernetes manifests as packages. kpt helps you organize, customize, share, and manage Kubernetes manifests in a more modular and reusable way.

Key features of kpt include:

Package Management: kpt allows you to organize your Kubernetes configuration files into packages. A package is a directory containing one or more Kubernetes manifests, and it can be versioned and shared.

Declarative Configuration: It encourages a declarative approach to configuration, where you describe the desired state of your Kubernetes resources, making it easier to manage configurations across different environments.

Resource Configuration: kpt provides commands to work with and transform Kubernetes resources. This includes adding, updating, or removing fields from manifests.

GitOps Workflow: It aligns with the GitOps approach, where changes to your infrastructure are driven by Git commits. You can use kpt to fetch, update, and apply changes to your Kubernetes manifests stored in Git repositories.

Template Functions: kpt supports template functions that allow you to parameterize and customize your manifests based on different environments or requirements.

Resource Composition: You can compose and customize your Kubernetes manifests by using kpt functions and tools.

System Requirements

- kpt must be installed

# wget https://github.com/GoogleContainerTools/kpt/releases/download/v1.0.0-beta.44/kpt_linux_amd64
# chmod +x kpt_linux_amd64
# cp kpt_linux_amd64 /usr/local/bin/kpt
# kpt version
1.0.0-beta.44

- Git must be installed

# git version
git version 2.40.1

- A Kubernetes cluster

root@master:~/nginx# kubectl get nodes
NAME                        STATUS   ROLES           AGE   VERSION
master.homecluster.store    Ready    control-plane   38d   v1.26.0
worker1.homecluster.store   Ready    <none>          38d   v1.26.0
worker2.homecluster.store   Ready    <none>          38d   v1.26.0

kpt is fully integrated with Git and enables forking, rebasing and versioning a package of configuration using the underlying Git version control system.

First, let’s fetch the kpt package from Git to your local filesystem:

# kpt pkg get https://github.com/GoogleContainerTools/kpt/package-examples/nginx@v0.9
# cd nginx

kpt pkg commands provide the functionality for working with packages on Git and on your local filesystem.

# kpt pkg tree
Package "nginx"
├── [Kptfile]  Kptfile nginx
├── [deployment.yaml]  Deployment my-nginx
└── [svc.yaml]  Service my-nginx-svc

Apply the Package

kpt live commands provide the functionality for deploying packages to a Kubernetes cluster.

Initialize the kpt package:

# kpt live init

Apply the resources to the cluster:

root@master:~/nginx/nginx# kubectl get pods
No resources found in default namespace.
# kpt live apply --reconcile-timeout=15m
inventory update started
inventory update finished
apply phase started
service/my-nginx-svc apply successful
deployment.apps/my-nginx apply successful
apply phase finished
reconcile phase started
service/my-nginx-svc reconcile successful
deployment.apps/my-nginx reconcile pending
deployment.apps/my-nginx reconcile successful
reconcile phase finished
inventory update started
inventory update finished
apply result: 2 attempted, 2 successful, 0 skipped, 0 failed
reconcile result: 2 attempted, 2 successful, 0 skipped, 0 failed, 0 timed out

root@master:~/nginx/nginx# kubectl get pods
NAME                        READY   STATUS    RESTARTS   AGE
my-nginx-66f8758855-57q7v   1/1     Running   0          81s
my-nginx-66f8758855-5zj88   1/1     Running   0          81s
my-nginx-66f8758855-lbpmq   1/1     Running   0          81s
my-nginx-66f8758855-zp6nm   1/1     Running   0          81s

Delete the package from the cluster:

# kpt live destroy
delete phase started
deployment.apps/my-nginx delete successful
service/my-nginx-svc delete successful
delete phase finished
reconcile phase started
deployment.apps/my-nginx reconcile successful
service/my-nginx-svc reconcile successful
reconcile phase finished
inventory update started
inventory update finished
delete result: 2 attempted, 2 successful, 0 skipped, 0 failed
reconcile result: 2 attempted, 2 successful, 0 skipped, 0 failed, 0 timed out

A Guide to Implement Hierarchical Namespaces for Enhanced Cluster Management

Introduction: Hosting numerous users on a single Kubernetes cluster poses challenges due to varying organizational needs. Kubernetes provides building blocks like RBAC and NetworkPolicies for crafting custom tenancy models. Among these, namespaces play a pivotal role, forming the foundation for control plane security and sharing policies.

The Power of Namespaces: Namespaces are key to policy enforcement, representing ownership and controlling authorized creation and use. Objects like RBAC, NetworkPolicies, and ResourceQuotas align with namespaces by default. However, practical limitations arise in scenarios where flexibility is paramount.

Challenges with Vanilla Namespaces: Consider a scenario where a team manages multiple microservices, each requiring different secrets and quotas. Placing these services in separate namespaces is ideal for isolation, but issues arise. Vanilla namespaces lack a common ownership concept for teams managing multiple namespaces, hindering uniform policy application.

Overcoming Limitations with Hierarchical Namespaces: Enter hierarchical namespaces, a groundbreaking concept from the Kubernetes Working Group for Multi-Tenancy. Hierarchical namespaces introduce a small custom resource within a regular namespace, establishing a parent-child relationship. This ownership concept brings two crucial benefits:

Policy Inheritance: Child namespaces inherit policies such as RBAC RoleBindings from their parent. This ensures uniform application of policies across namespaces, addressing the ownership challenge.

Delegated Creation: Subnamespaces can be created with only limited permissions in the parent namespace. This enables teams to autonomously create subnamespaces without cluster-admin intervention.

Solving Dev Team Challenges: Hierarchical namespaces offer a solution for development teams. Cluster administrators can create a root namespace with necessary policies, and then delegate subnamespace creation to team members. This empowers teams to create and manage their namespaces within the defined policies, reducing administrative toil.

Installing HNC Plugin

# Select the latest version of HNC
# HNC_VERSION=v1.1.0
# HNC_VARIANT=default
# kubectl apply -f https://github.com/kubernetes-sigs/hierarchical-namespaces/releases/download/${HNC_VERSION}/${HNC_VARIANT}.yaml

# kubectl krew update
# kubectl krew install hns

# Ensure the plugin is working
# kubectl hns

Hands-on with hierarchical namespaces

Let's create a namespace:

# kubectl create ns test

Let's create a subnamespace subtest inside the namespace test:

# kubectl hns create subtest -n test

We can view the structure of these namespaces by asking for a tree view:

# kubectl hns tree test
test
└── [s] subtest

Let's create a Role and a RoleBinding and attach them to the test namespace:

# cat role.yaml
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  namespace: test
  name: example-role
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list", "create", "delete"]


# cat rolebinding.yaml
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: example-rolebinding
  namespace: test
subjects:
- kind: User
  name: "user1"  # replace with the actual username
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role  # or ClusterRole
  name: example-role
  apiGroup: rbac.authorization.k8s.io

# kubectl apply -f role.yaml
# kubectl apply -f rolebinding.yaml

Let's verify:

# kubectl describe rolebinding example-rolebinding -n test
Name:         example-rolebinding
Labels:       <none>
Annotations:  <none>
Role:
  Kind:  Role
  Name:  example-role
Subjects:
  Kind  Name   Namespace
  ----  ----   ---------
  User  user1

We did not apply the Role and RoleBinding to the subnamespace, so let's verify whether they were inherited:

# kubectl describe rolebinding example-rolebinding -n subtest
Name:         example-rolebinding
Labels:       app.kubernetes.io/managed-by=hnc.x-k8s.io
              hnc.x-k8s.io/inherited-from=test
Annotations:  <none>
Role:
  Kind:  Role
  Name:  example-role
Subjects:
  Kind  Name   Namespace
  ----  ----   ---------
  User  user1

Finally, HNC adds labels to these namespaces with useful information about the hierarchy, which you can use to apply other policies.

For example, you can create the following NetworkPolicy:

kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: allow-test
  namespace: test
spec:
  ingress:
  - from:
    - namespaceSelector:
        matchExpressions:
          - key: 'test.tree.hnc.x-k8s.io/depth' # Label created by HNC
            operator: Exists

This policy will both be propagated to all descendants of test, and will also allow ingress traffic between all of those namespaces. The “tree” label can only be applied by HNC, and is guaranteed to reflect the latest hierarchy.