Pluto
Pluto is a utility to help users find deprecated Kubernetes apiVersions in their code repositories and their Helm releases.
Purpose
Kubernetes Deprecation Policy
Kubernetes is a large system with many components and many contributors. As with any such software, the feature set naturally evolves over time, and sometimes a feature may need to be removed. This could include an API, a flag, or even an entire feature. To avoid breaking existing users, Kubernetes follows a deprecation policy for aspects of the system that are slated to be removed.
Deprecating parts of the API
Since Kubernetes is an API-driven system, the API has evolved over time to reflect the evolving understanding of the problem space. The Kubernetes API is actually a set of APIs, called "API groups", and each API group is independently versioned. API versions fall into 3 main tracks, each of which has different policies for deprecation:
v1 GA (generally available, stable)
v1beta1 Beta (pre-release)
v1alpha1 Alpha (experimental)
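You can see which track each API group/version in your cluster is on by listing the apiVersions the API server currently serves; anything carrying an alpha or beta suffix is a pre-release version and a candidate for eventual deprecation. A quick check, assuming kubectl access to the cluster:
# list every group/version the API server currently serves
kubectl api-versions
# show only the alpha and beta (pre-release) versions
kubectl api-versions | grep -E '(alpha|beta)'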
We can use Pluto in our CI/CD pipeline to verify our existing Helm templates and keep updating them to remove deprecated API versions, as sketched below.
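For example, a CI job could render a chart and pipe the manifests to Pluto, failing the build when deprecated or removed apiVersions are found. This is only a minimal sketch: the chart path ./my-chart and the target version v1.25.0 are placeholders, and exact flag behaviour may vary with the Pluto version.
# render the chart and scan the generated manifests;
# Pluto exits non-zero when deprecated/removed apiVersions are detected,
# which fails the CI job
helm template ./my-chart | pluto detect --target-versions k8s=v1.25.0 -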
Let's install Pluto and see how it works.
root@master:~# wget https://github.com/FairwindsOps/pluto/releases/download/v5.19.0/pluto_5.19.0_linux_amd64.tar.gz
root@master:~# tar -xvzf pluto_5.19.0_linux_amd64.tar.gz
root@master:~# cp pluto /usr/local/bin/
root@master:~# pluto version
Version:5.19.0 Commit:37f1ee128fd56136729e3491c399bd86e7690e15
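If you prefer a package manager over downloading the release tarball, Pluto can also be installed via Homebrew (assuming brew is available on your workstation; see the Pluto releases page for other options):
# install Pluto from the Fairwinds Homebrew tap
brew install FairwindsOps/tap/pluto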
Let's run Pluto to detect deprecated apiVersions in the cluster.
root@master:~# pluto detect-helm -owide
There were no resources found with known deprecated apiVersions.
Want more? Automate Pluto for free with Fairwinds Insights!
🚀 https://fairwinds.com/insights-signup/pluto 🚀
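By default Pluto checks against the deprecation data built into the binary; to evaluate the same Helm releases against a specific upcoming Kubernetes release, the target version can be pinned. A sketch, where v1.25.0 is just an example target:
# re-run the Helm release scan against a chosen Kubernetes target version
pluto detect-helm -owide --target-versions k8s=v1.25.0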
Check a single namespace only
root@master:~# pluto detect-helm -n metallb-system -owide
There were no resources found with known deprecated apiVersions.
Want more? Automate Pluto for free with Fairwinds Insights!
🚀 https://fairwinds.com/insights-signup/pluto 🚀
Check the nginx-ingress Helm chart on the local server
root@master:~# tree nginx-ingress
nginx-ingress
├── Chart.yaml
├── ci
│  ├── daemonset-customconfig-values.yaml
│  ├── daemonset-customnodeport-values.yaml
│  ├── daemonset-headers-values.yaml
│  ├── daemonset-internal-lb-values.yaml
│  ├── daemonset-nodeport-values.yaml
│  ├── daemonset-tcp-udp-configMapNamespace-values.yaml
│  ├── daemonset-tcp-udp-values.yaml
│  ├── daemonset-tcp-values.yaml
│  ├── deamonset-default-values.yaml
│  ├── deamonset-metrics-values.yaml
│  ├── deamonset-psp-values.yaml
│  ├── deamonset-webhook-and-psp-values.yaml
│  ├── deamonset-webhook-values.yaml
│  ├── deployment-autoscaling-values.yaml
│  ├── deployment-customconfig-values.yaml
│  ├── deployment-customnodeport-values.yaml
│  ├── deployment-default-values.yaml
│  ├── deployment-headers-values.yaml
│  ├── deployment-internal-lb-values.yaml
│  ├── deployment-metrics-values.yaml
│  ├── deployment-nodeport-values.yaml
│  ├── deployment-psp-values.yaml
│  ├── deployment-tcp-udp-configMapNamespace-values.yaml
│  ├── deployment-tcp-udp-values.yaml
│  ├── deployment-tcp-values.yaml
│  ├── deployment-webhook-and-psp-values.yaml
│  └── deployment-webhook-values.yaml
├── OWNERS
├── README.md
├── templates
│  ├── addheaders-configmap.yaml
│  ├── admission-webhooks
│  │  ├── job-patch
│  │  │  ├── clusterrolebinding.yaml
│  │  │  ├── clusterrole.yaml
│  │  │  ├── job-createSecret.yaml
│  │  │  ├── job-patchWebhook.yaml
│  │  │  ├── psp.yaml
│  │  │  ├── rolebinding.yaml
│  │  │  ├── role.yaml
│  │  │  └── serviceaccount.yaml
│  │  └── validating-webhook.yaml
│  ├── clusterrolebinding.yaml
│  ├── clusterrole.yaml
│  ├── controller-configmap.yaml
│  ├── controller-daemonset.yaml
│  ├── controller-deployment.yaml
│  ├── controller-hpa.yaml
│  ├── controller-metrics-service.yaml
│  ├── controller-poddisruptionbudget.yaml
│  ├── controller-prometheusrules.yaml
│  ├── controller-psp.yaml
│  ├── controller-rolebinding.yaml
│  ├── controller-role.yaml
│  ├── controller-serviceaccount.yaml
│  ├── controller-service-internal.yaml
│  ├── controller-servicemonitor.yaml
│  ├── controller-service.yaml
│  ├── controller-webhook-service.yaml
│  ├── default-backend-deployment.yaml
│  ├── default-backend-hpa.yaml
│  ├── default-backend-poddisruptionbudget.yaml
│  ├── default-backend-psp.yaml
│  ├── default-backend-rolebinding.yaml
│  ├── default-backend-role.yaml
│  ├── default-backend-serviceaccount.yaml
│  ├── default-backend-service.yaml
│  ├── _helpers.tpl
│  ├── NOTES.txt
│  ├── proxyheaders-configmap.yaml
│  ├── tcp-configmap.yaml
│  └── udp-configmap.yaml
└── values.yaml
5 directories, 71 files
WARNING: This chart is deprecated
There were no resources found with known deprecated apiVersions.
Want more? Automate Pluto for free with Fairwinds Insights!
🚀 https://fairwinds.com/insights-signup/pluto 🚀
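Chart files on disk can be scanned in a couple of ways: by pointing Pluto's file detection at the chart directory, or by rendering the chart with helm template and piping the output to Pluto. A sketch using the nginx-ingress directory shown above:
# scan the raw manifest/template files in the chart directory
pluto detect-files -d nginx-ingress
# or render the chart first and scan the generated manifests
helm template nginx-ingress | pluto detect -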
Verify the API versions of the workloads installed in the cluster
root@master:~# pluto detect-api-resources -owide
There were no resources found with known deprecated apiVersions.
Want more? Automate Pluto for free with Fairwinds Insights!
🚀 https://fairwinds.com/insights-signup/pluto 🚀
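For automation, the same scan can emit machine-readable output that can be archived or parsed in a pipeline (a sketch; the available output format names follow the Pluto documentation and may vary between versions):
# emit the scan result as JSON, e.g. for a CI artifact
pluto detect-api-resources -o json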
Please refer to the Pluto commands documentation to learn more.
Goldilocks
Goldilocks is a utility that can help you identify a starting point for resource requests and limits.
Goldilocks uses the Kubernetes vertical-pod-autoscaler (VPA) in recommendation mode to give suggestions for resource requests for each of our apps. This tool creates a VPA for each workload in a namespace and then queries them for information. Once your VPAs are in place, you'll see recommendations appear in the Goldilocks dashboard.
To make Goldilocks work, VPA must be installed in your cluster.
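If VPA is not yet present, one common way to install it is the upstream script from the kubernetes/autoscaler repository (a sketch, assuming git and cluster-admin access; the Fairwinds vpa Helm chart is another option):
# install the Vertical Pod Autoscaler components (recommender, updater, admission controller)
git clone https://github.com/kubernetes/autoscaler.git
cd autoscaler/vertical-pod-autoscaler
./hack/vpa-up.sh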
Let's see Goldilocks in action. Assuming VPA is running in your cluster, we will install Goldilocks.
First, let's verify that VPA is up and running in the cluster:
root@master:~# kubectl --namespace=kube-system get pods|grep vpa
vpa-admission-controller-68dccb8777-hlt7j 1/1 Running 0 22m
vpa-recommender-6b8dc459d8-gxhvf 1/1 Running 0 22m
vpa-updater-6498cd765-r9qg4 1/1 Running 0 22m
root@master:~# helm repo add fairwinds-stable https://charts.fairwinds.com/stable
root@master:~# kubectl create namespace goldilocks
root@master:~# helm install goldilocks --namespace goldilocks fairwinds-stable/goldilocks
Let's verify the pods and service running in the goldilocks namespace
root@master:~# kubectl get pods -n goldilocks
NAME READY STATUS RESTARTS AGE
goldilocks-controller-cccfb89f8-6wrls 1/1 Running 0 17m
goldilocks-dashboard-7744d9bddd-82rzb 1/1 Running 0 17m
goldilocks-dashboard-7744d9bddd-tmjjj 1/1 Running 0 17m
root@master:~# kubectl get svc -n goldilocks
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
goldilocks-dashboard LoadBalancer 10.97.78.212 172.16.16.158 80:30407/TCP 17m
Now we have to add the label goldilocks.fairwinds.com/enabled=true to each namespace that should be monitored by Goldilocks.
root@master:~# kubectl label ns default goldilocks.fairwinds.com/enabled=true
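Once a namespace is labelled, the Goldilocks controller should create a VPA object (in recommendation mode) for each workload in it. A quick way to confirm this, assuming the VPA CRDs are installed:
# list the VPA objects Goldilocks created for workloads in the default namespace
kubectl get vpa -n default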
Now let's access the Goldilocks dashboard from the browser. If everything is configured as expected, we will see recommendations on the dashboard for the workloads running in the default namespace.
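In this setup the dashboard is exposed through a LoadBalancer service (172.16.16.158 above); if no external load balancer is available, a port-forward works just as well (a sketch; the local port 8080 is arbitrary):
# forward the dashboard service to localhost:8080, then open http://localhost:8080
kubectl -n goldilocks port-forward svc/goldilocks-dashboard 8080:80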