Monday, August 25, 2025

Kubernetes part11

Class 96 Kubernetes Part8 August 24th


Helm :
Helm is the Kubernetes package manager. It brings the many YAML files an application needs, such as the backend and the frontend, under one roof (a Helm chart) and deploys them as a single unit.

Helm = frontend + backend + database

Package installation:
To monitor an application you would normally have to write deployments, daemonsets, services, configmaps and secrets, and volumes yourself. A Helm chart already contains all of these manifest files, so you simply install the chart. Similarly, Argo CD needs many resources; instead of creating each one by hand, it is simpler to install it with Helm.

Deployment:
If you deploy an application, you need several manifest files (deployment, service, configmap and secret, namespace, volumes, statefulset). One service can have multiple manifest files, and all of them are maintained by Helm.
In the future, if you want to change anything in the service, you simply change it in the Helm chart.
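The "one roof" idea can be sketched with an umbrella chart: a hedged example Chart.yaml (the chart names and paths here are hypothetical) that pulls the frontend, backend, and database in as dependencies:

```yaml
# Hypothetical umbrella chart: one Helm chart whose dependencies are the
# frontend, backend, and database subcharts (all names are assumptions).
apiVersion: v2
name: myapp
description: Umbrella chart bundling the whole application
version: 0.1.0
dependencies:
  - name: frontend
    version: 0.1.0
    repository: file://charts/frontend
  - name: backend
    version: 0.1.0
    repository: file://charts/backend
  - name: database
    version: 0.1.0
    repository: file://charts/database
```

Installing this one chart then deploys all three pieces together, which is exactly the Helm = frontend + backend + database idea above.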


Practical:

Step1: Create an instance (c7i-flex.large: 2 vCPU, 4 GiB memory) and install kops.
[root@ip-10-0-0-29 subbu]# curl -Lo kops https://github.com/kubernetes/kops/releases/download/$(curl -s https://api.github.com/repos/kubernetes/kops/releases/latest | grep tag_name | cut -d '"' -f 4)/kops-linux-amd64
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
100  274M  100  274M    0     0  73.5M      0  0:00:03  0:00:03 --:--:-- 80.8M
[root@ip-10-0-0-29 subbu]# chmod +x kops
[root@ip-10-0-0-29 subbu]# sudo mv kops /usr/local/bin/kops
[root@ip-10-0-0-29 subbu]# kops version
Client version: 1.33.0 (git-v1.33.0)
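The download URL above embeds a nested $(...): it asks the GitHub API for the latest release and extracts the tag with grep and cut. Here is that extraction simulated on a sample line of the API response:

```shell
# Simulate the grep/cut pipeline from the kops download command on one
# sample line of the GitHub API JSON response.
sample='  "tag_name": "v1.33.0",'
# cut splits on double quotes: field 4 is the tag value itself.
tag=$(echo "$sample" | grep tag_name | cut -d '"' -f 4)
echo "$tag"   # -> v1.33.0
```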

[root@ip-10-0-0-29 ~]# aws s3 ls
2025-08-16 16:58:08 ccitpublicbucket16
[root@ip-10-0-0-29 ~]# export KOPS_STATE_STORE=s3://ccitpublicbucket16
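One caveat with the export above: KOPS_STATE_STORE only lasts for the current shell session. A small sketch (bucket name taken from the transcript above) showing the variable and how you might persist it:

```shell
# KOPS_STATE_STORE tells kops which S3 bucket holds the cluster state.
# An export lives only in the current shell session.
export KOPS_STATE_STORE=s3://ccitpublicbucket16
echo "$KOPS_STATE_STORE"   # -> s3://ccitpublicbucket16

# To keep it across logins, you could append the export line to ~/.bashrc:
# echo "export KOPS_STATE_STORE=s3://ccitpublicbucket16" >> ~/.bashrc
```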

Step2: Kops Cluster creation 
You can take any number of worker nodes, but the master (control-plane) count should be an odd number.

[root@ip-10-0-0-29 ~]#kops create cluster --name kopsclstr.k8s.local --zones eu-west-2b,eu-west-2a --master-count 1 --master-size m7i-flex.large --master-volume-size 25 --node-count 2 --node-size c7i-flex.large --node-volume-size=20 --image=ami-044415bb13eee2391
[root@ip-10-0-0-29 ~]#kops update cluster --name kopsclstr.k8s.local --yes --admin

[root@ip-10-0-0-29 ~]# kops validate cluster --wait 10m
[root@ip-10-0-0-29 ~]# kubectl get nodes
NAME                  STATUS   ROLES           AGE    VERSION
i-024f0dd4fa62e367b   Ready    node            92s    v1.32.4
i-06146bd534402f3e7   Ready    control-plane   3m5s   v1.32.4
i-0fcbb7cfc223712de   Ready    node            51s    v1.32.4

Step3: Helm Installation 

https://helm.sh/docs/intro/install/
[root@ip-10-0-0-29 ~]# curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3
[root@ip-10-0-0-29 ~]# chmod 700 get_helm.sh
[root@ip-10-0-0-29 ~]# ./get_helm.sh
Downloading https://get.helm.sh/helm-v3.18.6-linux-amd64.tar.gz
Verifying checksum... Done.
Preparing to install helm into /usr/local/bin
helm installed into /usr/local/bin/helm

[root@ip-10-0-0-29 ~]# helm version
version.BuildInfo{Version:"v3.18.6", GitCommit:"b76a950f6835474e0906b96c9ec68a2eff3a6430", GitTreeState:"clean", GoVersion:"go1.24.6"}


Step4: On Artifact Hub, search for "Jenkins". The chart is published as a Helm repo; you just add the repo and install Jenkins.

https://artifacthub.io

[root@ip-10-0-0-29 ~]# helm repo add jenkins https://charts.jenkins.io
"jenkins" has been added to your repositories
[root@ip-10-0-0-29 ~]# helm install cicd jenkins/jenkins
NAME: cicd
LAST DEPLOYED: Thu Aug 28 10:30:12 2025
NAMESPACE: default
STATUS: deployed
REVISION: 1
NOTES:
1. Get your 'admin' user password by running:
  kubectl exec --namespace default -it svc/cicd-jenkins -c jenkins -- /bin/cat /run/secrets/additional/chart-admin-password && echo
2. Get the Jenkins URL to visit by running these commands in the same shell:
  echo http://127.0.0.1:8080
  kubectl --namespace default port-forward svc/cicd-jenkins 8080:8080

3. Login with the password from step 1 and the username: admin
4. Configure security realm and authorization strategy
5. Use Jenkins Configuration as Code by specifying configScripts in your values.yaml file, see documentation: http://127.0.0.1:8080/configuration-as-code and examples: https://github.com/jenkinsci/configuration-as-code-plugin/tree/master/demos

For more information on running Jenkins on Kubernetes, visit:
https://cloud.google.com/solutions/jenkins-on-container-engine

For more information about Jenkins Configuration as Code, visit:
https://jenkins.io/projects/jcasc/

Jenkins deployed successfully.

[root@ip-10-0-0-29 ~]# helm list
NAME    NAMESPACE       REVISION        UPDATED                                 STATUS          CHART           APP VERSION
cicd    default         1               2025-08-28 10:30:12.90224605 +0000 UTC  deployed        jenkins-5.8.83  2.516.2

Step5: See, all the pods and services are deployed automatically. Once you expose the service, you can use Jenkins.

[root@ip-10-0-0-29 ~]# kubectl get all
NAME                 READY   STATUS    RESTARTS   AGE
pod/cicd-jenkins-0   2/2     Running   0          94s

NAME                         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)     AGE
service/cicd-jenkins         ClusterIP   100.69.238.40   <none>        8080/TCP    94s
service/cicd-jenkins-agent   ClusterIP   100.68.56.47    <none>        50000/TCP   94s
service/kubernetes           ClusterIP   100.64.0.1      <none>        443/TCP     15m

NAME                            READY   AGE
statefulset.apps/cicd-jenkins   1/1     94s

Step6: Get the Jenkins admin login password
[root@ip-10-0-0-29 ~]#  kubectl exec --namespace default -it svc/cicd-jenkins -c jenkins -- /bin/cat /run/secrets/additional/chart-admin-password && echo
enaF8FPbxnGo81wfmwoBj7

Step7: Expose the service by changing its type from ClusterIP to NodePort

[root@ip-10-0-0-29 ~]# kubectl get service cicd-jenkins
NAME           TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
cicd-jenkins   ClusterIP   100.69.238.40   <none>        8080/TCP   21m

--Changed the service type from ClusterIP to NodePort

[root@ip-10-0-0-29 ~]# kubectl patch service cicd-jenkins -p '{"spec":{"type":"NodePort"}}'
service/cicd-jenkins patched
[root@ip-10-0-0-29 ~]# kubectl get service cicd-jenkins
NAME           TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
cicd-jenkins   NodePort   100.69.238.40   <none>        8080:32589/TCP   22m

--This command lists the EXTERNAL-IP (public IP) of every node; you can access Jenkins at http://<node-external-ip>:<nodeport>.
[root@ip-10-0-0-29 ~]# kubectl get nodes -o wide
NAME                  STATUS   ROLES           AGE   VERSION   INTERNAL-IP      EXTERNAL-IP      OS-IMAGE             KERNEL-VERSION   CONTAINER-RUNTIME
i-024f0dd4fa62e367b   Ready    node            39m   v1.32.4   172.20.55.209    35.177.119.234   Ubuntu 24.04.2 LTS   6.8.0-1029-aws   containerd://1.7.28
i-06146bd534402f3e7   Ready    control-plane   40m   v1.32.4   172.20.209.124   18.130.219.218   Ubuntu 24.04.2 LTS   6.8.0-1029-aws   containerd://1.7.28
i-0fcbb7cfc223712de   Ready    node            38m   v1.32.4   172.20.186.174   13.42.35.13      Ubuntu 24.04.2 LTS   6.8.0-1029-aws   containerd://1.7.28
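Putting the two outputs together: the Jenkins URL is any node's EXTERNAL-IP plus the NodePort. A small helper sketch (the kubectl lines in the comment are illustrative ways to fetch the two values):

```shell
# On the cluster, the two inputs could be fetched with, e.g.:
#   kubectl get svc cicd-jenkins -o jsonpath='{.spec.ports[0].nodePort}'
#   kubectl get nodes -o wide   (EXTERNAL-IP column)
jenkins_url() {
  local node_ip="$1" node_port="$2"
  printf 'http://%s:%s\n' "$node_ip" "$node_port"
}

# With the values from the transcript above:
jenkins_url 35.177.119.234 32589   # -> http://35.177.119.234:32589
```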

--Before accessing it, open the NodePort in the node servers' security group.


Step8: Uninstall the release
[root@ip-10-0-0-29 ~]# helm delete cicd
release "cicd" uninstalled
[root@ip-10-0-0-29 ~]# helm list
NAME    NAMESPACE       REVISION        UPDATED STATUS  CHART   APP V

The above was package management with Helm.
Now we plan to create our own manifest files using Helm for deployment.
Step1:

[root@ip-10-0-0-29 ~]# helm create ccit
Creating ccit

--By default, some files are created in the ccit directory

[root@ip-10-0-0-29 ~]# cd ccit
[root@ip-10-0-0-29 ccit]# ls
Chart.yaml  charts  templates  values.yaml

--values.yaml holds the deployment settings, such as how many replicas you need.

Step2: Change the following in values.yaml and save (note that type belongs under the service: key):

replicaCount: 2
image:
  repository: vakatisubbu/train
service:
  type: LoadBalancer

Step3: Chart.yaml

appVersion: "latest"

Step4: Install the chart from the current directory; it deploys successfully.
[root@ip-10-0-0-29 ccit]# helm install release-1 .
NAME: release-1
LAST DEPLOYED: Thu Aug 28 11:27:06 2025
NAMESPACE: default
STATUS: deployed
REVISION: 1
NOTES:
1. Get the application URL by running these commands:
     NOTE: It may take a few minutes for the LoadBalancer IP to be available.
           You can watch its status by running 'kubectl get --namespace default svc -w release-1-ccit'
  export SERVICE_IP=$(kubectl get svc --namespace default release-1-ccit --template "{{ range (index .status.loadBalancer.ingress 0) }}{{.}}{{ end }}")
  echo http://$SERVICE_IP:80

Step5: With the deployment, the pods and services below are created.

[root@ip-10-0-0-29 ccit]# kubectl get all
NAME                                  READY   STATUS    RESTARTS   AGE
pod/release-1-ccit-67c74cd674-hd4xm   1/1     Running   0          6m15s
pod/release-1-ccit-67c74cd674-zg694   1/1     Running   0          6m15s

NAME                     TYPE           CLUSTER-IP       EXTERNAL-IP                                                               PORT(S)        AGE
service/kubernetes       ClusterIP      100.64.0.1       <none>                                                                    443/TCP        76m
service/release-1-ccit   LoadBalancer   100.67.212.244   a4b72e79b052941108ae11c17eee7397-1886167053.eu-west-2.elb.amazonaws.com   80:30163/TCP   6m15s

NAME                             READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/release-1-ccit   2/2     2            2           6m15s

NAME                                        DESIRED   CURRENT   READY   AGE
replicaset.apps/release-1-ccit-67c74cd674   2         2         2       6m15s

With the load balancer URL, I am able to access the application.


It also works fine with the node IP.
Step6: If you want to change anything, such as the replica count, update values.yaml:

 replicaCount: 4

[root@ip-10-0-0-29 ccit]# helm upgrade release-1 .
Release "release-1" has been upgraded. Happy Helming!
NAME: release-1
LAST DEPLOYED: Thu Aug 28 11:43:52 2025
NAMESPACE: default
STATUS: deployed
REVISION: 2
NOTES:
1. Get the application URL by running these commands:
     NOTE: It may take a few minutes for the LoadBalancer IP to be available.
           You can watch its status by running 'kubectl get --namespace default svc -w release-1-ccit'
  export SERVICE_IP=$(kubectl get svc --namespace default release-1-ccit --template "{{ range (index .status.loadBalancer.ingress 0) }}{{.}}{{ end }}")
  echo http://$SERVICE_IP:80
Step7: See here, 4 replicas are now created.
[root@ip-10-0-0-29 ccit]# kubectl get all
NAME                                  READY   STATUS    RESTARTS   AGE
pod/release-1-ccit-67c74cd674-hd4xm   1/1     Running   0          17m
pod/release-1-ccit-67c74cd674-pqt2r   1/1     Running   0          44s
pod/release-1-ccit-67c74cd674-rdv6l   1/1     Running   0          44s
pod/release-1-ccit-67c74cd674-zg694   1/1     Running   0          17m

NAME                     TYPE           CLUSTER-IP       EXTERNAL-IP                                                               PORT(S)        AGE
service/kubernetes       ClusterIP      100.64.0.1       <none>                                                                    443/TCP        87m
service/release-1-ccit   LoadBalancer   100.67.212.244   a4b72e79b052941108ae11c17eee7397-1886167053.eu-west-2.elb.amazonaws.com   80:30163/TCP   17m

NAME                             READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/release-1-ccit   4/4     4            4           17m

NAME                                        DESIRED   CURRENT   READY   AGE
replicaset.apps/release-1-ccit-67c74cd674   4         4         4       17m

Step7: Now update the image in values.yaml:
image:
  repository: vakatisubbu/movies


Step8: Now change the image to vakatisubbu/recharge
[root@ip-10-0-0-29 ccit]# helm upgrade release-1 .
Release "release-1" has been upgraded. Happy Helming!
NAME: release-1
LAST DEPLOYED: Thu Aug 28 12:14:09 2025
NAMESPACE: default
STATUS: deployed
REVISION: 4
NOTES:
1. Get the application URL by running these commands:
     NOTE: It may take a few minutes for the LoadBalancer IP to be available.
           You can watch its status by running 'kubectl get --namespace default svc -w release-1-ccit'
  export SERVICE_IP=$(kubectl get svc --namespace default release-1-ccit --template "{{ range (index .status.loadBalancer.ingress 0) }}{{.}}{{ end }}")
  echo http://$SERVICE_IP:80

Step8: History for the release-1
[root@ip-10-0-0-29 ccit]# helm history release-1
REVISION        UPDATED                         STATUS          CHART           APP VERSION     DESCRIPTION
1               Thu Aug 28 11:27:06 2025        superseded      ccit-0.1.0      latest          Install complete
2               Thu Aug 28 11:43:52 2025        superseded      ccit-0.1.0      latest          Upgrade complete
3               Thu Aug 28 11:48:21 2025        superseded      ccit-0.1.0      latest          Upgrade complete
4               Thu Aug 28 12:14:09 2025        deployed        ccit-0.1.0      latest          Upgrade complete

Step9: See above, every change is a revision; you can roll back from one revision to another.

[root@ip-10-0-0-29 ccit]# helm rollback release-1 2
Rollback was a success! Happy Helming!
[root@ip-10-0-0-29 ccit]# helm history release-1
REVISION        UPDATED                         STATUS          CHART           APP VERSION     DESCRIPTION
1               Thu Aug 28 11:27:06 2025        superseded      ccit-0.1.0      latest          Install  complete (Train)
2               Thu Aug 28 11:43:52 2025        superseded      ccit-0.1.0      latest          Upgrade complete(Train) replica 4
3               Thu Aug 28 11:48:21 2025        superseded      ccit-0.1.0      latest          Upgrade complete(Movies)
4               Thu Aug 28 12:14:09 2025        superseded      ccit-0.1.0      latest          Upgrade complete(Recharge)
5               Thu Aug 28 12:18:28 2025        deployed        ccit-0.1.0      latest          Rollback to 2

We rolled back to revision 2: train with 4 replicas.
Now roll back to revision 4: recharge.

[root@ip-10-0-0-29 ccit]# helm history release-1
REVISION        UPDATED                         STATUS          CHART           APP VERSION     DESCRIPTION
1               Thu Aug 28 11:27:06 2025        superseded      ccit-0.1.0      latest          Install complete
2               Thu Aug 28 11:43:52 2025        superseded      ccit-0.1.0      latest          Upgrade complete
3               Thu Aug 28 11:48:21 2025        superseded      ccit-0.1.0      latest          Upgrade complete
4               Thu Aug 28 12:14:09 2025        superseded      ccit-0.1.0      latest          Upgrade complete
5               Thu Aug 28 12:18:28 2025        superseded      ccit-0.1.0      latest          Rollback to 2
6               Thu Aug 28 12:25:15 2025        deployed        ccit-0.1.0      latest          Rollback to 4

Uninstall the release:
[root@ip-10-0-0-29 ccit]# helm uninstall release-1
release "release-1" uninstalled
Then remove the ccit folder.

If you find it difficult to manage the chart's generated manifests, you can still use Helm with your own files: delete the templates folder (all the existing manifest files) and create your own manifest and service files instead.
Step1:
[root@ip-10-0-0-29 ~]#  helm create devops
Creating devops
[root@ip-10-0-0-29 ~]# cd devops
[root@ip-10-0-0-29 devops]# ls
Chart.yaml  charts  templates  values.yaml
[root@ip-10-0-0-29 devops]# rm -rf  templates charts values.yaml

[root@ip-10-0-0-29 devops]# ls
Chart.yaml
[root@ip-10-0-0-29 devops]# vi manifest.yaml
[root@ip-10-0-0-29 devops]# vi svc.yaml
[root@ip-10-0-0-29 devops]# vi Chart.yaml
[root@ip-10-0-0-29 devops]# helm install subbu-1 .
NAME: subbu-1
LAST DEPLOYED: Thu Aug 28 12:41:18 2025
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: None

[root@ip-10-0-0-29 devops]# kubectl get all
NAME        READY   STATUS    RESTARTS     AGE
pod/pod-1   1/1     Running   1 (6s ago)   13m

NAME                 TYPE           CLUSTER-IP       EXTERNAL-IP                                                              PORT(S)        AGE
service/kubernetes   ClusterIP      100.64.0.1       <none>                                                                   443/TCP        3h3m
service/myservice    LoadBalancer   100.71.148.165   a41d28df204c5426783ca08cf7c1a06e-188168759.eu-west-2.elb.amazonaws.com   80:30629/TCP   13m
In real time, we use values.yaml only.
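The transcript above creates manifest.yaml and svc.yaml with vi but does not show their contents. A minimal sketch consistent with the output (pod-1, myservice; the image name is an assumption) might look like:

```yaml
# manifest.yaml -- a single pod (image is a placeholder assumption)
apiVersion: v1
kind: Pod
metadata:
  name: pod-1
  labels:
    app: web
spec:
  containers:
    - name: web
      image: vakatisubbu/train   # assumed; any web image on port 80 works
      ports:
        - containerPort: 80
---
# svc.yaml -- LoadBalancer service fronting the pod above
apiVersion: v1
kind: Service
metadata:
  name: myservice
spec:
  type: LoadBalancer
  selector:
    app: web
  ports:
    - port: 80
      targetPort: 80
```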

Argocd
https://github.com/devops0014/all-setups/blob/master/argocd.sh

Git: SCM (source code management) tool
mvn: build tool
Jenkins: CI/CD tool
Argo CD: GitOps tool

GitOps is a way of managing software infrastructure and deployments using
Git as the source of truth.
Git as the Source of Truth: In GitOps, all our configurations (deployments,
services, secrets, etc.) are stored in a Git repository.
Automated Processes: Whenever we make any changes to those YAML files,
GitOps tools (Argo CD / Flux) detect and apply those changes to the
Kubernetes cluster. This ensures that the live infrastructure matches the
configuration in the Git repository.
Here we can clearly observe continuous deployment: whenever we make
any change in Git, it is automatically reflected in the Kubernetes cluster.
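Instead of clicking through the UI, the same GitOps wiring can be declared as an Argo CD Application manifest; a hedged sketch, where repoURL and path are placeholders for your own manifest repository:

```yaml
# Declarative equivalent of creating an app in the Argo CD UI.
# repoURL and path are placeholders, not from the class transcript.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: my-app
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/<your-account>/<your-repo>.git
    targetRevision: HEAD
    path: .
  destination:
    server: https://kubernetes.default.svc
    namespace: dev
  syncPolicy:
    automated: {}   # auto-sync: changes pushed to Git are applied automatically
```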


Step1:

[root@ip-10-0-0-29 devops]# kubectl create ns argocd
namespace/argocd created

[root@ip-10-0-0-29 ]#kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml
kubectl get all -n argocd

See these are service created 

[root@ip-10-0-0-29 ~]# kubectl get all -n argocd
NAME                                                    READY   STATUS    RESTARTS   AGE
pod/argocd-application-controller-0                     1/1     Running   0          4m28s
pod/argocd-applicationset-controller-64b8948d8b-wwpcp   1/1     Running   0          4m29s
pod/argocd-dex-server-6f48b6c5c7-qr9h8                  1/1     Running   0          4m29s
pod/argocd-notifications-controller-6c4547fb9c-mxl4l    1/1     Running   0          4m29s
pod/argocd-redis-78b9ff5487-kkp9r                       1/1     Running   0          4m28s
pod/argocd-repo-server-67d8c6bbf6-x77gd                 1/1     Running   0          4m28s
pod/argocd-server-577756d78b-x26gx                      1/1     Running   0          4m28s

NAME                                              TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                      AGE
service/argocd-applicationset-controller          ClusterIP   100.65.179.160   <none>        7000/TCP,8080/TCP            4m29s
service/argocd-dex-server                         ClusterIP   100.67.105.248   <none>        5556/TCP,5557/TCP,5558/TCP   4m29s
service/argocd-metrics                            ClusterIP   100.65.16.224    <none>        8082/TCP                     4m29s
service/argocd-notifications-controller-metrics   ClusterIP   100.67.55.85     <none>        9001/TCP                     4m29s
service/argocd-redis                              ClusterIP   100.64.151.140   <none>        6379/TCP                     4m29s
service/argocd-repo-server                        ClusterIP   100.69.214.207   <none>        8081/TCP,8084/TCP            4m29s
service/argocd-server                             ClusterIP   100.66.122.126   <none>        80/TCP,443/TCP               4m29s
service/argocd-server-metrics                     ClusterIP   100.68.60.26     <none>        8083/TCP                     4m29s

NAME                                               READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/argocd-applicationset-controller   1/1     1            1           4m29s
deployment.apps/argocd-dex-server                  1/1     1            1           4m29s
deployment.apps/argocd-notifications-controller    1/1     1            1           4m29s
deployment.apps/argocd-redis                       1/1     1            1           4m29s
deployment.apps/argocd-repo-server                 1/1     1            1           4m28s
deployment.apps/argocd-server                      1/1     1            1           4m28s

NAME                                                          DESIRED   CURRENT   READY   AGE
replicaset.apps/argocd-applicationset-controller-64b8948d8b   1         1         1       4m29s
replicaset.apps/argocd-dex-server-6f48b6c5c7                  1         1         1       4m29s
replicaset.apps/argocd-notifications-controller-6c4547fb9c    1         1         1       4m29s
replicaset.apps/argocd-redis-78b9ff5487                       1         1         1       4m29s
replicaset.apps/argocd-repo-server-67d8c6bbf6                 1         1         1       4m28s
replicaset.apps/argocd-server-577756d78b                      1         1         1       4m28s

NAME                                             READY   AGE
statefulset.apps/argocd-application-controller   1/1     4m28s

Step2: To access these services we have the Argo CD dashboard; execute these commands:

[root@ip-10-0-0-29 ~]# kubectl patch svc argocd-server -n argocd -p '{"spec": {"type": "LoadBalancer"}}'
service/argocd-server patched
[root@ip-10-0-0-29 ~]# yum install jq -y
Package jq-1.7.1-50.amzn2023.x86_64 is already installed.
Nothing to do.
Complete!
[root@ip-10-0-0-29 ~]# export ARGOCD_SERVER=$(kubectl get svc argocd-server -n argocd -o json | jq --raw-output '.status.loadBalancer.ingress[0].hostname')
[root@ip-10-0-0-29 ~]# echo $ARGOCD_SERVER
null

--The first attempt returns null because the load balancer hostname is not assigned yet; re-run after a minute:

[root@ip-10-0-0-29 ~]# kubectl get svc argocd-server -n argocd -o json | jq --raw-output .status.loadBalancer.ingress[0].hostname
a0c3024346c62444f9d55b6055cf9f15-880560546.eu-west-2.elb.amazonaws.com

Step3: Get the initial admin password using the command below (the password is the highlighted value):


[root@ip-10-0-0-29 ~]# export ARGO_PWD=$(kubectl -n argocd get secret argocd-initial-admin-secret -o jsonpath="{.data.password}" | base64 -d)
[root@ip-10-0-0-29 ~]# echo $ARGO_PWD
ijU6rjPrCcyul5tv
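The password is stored base64-encoded in the Secret, which is why the command above pipes the jsonpath output through base64 -d. A round-trip illustration with a dummy value (not the real password):

```shell
# Kubernetes Secrets hold values base64-encoded; decoding recovers the
# original text. 'hunter2' is a dummy stand-in for the admin password.
encoded=$(printf 'hunter2' | base64)
echo "$encoded"
printf '%s' "$encoded" | base64 -d   # -> hunter2
```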
Step4: After logging in, click +New App


[root@ip-10-0-0-29 ~]# kubectl create  ns dev
namespace/dev created


Step5: The cluster URL is filled in automatically; enter the dev namespace we created and click Create.

Step6: It synchronizes with the Git repo and shows Healthy.


As you can see here, the deployment, service, replica set, and three pods are created.

Using the command:

[root@ip-10-0-0-29 ~]# kubectl get po -n dev
NAME                      READY   STATUS    RESTARTS   AGE
my-app-5fb94777b5-9m7gs   1/1     Running   0          5m45s
my-app-5fb94777b5-c5tm4   1/1     Running   0          5m45s
my-app-5fb94777b5-pbps6   1/1     Running   0          5m45s

See here application opened successfully.


Step7: Now we plan to change the image in GitHub, and then check whether Argo CD automatically deploys from the Git repo.

Step8: In deployment.yml, change
image: shaikmustafa/paytm:bus to image: shaikmustafa/cycle

See here, it synchronized automatically: a new replica set was created, and from it new pods were created.

Step9: See here, the new application is live automatically.


--Cleanup: delete the cluster
 kops delete cluster --name kopsclstr.k8s.local --yes

--Thanks 

