Friday, August 15, 2025

Kubernetes part5

Class 89: Kubernetes Part 5 (August 15)

Step1: Install kops

[root@ip-10-0-0-22 ~]#   curl -Lo kops https://github.com/kubernetes/kops/releases/download/$(curl -s https://api.github.com/repos/kubernetes/kops/releases/latest | grep tag_name | cut -d '"' -f 4)/kops-linux-amd64
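The downloaded file is not yet executable; a couple of follow-up commands (standard for a kops install, not captured in the transcript above) finish the job:

```shell
# Make the downloaded kops binary executable and move it onto the PATH
chmod +x kops
sudo mv kops /usr/local/bin/kops
# Confirm the install
kops version
```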

Install kubectl
[root@ip-10-0-0-22 ~]# kubectl version
Client Version: v1.33.4
Kustomize Version: v5.6.0
The connection to the server localhost:8080 was refused - did you specify the right host or port?
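The kubectl install commands themselves are not captured above; on Linux they look roughly like this (a sketch based on the standard kubectl install procedure; the version fetched may differ):

```shell
# Download the latest stable kubectl, make it executable, and put it on the PATH
curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
chmod +x kubectl
sudo mv kubectl /usr/local/bin/kubectl
kubectl version --client
```

Note that the "connection refused" message above is expected at this point: no cluster exists yet, so kubectl has no API server to talk to.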

[root@ip-10-0-0-22 ~]# aws s3 ls
2025-08-16 16:58:08 ccitpublicbucket16

[root@ip-10-0-0-22 ~]# export KOPS_STATE_STORE=s3://ccitpublicbucket16

Step2: Create the cluster
[root@ip-10-0-0-22 ~]# kops create cluster --name kopsclstr.k8s.local --zones eu-west-2b,eu-west-2a --master-count 1 --master-size c7i-flex.large --master-volume-size 20 --node-count 2 --node-size t3.micro --node-volume-size=15 --image=ami-044415bb13eee2391

[root@ip-10-0-0-22 ~]# kops update cluster --name kopsclstr.k8s.local --yes --admin

Step3: Create the manifest file for the Deployment:
[root@ip-10-0-0-22 ~]# cat manifest.yaml
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mydeploy
spec:
  replicas: 2
  selector:
    matchLabels:
      app: zamoto
  template:
    metadata:
      labels:
        app: zamoto  # must match the selector above
    spec:
      containers:
      - name: container1
        image: shaikmustafa/mygame
        ports:
        - containerPort: 80

[root@ip-10-0-0-22 ~]# cat service.yaml
---
apiVersion: v1
kind: Service
metadata:
  name: myservice
spec:
  type: LoadBalancer
  selector:
    app: zamoto
  ports:
    - port: 80
      targetPort: 80
Step4: Object hierarchy so far:
pod --> container
rc/rs --> pod --> container
deploy --> rs --> pod --> container

Deployment feature: autoscaling. You provide the thresholds at deployment time; for example, if pod CPU usage crosses 60%, new pods are created, and when usage drops, pods are scaled back down.

  1. Replication
  2. Scaling (manual, autoscaling). Autoscaling example:
    desired: 3
    min: 2
    max: 10
  3. Rollback to any version
  4. No downtime
Step5: Validate the cluster; all instances were created successfully.
[root@ip-10-0-0-22 ~]#  kops validate cluster --wait 10m
[ec2-user@ip-10-0-0-22 ~]$ kubectl get nodes
NAME                  STATUS   ROLES           AGE     VERSION
i-012484f05449c29f6   Ready    control-plane   3m36s   v1.32.4
i-0c9c5c020ce9ab366   Ready    node            73s     v1.32.4
i-0e71b4d0cc69c02c6   Ready    node            85s     v1.32.4
Step6: Execute the manifest files using the command below:

[root@ip-10-0-0-22 ~]# kubectl create -f .
deployment.apps/mydeploy created
service/myservice created

Step7: Creating the Deployment also brings up a ReplicaSet:
[root@ip-10-0-0-22 ~]# kubectl get deploy
NAME       READY   UP-TO-DATE   AVAILABLE   AGE
mydeploy   2/2     2            2           47s

[root@ip-10-0-0-22 ~]# kubectl get rs
NAME                  DESIRED   CURRENT   READY   AGE
mydeploy-77d4499fd4   2         2         2       104s

Step8: List the Pods:
[root@ip-10-0-0-22 ~]# kubectl get pods
NAME                        READY   STATUS    RESTARTS   AGE
mydeploy-77d4499fd4-7j2xs   1/1     Running   0          2m6s
mydeploy-77d4499fd4-kldnq   1/1     Running   0          2m6s

Step9: List the Services:
[root@ip-10-0-0-22 ~]# kubectl get svc
NAME         TYPE           CLUSTER-IP      EXTERNAL-IP                                                               PORT(S)        AGE
kubernetes   ClusterIP      100.64.0.1      <none>                                                                    443/TCP        6m29s
myservice    LoadBalancer   100.71.88.229   a8ba63cae87274680944e81acc4a3da6-2050377277.eu-west-2.elb.amazonaws.com   80:30315/TCP   2m56s

Step10: Open the LoadBalancer DNS name in a browser.
Step11:
[root@ip-10-0-0-22 ~]# kubectl get deploy
NAME       READY   UP-TO-DATE   AVAILABLE   AGE
mydeploy   2/2     2            2           11m
[root@ip-10-0-0-22 ~]# kubectl describe deploy
Name:                   mydeploy
Namespace:              default
CreationTimestamp:      Sun, 17 Aug 2025 13:31:01 +0000
Labels:                 <none>
Annotations:            deployment.kubernetes.io/revision: 1
Selector:               app=zamoto
Replicas:               2 desired | 2 updated | 2 total | 2 available | 0 unavailable
StrategyType:           RollingUpdate
MinReadySeconds:        0
RollingUpdateStrategy:  25% max unavailable, 25% max surge
Pod Template:
  Labels:  app=zamoto
  Containers:
   container1:
    Image:         shaikmustafa/mygame
    Port:          80/TCP
    Host Port:     0/TCP
    Environment:   <none>
    Mounts:        <none>
  Volumes:         <none>
  Node-Selectors:  <none>
  Tolerations:     <none>
Conditions:
  Type           Status  Reason
  ----           ------  ------
  Available      True    MinimumReplicasAvailable
  Progressing    True    NewReplicaSetAvailable
OldReplicaSets:  <none>
NewReplicaSet:   mydeploy-77d4499fd4 (2/2 replicas created)
Events:
  Type    Reason             Age   From                   Message
  ----    ------             ----  ----                   -------
  Normal  ScalingReplicaSet  11m   deployment-controller  Scaled up replica set mydeploy-77d4499fd4 from 0 to 2

Test our features

No downtime
Step12: Change the image in the deployment file:
[root@ip-10-0-0-22 ~]# vi manifest.yaml

image: shaikmustafa/dm

[root@ip-10-0-0-22 ~]# kubectl apply -f manifest.yaml
Warning: resource deployments/mydeploy is missing the kubectl.kubernetes.io/last-applied-configuration annotation which is required by kubectl apply. kubectl apply should only be used on resources created declaratively by either kubectl create --save-config or kubectl apply. The missing annotation will be patched automatically.
deployment.apps/mydeploy configured

See: with no downtime, the deployment changed to another webpage.


Step13: The image can also be changed from the command line:
[root@ip-10-0-0-22 ~]# kubectl set image deploy mydeploy container1=shaikmustafa/cycle
deployment.apps/mydeploy image updated


Step14: The rollout status command below checks whether the new image finished deploying:
[root@ip-10-0-0-22 ~]# kubectl rollout status deploy mydeploy
deployment "mydeploy" successfully rolled out
Step15: Update to the image shaikmustafa/paytm:movie:
[root@ip-10-0-0-22 ~]# kubectl set image deploy mydeploy container1=shaikmustafa/paytm:movie
deployment.apps/mydeploy image updated
[root@ip-10-0-0-22 ~]# kubectl rollout status deploy mydeploy
Waiting for deployment "mydeploy" rollout to finish: 1 out of 2 new replicas have been updated...
Waiting for deployment "mydeploy" rollout to finish: 1 out of 2 new replicas have been updated...
Waiting for deployment "mydeploy" rollout to finish: 1 out of 2 new replicas have been updated...
Waiting for deployment "mydeploy" rollout to finish: 1 old replicas are pending termination...
Waiting for deployment "mydeploy" rollout to finish: 1 old replicas are pending termination...
deployment "mydeploy" successfully rolled out
[root@ip-10-0-0-22 ~]#

Step16:
Deployment --> RS1 --> 3 pods created (first game).
After the image update, RS2 creates one new pod; once it is running successfully, one pod in RS1 is removed. In the same way the second new pod is created and the second RS1 pod removed, then the third, until all pods run the new version of the application.
Technically, this strategy is called the "rolling update" strategy.
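The 25% max surge / 25% max unavailable values shown earlier by kubectl describe deploy are the Kubernetes defaults; they can be tuned explicitly in the Deployment manifest (a sketch, to be merged under the existing mydeploy spec):

```yaml
spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1          # at most 1 extra pod above the desired count during an update
      maxUnavailable: 0    # never drop below the desired count
```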



Step17: Rolling back the application

After a change, the previous ReplicaSets are not deleted; they persist so that you can roll back to a previous version of the application. Below, the active ReplicaSet runs the paytm movie application; we now plan to roll back, which will re-activate one of the older ReplicaSets.

[root@ip-10-0-0-22 ~]# kubectl get rs
NAME                  DESIRED   CURRENT   READY   AGE
mydeploy-547d4c47c7   0         0         0       50m
mydeploy-5f87bb8854   0         0         0       36m
mydeploy-77d4499fd4   0         0         0       65m
mydeploy-8565b4b985   2         2         2       30m

Undo command (here 3 is the revision number):
[root@ip-10-0-0-22 ~]# kubectl rollout undo deployment mydeploy --to-revision=3
deployment.apps/mydeploy rolled back

[root@ip-10-0-0-22 ~]# kubectl rollout status deploy mydeploy
deployment "mydeploy" successfully rolled out
The application changed to cycle.


See here that the active ReplicaSet changed from mydeploy-8565b4b985 to mydeploy-5f87bb8854:
[root@ip-10-0-0-22 ~]# kubectl  get rs
NAME                  DESIRED   CURRENT   READY   AGE
mydeploy-547d4c47c7   0         0         0       58m
mydeploy-5f87bb8854   2         2         2       44m
mydeploy-77d4499fd4   0         0         0       73m
mydeploy-8565b4b985   0         0         0       38m

This command shows one pod being created while another is terminated (-w means watch):

[root@ip-10-0-0-22 ~]# kubectl  get  po -w
NAME                        READY   STATUS              RESTARTS   AGE
mydeploy-5f87bb8854-b9rp7   0/1     ContainerCreating   0          0s
mydeploy-5f87bb8854-mrbws   1/1     Running             0          3s
mydeploy-8565b4b985-7pv2w   1/1     Terminating         0          57s
mydeploy-8565b4b985-d4lrj   1/1     Running             0          55s
mydeploy-8565b4b985-7pv2w   0/1     Completed           0          58s
mydeploy-8565b4b985-7pv2w   0/1     Completed           0          59s
mydeploy-8565b4b985-7pv2w   0/1     Completed           0          59s
mydeploy-5f87bb8854-b9rp7   1/1     Running             0          3s
mydeploy-8565b4b985-d4lrj   1/1     Terminating         0          58s

See here, I have changed to revision 4:

[root@ip-10-0-0-22 ~]#  kubectl rollout undo deployment mydeploy --to-revision=4
deployment.apps/mydeploy rolled back
[root@ip-10-0-0-22 ~]# kubectl  get  po -w
NAME                        READY   STATUS    RESTARTS   AGE
mydeploy-8565b4b985-7pv2w   1/1     Running   0          11s
mydeploy-8565b4b985-d4lrj   1/1     Running   0          9s

Going back again: revision 3 should no longer be available, because the rollback re-numbered it as revision 5:

[root@ip-10-0-0-22 ~]#  kubectl rollout undo deployment mydeploy --to-revision=3
error: unable to find specified revision 3 in history

[root@ip-10-0-0-22 ~]#  kubectl rollout undo deployment mydeploy --to-revision=5
deployment.apps/mydeploy rolled back




Now the current application is Cycle 



Step18: See here: in the deployment history, revision 3 was removed.
[root@ip-10-0-0-22 ~]# kubectl rollout history deploy mydeploy
deployment.apps/mydeploy
REVISION  CHANGE-CAUSE
1         <none>
2         <none>
6         <none>
7         <none>


[root@ip-10-0-0-22 ~]#  kubectl rollout undo deployment mydeploy --to-revision=2
deployment.apps/mydeploy rolled back
[root@ip-10-0-0-22 ~]# kubectl  get  po -w
NAME                        READY   STATUS              RESTARTS   AGE
mydeploy-547d4c47c7-4cmnc   1/1     Running             0          4s
mydeploy-547d4c47c7-tfp6p   0/1     ContainerCreating   0          1s
mydeploy-5f87bb8854-b9rp7   1/1     Running             0          9m29s
mydeploy-547d4c47c7-tfp6p   1/1     Running             0          2s
mydeploy-5f87bb8854-b9rp7   1/1     Terminating         0          9m31s
mydeploy-5f87bb8854-b9rp7   0/1     Completed           0          9m32s
mydeploy-5f87bb8854-b9rp7   0/1     Completed           0          9m33s
mydeploy-5f87bb8854-b9rp7   0/1     Completed           0          9m33s

[root@ip-10-0-0-22 ~]# kubectl rollout history deploy mydeploy
deployment.apps/mydeploy
REVISION  CHANGE-CAUSE
1         <none>
6         <none>
7         <none>
8         <none>


Step19: Delete the pods and check whether they are recreated automatically.

See: after deleting all the pods, replacement pods were created automatically within about 5 seconds.

[root@ip-10-0-0-22 ~]# kubectl get pods
NAME                        READY   STATUS    RESTARTS   AGE
mydeploy-547d4c47c7-4cmnc   1/1     Running   0          3m25s
mydeploy-547d4c47c7-tfp6p   1/1     Running   0          3m22s
[root@ip-10-0-0-22 ~]# kubectl delete pod --all
pod "mydeploy-547d4c47c7-4cmnc" deleted
pod "mydeploy-547d4c47c7-tfp6p" deleted
[root@ip-10-0-0-22 ~]# kubectl get pods
NAME                        READY   STATUS    RESTARTS   AGE
mydeploy-547d4c47c7-b22bc   1/1     Running   0          5s
mydeploy-547d4c47c7-t9x4z   1/1     Running   0          5s

Step20: Manual scaling; first increase the number of pods:

[root@ip-10-0-0-22 ~]# cat manifest.yaml
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mydeploy
spec:
  replicas: 4
  selector:
    matchLabels:
      app: zamoto
  template:
    metadata:
      labels:
        app: zamoto  # must match the selector above
    spec:
      containers:
      - name: container1
        image: shaikmustafa/dm
        ports:
        - containerPort: 80

[root@ip-10-0-0-22 ~]# kubectl apply -f manifest.yaml
deployment.apps/mydeploy configured
[root@ip-10-0-0-22 ~]# kubectl get po
NAME                        READY   STATUS    RESTARTS   AGE
mydeploy-547d4c47c7-b22bc   1/1     Running   0          4m45s
mydeploy-547d4c47c7-sgkc7   1/1     Running   0          9s
mydeploy-547d4c47c7-t9x4z   1/1     Running   0          4m45s
mydeploy-547d4c47c7-zstz8   1/1     Running   0          9s


Scale down using the command:

[root@ip-10-0-0-22 ~]# kubectl scale deploy mydeploy  --replicas 2
deployment.apps/mydeploy scaled
[root@ip-10-0-0-22 ~]# kubectl get po
NAME                        READY   STATUS    RESTARTS   AGE
mydeploy-547d4c47c7-b22bc   1/1     Running   0          5m55s
mydeploy-547d4c47c7-t9x4z   1/1     Running   0          5m55s


Step21: Autoscaling needs pod metrics, provided by the metrics server:

https://kubernetes-sigs.github.io/metrics-server/

kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/high-availability.yaml

[root@ip-10-0-0-22 ~]# curl -LO https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/high-availability.yaml
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
100  4871  100  4871    0     0  16938      0 --:--:-- --:--:-- --:--:-- 16938
[root@ip-10-0-0-22 ~]# sed -i 's|policy/v1beta1|policy/v1|g' high-availability.yaml
[root@ip-10-0-0-22 ~]# kubectl apply -f high-availability.yaml
serviceaccount/metrics-server unchanged
clusterrole.rbac.authorization.k8s.io/system:aggregated-metrics-reader unchanged
clusterrole.rbac.authorization.k8s.io/system:metrics-server unchanged
rolebinding.rbac.authorization.k8s.io/metrics-server-auth-reader unchanged
clusterrolebinding.rbac.authorization.k8s.io/metrics-server:system:auth-delegator unchanged
clusterrolebinding.rbac.authorization.k8s.io/system:metrics-server unchanged
service/metrics-server unchanged
deployment.apps/metrics-server configured
poddisruptionbudget.policy/metrics-server created
apiservice.apiregistration.k8s.io/v1beta1.metrics.k8s.io unchanged
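The sed step above is needed because some metrics-server manifests declare the PodDisruptionBudget under the deprecated policy/v1beta1 API group, which newer clusters reject. A minimal local demonstration of the substitution, using a hypothetical sample file:

```shell
# Create a sample manifest line using the deprecated API group
printf 'apiVersion: policy/v1beta1\n' > pdb-sample.yaml
# Rewrite the deprecated group in place, exactly as done for high-availability.yaml
sed -i 's|policy/v1beta1|policy/v1|g' pdb-sample.yaml
cat pdb-sample.yaml   # now reads: apiVersion: policy/v1
```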

Verify that the metrics-server is installed using the command below:
[root@ip-10-0-0-22 ~]# kubectl get deployment metrics-server -n kube-system
NAME             READY   UP-TO-DATE   AVAILABLE   AGE
metrics-server   2/2     2            2           5m39s


[root@ip-10-0-0-22 ~]# cat hp.yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: nginx-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: mydeploy
  minReplicas: 1
  maxReplicas: 5
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 50  # target 50% CPU utilization

Once the HPA is created, autoscaling kicks in at 50% CPU utilization, so you need to stress the server to test it.
[root@ip-10-0-0-22 ~]# kubectl create -f hp.yaml
horizontalpodautoscaler.autoscaling/nginx-hpa created
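For reference, the HPA computes the desired replica count as ceil(currentReplicas * currentUtilization / targetUtilization), clamped between minReplicas and maxReplicas. A quick sketch of that arithmetic with assumed sample numbers (2 pods at 100% CPU against the 50% target):

```shell
# desired = ceil(current * utilization / target), clamped to [min, max]
awk 'BEGIN {
  current = 2; util = 100; target = 50; min = 1; max = 5
  desired = int((current * util + target - 1) / target)  # integer ceiling
  if (desired < min) desired = min
  if (desired > max) desired = max
  print desired
}'
```

With these numbers the HPA would scale from 2 pods to 4.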


Step22: Go inside the pod:
[root@ip-10-0-0-22 ~]# kubectl get po
NAME                        READY   STATUS    RESTARTS   AGE
mydeploy-547d4c47c7-b22bc   1/1     Running   0          25m
mydeploy-547d4c47c7-t9x4z   1/1     Running   0          25m
[root@ip-10-0-0-22 ~]# kubectl exec -it mydeploy-547d4c47c7-t9x4z -- bash
root@mydeploy-547d4c47c7-t9x4z:/#

Step23:
root@mydeploy-547d4c47c7-t9x4z:/# apt update -y

root@mydeploy-547d4c47c7-t9x4z:/# apt install stress -y

root@mydeploy-547d4c47c7-t9x4z:/# stress -c 2 400 -v
stress: FAIL: [305] (244) unrecognized option: 400
(The first attempt failed because the timeout value requires the -t flag.)
root@mydeploy-547d4c47c7-t9x4z:/# stress -c 2 -t 400 -v
stress: info: [306] dispatching hogs: 2 cpu, 0 io, 0 vm, 0 hdd
stress: dbug: [306] using backoff sleep of 6000us
stress: dbug: [306] setting timeout to 400s
stress: dbug: [306] --> hogcpu worker 2 [307] forked
stress: dbug: [306] using backoff sleep of 3000us
stress: dbug: [306] setting timeout to 400s
stress: dbug: [306] --> hogcpu worker 1 [308] forked

Step24: Previously there were only 2 pods; after stressing the server, 5 pods were created automatically.


Step25: Once you cancel the stress, the pods are scaled back down automatically; this takes some time.
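The delay comes from the HPA's scale-down stabilization window, which defaults to 300 seconds. If desired, it can be shortened in the HPA spec (a sketch, assuming the autoscaling/v2 API used in hp.yaml above):

```yaml
spec:
  behavior:
    scaleDown:
      stabilizationWindowSeconds: 60  # default is 300 (5 minutes)
```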

Delete the cluster:
[root@ip-10-0-0-22 ~]# export KOPS_STATE_STORE=s3://ccitpublicbucket16
[root@ip-10-0-0-22 ~]# kops delete cluster --name kopsclstr.k8s.local --yes

Deleted kubectl config for kopsclstr.k8s.local

Deleted cluster: "kopsclstr.k8s.local"



--Thanks 

