Thursday, August 14, 2025

Kubernetes Part4

Class 88: Kubernetes Part 4, August 14

Replication and Deployment

Step1: Create one EC2 machine, then install kubectl and install kops for the cluster.

Go through Part 3 for the installation steps.

https://kops.sigs.k8s.io/getting_started/install/

[root@ip-10-0-0-22 ~]# kops version

Client version: 1.33.0 (git-v1.33.0)

Step2: Replication Controller: whenever a pod is deleted, Kubernetes automatically creates a replacement pod. This self-healing behavior is a feature Kubernetes provides.

Step3: Create Cluster 

 [root@ip-10-0-0-22 ~]# kops create cluster --name kopsclstr.k8s.local --zones eu-west-2b,eu-west-2a --master-count 1 --master-size c7i-flex.large --master-volume-size 20 --node-count 2 --node-size t3.micro --node-volume-size=15 --image=ami-044415bb13eee2391

[root@ip-10-0-0-22 ~]# kops update cluster --name kopsclstr.k8s.local --yes --admin

--List of clusters

[root@ip-10-0-0-22 ~]# kops get cluster

NAME                    CLOUD   ZONES

kopsclstr.k8s.local     aws     eu-west-2a,eu-west-2b

The Drawback

In both of the above methods we are able to create a pod, but what if we delete the pod?

Once you delete the pod, you can no longer access it, which creates a lot of difficulty in real time.

Note: Self healing is not available here.

To overcome this, we use Kubernetes components called RC, RS,

Deployment, DaemonSets, etc.

(Pod object, RC = ReplicationController, RS = ReplicaSet)

REPLICAS IN KUBERNETES:

·       Before Kubernetes, other tools did not provide important, customizable features like scaling and replication.

·       When Kubernetes was introduced, replication and scaling were the premium features that increased the popularity of this container orchestration tool.

·       Replication means that if the desired state is set to 3 pods and any pod fails, a replacement is created automatically, which reduces the downtime of the application.

·       Scaling means that if the load on the application increases, Kubernetes increases the number of pods according to that load.

  • ReplicationController is an object in Kubernetes, introduced in v1, that helps reconcile the cluster's current state to the desired state. ReplicationController works with equality-based selectors only.
  • ReplicaSet is an object in Kubernetes and an advanced version of ReplicationController. ReplicaSet works with both equality-based and set-based selectors.
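The difference between the two selector styles can be tried directly from kubectl (a sketch; it assumes a running cluster, and "zomato" is a hypothetical second label value not used in this class):

```shell
# Equality-based: matches pods whose label "app" equals exactly this value
kubectl get pods -l app=swiggy

# Set-based: matches pods whose label "app" is any value in the set
kubectl get pods -l 'app in (swiggy,zomato)'
```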

Step3: First we create the RC, and the RC creates the pods. We tell the RC how many pods are required; the RC continuously monitors the pods, and if any pod goes down, the replication controller

automatically creates a replacement pod. We can create a ReplicationController in two ways:

1. Imperative 2. Declarative (manifest file) — the declarative approach is what we use in real time.
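As a hedged sketch of the two styles (the deployment name "mydep" and the nginx image are illustrative, not from this class, and the commands assume a running cluster):

```shell
# Imperative: the resource is described entirely on the command line
kubectl create deployment mydep --image=nginx --replicas=2

# Declarative: the desired state lives in a manifest file
kubectl apply -f manifest.yaml
```

In real time the declarative manifest is preferred because it can be reviewed, version-controlled, and re-applied.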

[root@ip-10-0-0-22 ~]# export KOPS_STATE_STORE=s3://ccitpublicbucket1

Part1: ReplicationController with the name myrc

Part2: Pod specification for the ReplicationController: replica count 3; the selector selects pods by label, and the template's metadata adds that label, so every pod created carries it

Part3: Container specification

root@ip-10-0-0-22 ~]# vi manifest.yaml

[root@ip-10-0-0-22 ~]# cat manifest.yaml

apiVersion: v1
kind: ReplicationController
metadata:
  name: myrc
spec:
  replicas: 3
  selector:
    app: swiggy
  template:
    metadata:
      labels:
        app: swiggy  # must match the selector above
    spec:
      containers:
      - name: container1
        image: shaikmustafa/cycle
        ports:
        - containerPort: 80
Step4: Observe that each pod name is the RC name (myrc) with a random suffix attached.
Before
[root@ip-10-0-0-22 ~]# kubectl get pods
No resources found in default namespace.

[root@ip-10-0-0-22 ~]# kubectl create -f manifest.yaml

replicationcontroller/myrc created
After :
[root@ip-10-0-0-22 ~]# kubectl get pods
NAME         READY   STATUS    RESTARTS   AGE
myrc-bvch9   1/1     Running   0          2m54s
myrc-g6w5t   1/1     Running   0          2m54s
myrc-h58kj   1/1     Running   0          2m54s
[root@ip-10-0-0-22 ~]#
[root@ip-10-0-0-22 ~]# kubectl get rc
NAME   DESIRED   CURRENT   READY   AGE
myrc   3         3         3       3m41s

Step5: Delete one pod (myrc-bvch9); within about 9 seconds a replacement pod is created automatically.

[root@ip-10-0-0-22 ~]# kubectl delete pod myrc-bvch9
pod "myrc-bvch9" deleted
[root@ip-10-0-0-22 ~]# kubectl get pods --show-labels
NAME         READY   STATUS    RESTARTS   AGE     LABELS
myrc-g6w5t   1/1     Running   0          7m53s   app=swiggy
myrc-h58kj   1/1     Running   0          7m53s   app=swiggy
myrc-mmbts   1/1     Running   0          9s      app=swiggy

Step6: Delete all pods; within about 4 seconds all pods are recreated automatically.
[root@ip-10-0-0-22 ~]# kubectl delete pod --all
pod "myrc-g6w5t" deleted
pod "myrc-h58kj" deleted
pod "myrc-mmbts" deleted
[root@ip-10-0-0-22 ~]# kubectl get pods --show-labels
NAME         READY   STATUS    RESTARTS   AGE   LABELS
myrc-ccctm   1/1     Running   0          4s    app=swiggy
myrc-x4slc   1/1     Running   0          4s    app=swiggy
myrc-zxwnd   1/1     Running   0          4s    app=swiggy

To expose the pods, we need to create a Service manifest.

Step7: Create the Service
[root@ip-10-0-0-22 ~]# vi service.yaml
[root@ip-10-0-0-22 ~]# cat service.yaml
---
apiVersion: v1
kind: Service
metadata:
  name: myservice
spec:
  type: LoadBalancer
  selector:
    app: swiggy
  ports:
    - port: 80
      targetPort: 80
[root@ip-10-0-0-22 ~]# kubectl create -f service.yaml
service/myservice created

Step8: See below — a stable cluster IP and a load balancer DNS name were created automatically.
--List of services 
[root@ip-10-0-0-22 ~]# kubectl get svc
NAME         TYPE           CLUSTER-IP      EXTERNAL-IP                                                              PORT(S)        AGE
kubernetes   ClusterIP      100.64.0.1      <none>                                                                   443/TCP        29m
myservice    LoadBalancer   100.66.58.204   ae126a2383e494e72af8aa2c68283098-161964620.eu-west-2.elb.amazonaws.com   80:31062/TCP   52s


Step9: Try deleting all pods and then accessing the application; because the RC recreates the pods, the application remains accessible.

Step10: If you want to create more pods, you need to modify the replica count.

Technically, increasing the number of pods is called scaling out, and decreasing it is called scaling in.

[root@ip-10-0-0-22 ~]# vi manifest.yaml

  replicas: 5  # changed from 3

  [root@ip-10-0-0-22 ~]# kubectl apply -f manifest.yaml

Warning: resource replicationcontrollers/myrc is missing the kubectl.kubernetes.io/last-applied-configuration annotation which is required by kubectl apply. kubectl apply should only be used on resources created declaratively by either kubectl create --save-config or kubectl apply. The missing annotation will be patched automatically.

replicationcontroller/myrc configured

[root@ip-10-0-0-22 ~]# kubectl get pods --show-labels

NAME         READY   STATUS    RESTARTS   AGE     LABELS

myrc-6kht8   1/1     Running   0          7m23s   app=swiggy

myrc-7pvh6   1/1     Running   0          21s     app=swiggy

myrc-88kv4   1/1     Running   0          21s     app=swiggy

myrc-jhg8d   1/1     Running   0          7m23s   app=swiggy

myrc-ljcpb   1/1     Running   0          7m23s   app=swiggy

Step11: Decrease the replica count in the manifest (scale down)
[root@ip-10-0-0-22 ~]# vi manifest.yaml
[root@ip-10-0-0-22 ~]# kubectl apply -f manifest.yaml
replicationcontroller/myrc configured
[root@ip-10-0-0-22 ~]# kubectl get pods --show-labels
NAME         READY   STATUS    RESTARTS   AGE   LABELS
myrc-6kht8   1/1     Running   0          10m   app=swiggy
myrc-jhg8d   1/1     Running   0          10m   app=swiggy
myrc-ljcpb   1/1     Running   0          10m   app=swiggy

Step12: We can also increase the pods using a command.

[root@ip-10-0-0-22 ~]# kubectl scale rc myrc --replicas 4
replicationcontroller/myrc scaled
[root@ip-10-0-0-22 ~]# kubectl get pods --show-labels
NAME         READY   STATUS    RESTARTS   AGE   LABELS
myrc-6kht8   1/1     Running   0          13m   app=swiggy
myrc-hl89f   1/1     Running   0          5s    app=swiggy
myrc-jhg8d   1/1     Running   0          13m   app=swiggy
myrc-ljcpb   1/1     Running   0          13m   app=swiggy
[root@ip-10-0-0-22 ~]#

Step13: Decrease using a command
[root@ip-10-0-0-22 ~]# kubectl scale rc myrc --replicas 1
replicationcontroller/myrc scaled
[root@ip-10-0-0-22 ~]# kubectl get pods --show-labels
NAME         READY   STATUS    RESTARTS   AGE   LABELS
myrc-6kht8   1/1     Running   0          14m   app=swiggy

Step14: Full information about the RC, including events
[root@ip-10-0-0-22 ~]# kubectl describe rc myrc
Name:         myrc
Namespace:    default
Selector:     app=swiggy
Labels:       app=swiggy
Annotations:  <none>
Replicas:     1 current / 1 desired
Pods Status:  1 Running / 0 Waiting / 0 Succeeded / 0 Failed

Pod Template:
  Labels:  app=swiggy
  Containers:
   container1:
    Image:         shaikmustafa/cycle
    Port:          80/TCP
    Host Port:     0/TCP
    Environment:   <none>
    Mounts:        <none>
  Volumes:         <none>
  Node-Selectors:  <none>
  Tolerations:     <none>
Events:
  Type    Reason            Age                 From                    Message
  ----    ------            ----                ----                    -------
  Normal  SuccessfulCreate  32m                 replication-controller  Created pod: myrc-g6w5t
  Normal  SuccessfulCreate  32m                 replication-controller  Created pod: myrc-bvch9
  Normal  SuccessfulCreate  32m                 replication-controller  Created pod: myrc-h58kj
  Normal  SuccessfulCreate  26m                 replication-controller  Created pod: myrc-mmbts
  Normal  SuccessfulCreate  23m                 replication-controller  Created pod: myrc-ccctm
  Normal  SuccessfulCreate  23m                 replication-controller  Created pod: myrc-x4slc
  Normal  SuccessfulCreate  23m                 replication-controller  Created pod: myrc-zxwnd
  Normal  SuccessfulCreate  15m                 replication-controller  Created pod: myrc-ljcpb
  Normal  SuccessfulCreate  15m                 replication-controller  Created pod: myrc-6kht8
  Normal  SuccessfulDelete  4m39s               replication-controller  Deleted pod: myrc-88kv4
  Normal  SuccessfulDelete  4m39s               replication-controller  Deleted pod: myrc-7pvh6
  Normal  SuccessfulCreate  2m9s (x4 over 15m)  replication-controller  (combined from similar events): Created pod: myrc-hl89f
  Normal  SuccessfulDelete  75s                 replication-controller  Deleted pod: myrc-hl89f
  Normal  SuccessfulDelete  75s                 replication-controller  Deleted pod: myrc-jhg8d
  Normal  SuccessfulDelete  75s                 replication-controller  Deleted pod: myrc-ljcpb

Advantages: replication and scaling
Drawback: RC supports only equality-based selectors, not set-based selectors.
 For example:  Equality-based selector: app = swiggy
                  Set-based selector: app in (swiggy, zomato)

RC: at a time we can match only a single label value; to overcome this limitation we use an RS (ReplicaSet).
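For comparison, a ReplicaSet can express that set-based rule with matchExpressions (a spec fragment, not a full manifest; "zomato" is a hypothetical second label value):

```yaml
# ReplicaSet spec fragment: set-based selector
selector:
  matchExpressions:
  - key: app
    operator: In          # also supported: NotIn, Exists, DoesNotExist
    values:
    - swiggy
    - zomato
```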

                                                               ReplicaSet
A newer Kubernetes object (API group apps/v1).
Step1: Delete the existing RC (myrc).

[root@ip-10-0-0-22 ~]# kubectl delete rc myrc
replicationcontroller "myrc" deleted
[root@ip-10-0-0-22 ~]# kubectl get rc
No resources found in default namespace.

--To check the RS API version (apps/v1)
[root@ip-10-0-0-22 ~]# kubectl api-resources | grep -i "rs"
NAME                                SHORTNAMES                          APIVERSION                        NAMESPACED   KIND
persistentvolumeclaims              pvc                                 v1                                true         PersistentVolumeClaim
persistentvolumes                   pv                                  v1                                false        PersistentVolume
replicationcontrollers              rc                                  v1                                true         ReplicationController
replicasets                         rs                                  apps/v1                           true         ReplicaSet
horizontalpodautoscalers            hpa                                 autoscaling/v2                    true         HorizontalPodAutoscaler
csidrivers                                                              storage.k8s.io/v1                 false        CSIDriver

Step2:

[root@ip-10-0-0-22 ~]# vi manifest.yaml
[root@ip-10-0-0-22 ~]# cat manifest.yaml
---
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: myrs
spec:
  replicas: 2
  selector:
    matchLabels:
      app: swiggy
  template:
    metadata:
      labels:
        app: swiggy  # must match the selector above
    spec:
      containers:
      - name: container1
        image: shaikmustafa/cycle
        ports:
        - containerPort: 80

[root@ip-10-0-0-22 ~]# kubectl create  -f manifest.yaml
replicaset.apps/myrs created
[root@ip-10-0-0-22 ~]# kubectl get rs
NAME   DESIRED   CURRENT   READY   AGE
myrs   2         2         2       97s


[root@ip-10-0-0-22 ~]# kubectl get pods --show-labels
NAME         READY   STATUS    RESTARTS   AGE     LABELS
myrs-dzz9s   1/1     Running   0          2m53s   app=swiggy
myrs-hqdpj   1/1     Running   0          2m53s   app=swiggy


Scaling with commands

Step3: Scale out (increase replicas)
[root@ip-10-0-0-22 ~]# kubectl scale rs myrs  --replicas 4
replicaset.apps/myrs scaled
[root@ip-10-0-0-22 ~]# kubectl get pods --show-labels
NAME         READY   STATUS    RESTARTS   AGE     LABELS
myrs-dzz9s   1/1     Running   0          4m33s   app=swiggy
myrs-fjddd   1/1     Running   0          4s      app=swiggy
myrs-hqdpj   1/1     Running   0          4m33s   app=swiggy
myrs-zw2mp   1/1     Running   0          4s      app=swiggy

Step4: Scale in (decrease replicas)

[root@ip-10-0-0-22 ~]# kubectl scale rs myrs  --replicas 1
replicaset.apps/myrs scaled
[root@ip-10-0-0-22 ~]# kubectl get pods --show-labels
NAME         READY   STATUS    RESTARTS   AGE     LABELS
myrs-dzz9s   1/1     Running   0          4m51s   app=swiggy

Full information about the RS, including events
[root@ip-10-0-0-22 ~]#  kubectl describe rs myrs
Name:         myrs
Namespace:    default
Selector:     app=swiggy
Labels:       <none>
Annotations:  <none>
Replicas:     1 current / 1 desired
Pods Status:  1 Running / 0 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=swiggy
  Containers:
   container1:
    Image:         shaikmustafa/cycle
    Port:          80/TCP
    Host Port:     0/TCP
    Environment:   <none>
    Mounts:        <none>
  Volumes:         <none>
  Node-Selectors:  <none>
  Tolerations:     <none>
Events:
  Type    Reason            Age    From                   Message
  ----    ------            ----   ----                   -------
  Normal  SuccessfulCreate  7m     replicaset-controller  Created pod: myrs-hqdpj
  Normal  SuccessfulCreate  7m     replicaset-controller  Created pod: myrs-dzz9s
  Normal  SuccessfulCreate  2m31s  replicaset-controller  Created pod: myrs-zw2mp
  Normal  SuccessfulCreate  2m31s  replicaset-controller  Created pod: myrs-fjddd
  Normal  SuccessfulDelete  2m12s  replicaset-controller  Deleted pod: myrs-fjddd
  Normal  SuccessfulDelete  2m12s  replicaset-controller  Deleted pod: myrs-hqdpj
  Normal  SuccessfulDelete  2m12s  replicaset-controller  Deleted pod: myrs-zw2mp

Drawback
Only manual scaling here — no autoscaling and no rollback; there is downtime, since it takes some delay to create the pods.

Deployment:
 Replication, scaling, set-based selectors (SBS), rollback, no downtime

Step1:

[root@ip-10-0-0-22 ~]# cat manifest.yaml
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mydeploy
spec:
  replicas: 2
  selector:
    matchLabels:
      app: swiggy
  template:
    metadata:
      labels:
        app: swiggy  # must match the selector above
    spec:
      containers:
      - name: container1
        image: shaikmustafa/cycle
        ports:
        - containerPort: 80

Step2:

Deployment created successfully
[root@ip-10-0-0-22 ~]# kubectl create -f manifest.yaml
deployment.apps/mydeploy created

The Deployment automatically creates a ReplicaSet, so both appear below:
[root@ip-10-0-0-22 ~]# kubectl get rs
NAME                  DESIRED   CURRENT   READY   AGE
mydeploy-5fc87dd57b   2         2         2       17s
[root@ip-10-0-0-22 ~]# kubectl get deploy
NAME       READY   UP-TO-DATE   AVAILABLE   AGE
mydeploy   2/2     2            2           40s
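On top of the RS, the Deployment adds rolling updates and rollback. A sketch of the typical commands (they assume the mydeploy Deployment above; the new image tag "shaikmustafa/dm" is hypothetical):

```shell
# Trigger a rolling update by changing the container image
kubectl set image deployment/mydeploy container1=shaikmustafa/dm

# Watch the rollout progress
kubectl rollout status deployment/mydeploy

# Roll back to the previous revision if the new version misbehaves
kubectl rollout undo deployment/mydeploy
```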

Step3:
[root@ip-10-0-0-22 ~]# kops get cluster
NAME                    CLOUD   ZONES
kopsclstr.k8s.local     aws     eu-west-2a,eu-west-2b
[root@ip-10-0-0-22 ~]# kops delete cluster --name kopsclstr.k8s.local --yes

Deleted kubectl config for kopsclstr.k8s.local

Deleted cluster: "kopsclstr.k8s.local"


If you learn ReplicationController well, the remaining controllers are similar: RS, Deployment,
                                             DaemonSet, StatefulSet
So far we have learned 5 manifest files (Pod, Service, RC, RS, Deployment) out of 9 total.


Reference Document: 
https://mustafa-k8s.hashnode.dev/replication-controller-vs-replica-set

--Thanks 

