Wednesday, August 13, 2025

Kubernetes part3


Class 87: Kubernetes Part 3, August 13

Multi-node cluster

Cluster: a group of master and worker servers together is called a cluster.

Operator/user server (kubectl) --> sends a request to create a pod --> API server --> scheduler decides which worker node should run the pod and informs the API server --> API server creates the pod on that worker node.

Practical:

Step1: Create an EC2 instance for kops (instance name "Kops"), t3.micro with a 20 GB disk. The master requires c7i-flex.large; the worker nodes are 2 x t3.micro.

Install kubectl: https://kubernetes.io/docs/tasks/tools/

[root@ip-10-0-0-22 bin]# curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"

  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current

                                 Dload  Upload   Total   Spent    Left  Speed

100   138  100   138    0     0   1103      0 --:--:-- --:--:-- --:--:--  1112

100 57.3M  100 57.3M    0     0   101M      0 --:--:-- --:--:-- --:--:--  101M

[root@ip-10-0-0-22 bin]# mv kubectl /usr/local/bin

[root@ip-10-0-0-22 bin]# kubectl version

[root@ip-10-0-2-52 bin]# chmod 777 kubectl
[root@ip-10-0-2-52 bin]# kubectl version
Client Version: v1.33.4
Kustomize Version: v5.6.0
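The official kubectl install page also recommends verifying the downloaded binary against its published checksum before moving it into place. A hedged sketch of the mechanism (the network commands are shown as comments since they need internet access; verify_sha256 is a helper name of my own, not a kubectl tool):

```shell
# From the official kubectl install docs (shown for reference):
#   curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl.sha256"
#   echo "$(cat kubectl.sha256)  kubectl" | sha256sum --check

# Generic helper demonstrating the same check for any file.
verify_sha256() {   # verify_sha256 <file> <expected-hex-digest>
  echo "$2  $1" | sha256sum --check --status
}
```

Note that sha256sum --check expects lines of the form "digest, two spaces, filename".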
Step2: kops installation (Linux): paste the commands below directly on the server.
 
https://kops.sigs.k8s.io/getting_started/install/

[root@ip-10-0-0-22 ~]# curl -Lo kops https://github.com/kubernetes/kops/releases/download/$(curl -s https://api.github.com/repos/kubernetes/kops/releases/latest | grep tag_name | cut -d '"' -f 4)/kops-linux-amd64
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
100  274M  100  274M    0     0  48.3M      0  0:00:05  0:00:05 --:--:-- 52.1M
[root@ip-10-0-0-22 ~]# chmod +x kops
[root@ip-10-0-0-22 ~]# sudo mv kops /usr/local/bin/kops
[root@ip-10-0-0-22 ~]# kops version
Client version: 1.33.0 (git-v1.33.0)

Step3: Cluster information is stored in an AWS S3 bucket.
[root@ip-10-0-0-22 ~]# aws s3 ls

Unable to locate credentials. You can configure credentials by running "aws configure".

Attach an admin IAM role to the EC2 machine.

Step4: Create one bucket with versioning enabled: "ccitpublicbucket16"

[root@ip-10-0-0-22 ~]# aws s3 ls
2025-08-16 16:58:08 ccitpublicbucket16

We need to tell kops which bucket to store the cluster details in:

[root@ip-10-0-0-22 ~]# export KOPS_STATE_STORE=s3://ccitpublicbucket16
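Note that export only lasts for the current shell session; after logging out and back in, kops will no longer find the state store. A small hedged sketch to persist it (the PROFILE variable is my own convenience, defaulting to ~/.bashrc):

```shell
# Persist the kops state store so every new shell session picks it up.
PROFILE="${PROFILE:-$HOME/.bashrc}"
LINE='export KOPS_STATE_STORE=s3://ccitpublicbucket16'
# Append only if not already present, so re-running stays idempotent.
grep -qxF "$LINE" "$PROFILE" 2>/dev/null || echo "$LINE" >> "$PROFILE"
```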
[root@ip-10-0-0-22 ~]#

Step5: Cluster creation 

https://kops.sigs.k8s.io/getting_started/aws/
Syntax:
kops create cluster \
  --name=${NAME} \
  --cloud=aws \
  --zones=us-west-2a \
  --discovery-store=s3://prefix-example-com-oidc-store/${NAME}/discovery

The cluster name should be <anyname>.k8s.local
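The .k8s.local suffix is what tells kops to run a gossip-based cluster, which avoids having to set up a Route 53 DNS zone. A quick hedged sanity check (is_gossip_name is a helper of my own, not a kops command):

```shell
# kops treats any cluster name ending in .k8s.local as a gossip cluster
# (no external DNS zone required).
is_gossip_name() {
  case "$1" in
    *.k8s.local) return 0 ;;
    *)           return 1 ;;
  esac
}

is_gossip_name kopsclstr.k8s.local && echo "gossip cluster: no DNS zone needed"
```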

Run the command below once; it prepares the master and worker node configuration and prints some suggestions at the end (keep them for the next step).

[root@ip-10-0-0-22 ~]# kops create cluster --name kopsclstr.k8s.local --zones eu-west-2b,eu-west-2a --master-count 1 --master-size c7i-flex.large --master-volume-size 20 --node-count 2 --node-size t3.micro --node-volume-size=15 --image=ami-044415bb13eee2391

Suggestions:
 * list clusters with: kops get cluster
 * edit this cluster with: kops edit cluster kopsclstr.k8s.local
 * edit your node instance group: kops edit ig --name=kopsclstr.k8s.local nodes-eu-west-2b
 * edit your control-plane instance group: kops edit ig --name=kopsclstr.k8s.local control-plane-eu-west-2b

Finally configure your cluster with: kops update cluster --name kopsclstr.k8s.local --yes --admin

Step6: Run the command from the suggestions; it will take some time to create the cluster.

[root@ip-10-0-0-22 ~]#  kops update cluster --name kopsclstr.k8s.local --yes --admin
kOps has set your kubectl context to kopsclstr.k8s.local

Cluster is starting.  It should be ready in a few minutes.

Suggestions:
 * validate cluster: kops validate cluster --wait 10m
 * list nodes: kubectl get nodes --show-labels
 * ssh to a control-plane node: ssh -i ~/.ssh/id_rsa ubuntu@
 * the ubuntu user is specific to Ubuntu. If not using Ubuntu please use the appropriate user based on your OS.
 * read about installing addons at: https://kops.sigs.k8s.io/addons.

The control-plane node and the two worker nodes were created successfully.


kops also creates the supporting AWS resources:

Target groups

Load balancer

Autoscaling groups

VPC

In real time we set up the infra for EKS using Terraform scripts, which takes about 45 minutes, whereas kops completes in about 5 minutes. We will do the project using Terraform.

Step7: Validation takes some time, a minimum of 10 to 15 minutes.

[root@ip-10-0-0-22 ~]#  kops validate cluster --wait 10m

[root@ip-10-0-0-22 ~]#  kubectl get nodes
NAME                  STATUS   ROLES           AGE     VERSION
i-03d052cacf647b877   Ready    node            59s     v1.32.4
i-0e8a3f4c3da2f1696   Ready    node            61s     v1.32.4
i-0f5f236564f4f7804   Ready    control-plane   3m11s   v1.32.4

Step8: We have successfully deployed the pods.

But I am not able to access my application.

The reason we cannot access the application: in Kubernetes, to access an application we have to expose the pod, and to expose pods we use Kubernetes Services.

Through Services, we expose pods in different ways.
Service types: ClusterIP, NodePort, LoadBalancer, ExternalName (not used here)


Every pod gets one IP by default, but with that IP alone we cannot access the application from outside; we have to expose it.


1. ClusterIP: with this service type we expose the pod for internal scope only; the application is not reachable from the internet.

For example, it provides a stable (static) IP for internal use only; typically used to expose database pods.
We can test it with curl from inside the cluster.

2. NodePort: it gives you a stable ClusterIP and also a port from the node-port range (30000-32767).
With that port number and a worker node's public IP, you can access the application in the pod.



Step8: See below, the master (control-plane) node and the worker nodes are ready.

Server information (kubectl get no also works; no is short for node):

[root@ip-10-0-0-22 ~]# kubectl get node  -o wide
NAME                  STATUS   ROLES           AGE   VERSION   INTERNAL-IP      EXTERNAL-IP     OS-IMAGE             KERNEL-VERSION   CONTAINER-RUNTIME
i-03d052cacf647b877   Ready    node            10m   v1.32.4   172.20.151.201   3.8.130.16      Ubuntu 24.04.2 LTS   6.8.0-1029-aws   containerd://1.7.28
i-0e8a3f4c3da2f1696   Ready    node            10m   v1.32.4   172.20.72.207    3.10.152.129    Ubuntu 24.04.2 LTS   6.8.0-1029-aws   containerd://1.7.28
i-0f5f236564f4f7804   Ready    control-plane   12m   v1.32.4   172.20.180.173   35.176.113.95   Ubuntu 24.04.2 LTS   6.8.0-1029-aws   containerd://1.7.28

Step9: For pod creation, a manifest file needs to be prepared.

Docker installation
[root@ip-10-0-0-22 ~]# yum install docker -y && systemctl start docker
[root@ip-10-0-0-22 ~]# vi manifest.yaml
[root@ip-10-0-0-22 ~]# cat manifest.yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-1
  labels:
    app: zamota
spec:
  containers:
    - name: container1
      image: shaikmustafa/dm
      ports:
        - containerPort: 80

Step10: If the image is private it will ask for a username and password; here I have created my own Docker image.

[root@ip-10-0-0-22 ~]# kubectl create -f manifest.yaml
pod/pod-1 created


Step11: One pod created successfully 

[root@ip-10-0-0-22 ~]# kubectl get po -o wide
NAME    READY   STATUS    RESTARTS   AGE   IP             NODE                  NOMINATED NODE   READINESS GATES
pod-1   1/1     Running   0          63s   100.96.3.131   i-03d052cacf647b877   <none>           <none>

Step12: How to check pod details; as you can see in the events, the scheduler decided which particular instance the pod was created on.
[root@ip-10-0-0-22 ~]#  kubectl describe pod pod-1

Events:
  Type    Reason     Age    From               Message
  ----    ------     ----   ----               -------
  Normal  Scheduled  3m15s  default-scheduler  Successfully assigned default/pod-1 to i-03d052cacf647b877
  Normal  Pulling    3m14s  kubelet            Pulling image "shaikmustafa/dm"
  Normal  Pulled     3m9s   kubelet            Successfully pulled image "shaikmustafa/dm" in 4.892s (4.892s including waiting). Image size: 60099640 bytes.
  Normal  Created    3m9s   kubelet            Created container: container1
  Normal  Started    3m9s   kubelet            Started container container1


Step13: A Service needs to be created; the Service finds which pods to expose via its selector.
For ClusterIP: we gave the pod port 80, so we give the same port 80 to the Service as well.

[root@ip-10-0-0-22 ~]# kubectl get po --show-labels
NAME    READY   STATUS    RESTARTS   AGE   LABELS
pod-1   1/1     Running   0          53m   app=zamoto

[root@ip-10-0-0-22 ~]# vim service.yaml
[root@ip-10-0-0-22 ~]# vi service.yaml
---
apiVersion: v1
kind: Service
metadata:
  name: myservice
spec:
  type: ClusterIP
  selector:
    app: zamato
  ports:
    - port: 80
      targetPort: 80

[root@ip-10-0-0-22 ~]#  kubectl create -f service.yaml
service/myservice created
The kubernetes service is created by Kubernetes by default; myservice is the one we created ourselves.

[root@ip-10-0-0-22 ~]# kubectl get svc
NAME         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   100.64.0.1      <none>        443/TCP   23m
myservice    ClusterIP   100.68.51.206   <none>        80/TCP    10s

The stable ClusterIP 100.68.51.206 is accessible internally only, for example from a worker/slave node.

Step14: Relabel the pod so it matches the Service selector, then check the endpoints.

[root@ip-10-0-0-22 ~]# kubectl label pod pod-1 app=zamato --overwrite
pod/pod-1 labeled
[root@ip-10-0-0-22 ~]# kubectl get endpoints myservice
NAME        ENDPOINTS         AGE
myservice   100.96.3.131:80   21m

Internally, from a worker/slave node, the stable ClusterIP works fine.
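The key point in this step is that a Service only routes to pods whose labels match its selector exactly. A hedged illustration of the mismatch that kubectl label fixed (the label values come from the manifests above):

```shell
selector="app=zamato"     # from service.yaml
pod_label="app=zamota"    # original label in manifest.yaml (note the typo)

# A pod becomes a Service endpoint only when its labels match the selector.
if [ "$pod_label" = "$selector" ]; then
  echo "match: pod becomes a service endpoint"
else
  echo "no match: the service would have no endpoints for this pod"
fi
```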


Step15: Service commands
[root@ip-10-0-0-22 ~]# kubectl get svc
NAME         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   100.64.0.1      <none>        443/TCP   58m
myservice    ClusterIP   100.68.51.206   <none>        80/TCP    35m

Full information about the service:
[root@ip-10-0-0-22 ~]# kubectl get svc -o wide
NAME         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE   SELECTOR
kubernetes   ClusterIP   100.64.0.1      <none>        443/TCP   59m   <none>
myservice    ClusterIP   100.68.51.206   <none>        80/TCP    36m   app=zamato
[root@ip-10-0-0-22 ~]#

If you need the output in YAML format:
[root@ip-10-0-0-22 ~]# kubectl get svc -o yaml
apiVersion: v1

JSON:
[root@ip-10-0-0-22 ~]# kubectl get svc -o json
Full information for one service:

[root@ip-10-0-0-22 ~]# kubectl describe  svc myservice
Name:                     myservice
Namespace:                default
Labels:                   <none>
Annotations:              <none>
Selector:                 app=zamato
Type:                     ClusterIP
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       100.68.51.206
IPs:                      100.68.51.206
Port:                     <unset>  80/TCP
TargetPort:               80/TCP
Endpoints:                100.96.3.131:80
Session Affinity:         None
Internal Traffic Policy:  Cluster
Events:                   <none>

Delete the service 

[root@ip-10-0-0-22 ~]# kubectl delete  svc myservice
service "myservice" deleted
[root@ip-10-0-0-22 ~]# kubectl get svc
NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   100.64.0.1   <none>        443/TCP   63m

Node Port

It provides:
Stable ClusterIP -- internal use
NodePort -- internal and external access through the internet (worker/slave server public IP along with the node port)
Node port range: 30000-32767
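The default node-port range is 30000-32767 (configurable on the kube-apiserver via --service-node-port-range). A small hedged check that a given port falls inside the default range (is_node_port is my own helper, not a kubectl command):

```shell
# Default Kubernetes NodePort range: 30000-32767.
is_node_port() {
  [ "$1" -ge 30000 ] && [ "$1" -le 32767 ]
}

is_node_port 31048 && echo "31048 is in the NodePort range"
is_node_port 8080  || echo "8080 is outside the NodePort range"
```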

Step1: In the existing service.yaml file, just change the type from ClusterIP to NodePort.




[root@ip-10-0-0-22 ~]# vi service.yaml
[root@ip-10-0-0-22 ~]# cat service.yaml
---
apiVersion: v1
kind: Service
metadata:
  name: myservice
spec:
  type: NodePort
  selector:
    app: zamato
  ports:
    - port: 80
      targetPort: 80
[root@ip-10-0-0-22 ~]# kubectl create -f service.yaml
service/myservice created
[root@ip-10-0-0-22 ~]# kubectl get svc
NAME         TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
kubernetes   ClusterIP   100.64.0.1     <none>        443/TCP        69m
myservice    NodePort    100.71.48.35   <none>        80:31048/TCP   21s


Step2: Try to access the stable ClusterIP 100.71.48.35 internally.

Step3: Access via a worker/slave node public IP:
curl 3.10.152.129:31048
Step4: Try to open it in an internet browser; if you cannot access it, open port 31048 in the security group.
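Putting the pieces together: the URL you open in the browser is simply a worker node's public IP plus the assigned node port (the values below are the ones from this session's transcript):

```shell
NODE_IP="3.10.152.129"   # worker node public IP (from kubectl get node -o wide)
NODE_PORT="31048"        # assigned by Kubernetes (from kubectl get svc)

URL="http://${NODE_IP}:${NODE_PORT}"
echo "$URL"   # http://3.10.152.129:31048
```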
(Screenshot: worker node 1)

(Screenshot: worker node 2, accessed via its public IP)


Step5: In real time we do not use IP addresses directly; we use the LoadBalancer service.

[root@ip-10-0-0-22 ~]# kubectl delete  svc myservice
service "myservice" deleted

Load balancer:
How does a LoadBalancer service work in Kubernetes?
K8s svc --> cloud provider --> LB DNS name is provided --> e.g. cctter.com

It gives you all of these:
Stable IP
Node port
DNS name

[root@ip-10-0-0-22 ~]# vi service.yaml
[root@ip-10-0-0-22 ~]# cat service.yaml
---
apiVersion: v1
kind: Service
metadata:
  name: myservice
spec:
  type: LoadBalancer
  selector:
    app: zamato
  ports:
    - port: 80
      targetPort: 80

[root@ip-10-0-0-22 ~]# kubectl create -f service.yaml
service/myservice created
[root@ip-10-0-0-22 ~]# kubectl get svc
NAME         TYPE           CLUSTER-IP       EXTERNAL-IP                                                              PORT(S)        AGE
kubernetes   ClusterIP      100.64.0.1       <none>                                                                   443/TCP        86m
myservice    LoadBalancer   100.68.221.171   a7655f520a3d44cd98e3fdff6def1af1-351032056.eu-west-2.elb.amazonaws.com   80:32627/TCP   21s

Step6: checking with stable IP

ubuntu@i-03d052cacf647b877:~$ curl 100.68.221.171
<!doctype html>
<html lang="en">

Step7: The application is also accessible via the node port.

Step8: Access it with the load balancer DNS name; it will not work immediately, it takes about a minute to start communicating.

a7655f520a3d44cd98e3fdff6def1af1-351032056.eu-west-2.elb.amazonaws.com 
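Since the ELB DNS name takes a minute or so to become reachable, it is handy to poll it rather than assume it works on the first try. A generic hedged retry helper (wait_for is my own function, not a kops or kubectl command; the curl line against the ELB hostname is commented because it needs the live cluster):

```shell
wait_for() {   # wait_for <attempts> <delay-seconds> <command...>
  attempts="$1"; delay="$2"; shift 2
  i=0
  while [ "$i" -lt "$attempts" ]; do
    "$@" && return 0   # success: stop retrying
    i=$((i + 1))
    sleep "$delay"
  done
  return 1             # gave up after <attempts> tries
}

# Against the live cluster you would run something like:
# wait_for 12 5 curl -sf http://a7655f520a3d44cd98e3fdff6def1af1-351032056.eu-west-2.elb.amazonaws.com
```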


Now the load is distributed across the two workers, though not necessarily equally.


Step 9: After completion, delete the cluster using the command only, not manually; if you delete resources manually, autoscaling will keep recreating them in parallel, so use the command.

To delete the cluster, first point kops at the state store again:
[root@ip-10-0-0-22 ~]# export KOPS_STATE_STORE=s3://ccitpublicbucket16

Our cluster information is stored in this S3 bucket, ccitpublicbucket16.

Step10:

API endpoint and services:
[root@ip-10-0-0-22 ~]# kubectl cluster-info
Kubernetes control plane is running at https://api-kopsclstr-k8s-local-0kba4s-37d4891bf709009e.elb.eu-west-2.amazonaws.com
CoreDNS is running at https://api-kopsclstr-k8s-local-0kba4s-37d4891bf709009e.elb.eu-west-2.amazonaws.com/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
Cluster state and configuration:
[root@ip-10-0-0-22 ~]# kops get cluster
NAME                    CLOUD   ZONES
kopsclstr.k8s.local     aws     eu-west-2a,eu-west-2

Node health:
[root@ip-10-0-0-22 ~]# kubectl get nodes
NAME                  STATUS   ROLES           AGE    VERSION
i-03d052cacf647b877   Ready    node            103m   v1.32.4
i-0e8a3f4c3da2f1696   Ready    node            103m   v1.32.4
i-0f5f236564f4f7804   Ready    control-plane   105m   v1.32.4
System components:
[root@ip-10-0-0-22 ~]# kubectl get pods -n kube-system

[root@ip-10-0-0-22 ~]# kops delete cluster --name kopsclstr.k8s.local --yes


subnet:subnet-014cdb83033135899 ok
security-group:sg-0c7c7e35635a07319     ok
route-table:rtb-0d6c56defcbad4db3       ok
vpc:vpc-08692f993cc46837f       ok
dhcp-options:dopt-03b1723c1a058a166     ok
Deleted kubectl config for kopsclstr.k8s.local

Deleted cluster: "kopsclstr.k8s.local"


--Everything kops created is deleted: autoscaling groups, instances, load balancer.


Only the kops server (the EC2 instance we worked from) needs to be deleted manually.

Reference document:
https://mustafa-k8s.hashnode.dev/kops-kubernetes-operations-the-ultimate-guide-for-devops-engineers


--Thanks

