Friday, August 22, 2025

Kubernetes part9

Class 94 Kubernetes Part8 August 22nd

Resource Quota: Resource quota is one of the rich features of Kubernetes; it helps manage and distribute cluster resources according to each team's requirements.

Assume a situation where we have two teams (team-A and team-B) working on a single Kubernetes cluster.
Team-A needs more CPU and memory because of heavy workload tasks, while team-B's pods are created without resource quotas.
Whenever team-A's application is under high resource usage and tries to create one more pod, and space is short, a team-B pod can be deleted and its space occupied, because team-B's space was not protected by a resource quota. For this reason, setting a resource quota for every pod is important.

There are two types of restrictions that need to be specified while using resource quotas:

Limit: Represents the maximum amount of CPU or memory the pod can use

(e.g. limit: 4 CPUs & 4 GB RAM)

Request: Represents the minimum amount of CPU or memory that the pod is guaranteed to have

(e.g. minimum requirement for pod creation: 1 CPU & 1 GB RAM)
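In a pod manifest, these two settings go per container under `resources`. A minimal fragment using the example values above (purely illustrative):

```yaml
# Fragment of a container spec: requests are guaranteed,
# limits cap what the container may consume.
resources:
  requests:
    cpu: "1"        # guaranteed 1 CPU
    memory: "1Gi"   # guaranteed 1 GiB RAM
  limits:
    cpu: "4"        # may use up to 4 CPUs
    memory: "4Gi"   # hard cap of 4 GiB RAM
```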

Practical:

Step1: Minikube is already set up on an Amazon Linux EC2 instance

[root@ip-10-0-0-29 ~]# systemctl start docker

[root@ip-10-0-0-29 ~]# minikube start --driver=docker --force

[root@ip-10-0-0-29 ~]# minikube status

minikube

type: Control Plane

host: Running

kubelet: Running

apiserver: Running

kubeconfig: Configured

Step2: Create a namespace and switch the current context into it

[root@ip-10-0-0-29 ~]# kubectl create ns dev

namespace/dev created

[root@ip-10-0-0-29 ~]# kubectl config  set-context --current --namespace=dev

Context "minikube" modified.

Step3: Create a pod with both resource requests and limits

[root@ip-10-0-0-29 ~]# vi resourcequote.yaml

[root@ip-10-0-0-29 ~]# cat resourcequote.yaml

apiVersion: v1

kind: Pod

metadata:

  name: team-a

spec:

  containers:

    - name: container1

      image: nginx

      ports:

        - containerPort: 80

      resources:

        requests:

          memory: "10Mi"

          cpu: "100m"

        limits:

          memory: "20Mi"

          cpu: "200m"

[root@ip-10-0-0-29 ~]# kubectl create -f resourcequote.yaml

pod/team-a created

Step3: As seen below, the pod is in the dev namespace and was created with the requested resources (CPU 100m, memory 10Mi) and limits (CPU 200m, memory 20Mi)

[root@ip-10-0-0-29 ~]# kubectl describe po team-a
Name:             team-a
Namespace:        dev
Priority:         0
Service Account:  default
Node:             minikube/192.168.49.2
Start Time:       Mon, 25 Aug 2025 15:22:56 +0000
Labels:           <none>
Annotations:      <none>
Status:           Running
IP:               10.244.0.38

 Ready:          True
    Restart Count:  0
    Limits:
      cpu:     200m
      memory:  20Mi
    Requests:
      cpu:        100m
      memory:     10Mi
    Environment:  <none>

Step4: Now try to create a new pod without the requests option, giving only limits. Observe that whatever limit you give is automatically taken as the request as well, with the same values.
 
[root@ip-10-0-0-29 ~]# cat resourcequote.yaml
apiVersion: v1
kind: Pod
metadata:
  name: team-b
spec:
  containers:
    - name: container1
      image: nginx
      ports:
        - containerPort: 80
      resources:
        limits:
          memory: "20Mi"
          cpu: "200m"
[root@ip-10-0-0-29 ~]# kubectl create -f resourcequote.yaml
pod/team-b created
[root@ip-10-0-0-29 ~]# kubectl describe po team-b
Name:             team-b
Namespace:        dev
Priority:         0
Service Account:  default
Node:             minikube/192.168.49.2

    Limits:
      cpu:     200m
      memory:  20Mi
    Requests:
      cpu:        200m
      memory:     20Mi

Step5: Now we set only requests and remove the limits
[root@ip-10-0-0-29 ~]# cat resourcequote.yaml

apiVersion: v1

kind: Pod

metadata:

  name: team-c

spec:

  containers:

    - name: container1

      image: nginx

      ports:

        - containerPort: 80

      resources:

        requests:

          memory: "10Mi"

          cpu: "100m"

[root@ip-10-0-0-29 ~]# kubectl create -f resourcequote.yaml

pod/team-c created

As seen below, only the request is applied. There is no limit, which means the container's usage is unlimited; this is not good practice.

[root@ip-10-0-0-29 ~]# kubectl describe po team-c

Name:             team-c

Namespace:        dev

Priority:         0

Service Account:  default

Node:             minikube/192.168.49.2


 Requests:

      cpu:        100m

      memory:     10Mi

    Environment:  <none>

    Mounts:

Note: three cases from above
1. Both request and limit: the container is capped at the limit.
2. Limit only: the request defaults to the same value as the limit.
3. Request only: there is no limit, so usage is unbounded.
[root@ip-10-0-0-29 ~]# kubectl delete po --all
pod "team-a" deleted
pod "team-b" deleted
pod "team-c" deleted

Realtime: applying a ResourceQuota to a namespace

Step1:
[root@ip-10-0-0-29 ~]# cat realtime.yaml
---
apiVersion: v1
kind: ResourceQuota
metadata:
  name: myreq
spec:
  hard:
    requests.cpu: "258m"
    requests.memory: "500Mi"
    limits.cpu: "1000m"
    limits.memory: "1500Mi"

[root@ip-10-0-0-29 ~]# kubectl create -f realtime.yaml
resourcequota/myreq created

We have added the ResourceQuota to the dev namespace

[root@ip-10-0-0-29 ~]# kubectl get ns

NAME              STATUS   AGE

default           Active   4d

dev               Active   3h58m

As seen below, the namespace now shows the quota: limits of 1 CPU and 1500Mi memory, and requests of 258m CPU and 500Mi memory

[root@ip-10-0-0-29 ~]# kubectl describe ns dev

Name:         dev

Labels:       kubernetes.io/metadata.name=dev

Annotations:  <none>

Status:       Active

Resource Quotas

  Name:            myreq

  Resource         Used  Hard

  --------         ---   ---

  limits.cpu       0     1

  limits.memory    0     1500Mi

  requests.cpu     0     258m

  requests.memory  0     500Mi

No LimitRange resource.
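The describe output above notes "No LimitRange resource." A LimitRange complements a ResourceQuota by injecting default requests and limits into containers that omit them, which avoids case 3 (request only, unlimited usage). A minimal sketch; the name and values here are illustrative, not from the class:

```yaml
apiVersion: v1
kind: LimitRange
metadata:
  name: mylimits         # hypothetical name
spec:
  limits:
    - type: Container
      default:           # applied as limits when a container sets none
        cpu: "200m"
        memory: "20Mi"
      defaultRequest:    # applied as requests when a container sets none
        cpu: "100m"
        memory: "10Mi"
```

With this in place, pods created without a resources section still satisfy a quota that requires requests and limits.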


Step6: Let's create the pod in the dev namespace. You must specify the requests and limits, otherwise the pod will not be created.

[root@ip-10-0-0-29 ~]# cat pod.yml
apiVersion: v1
kind: Pod
metadata:
  name: team-a
spec:
  containers:
    - name: container1
      image: nginx
      ports:
        - containerPort: 80
      resources:
        requests:
          memory: "10Mi"
          cpu: "100m"
        limits:
          memory: "20Mi"
          cpu: "200m"

--Now the resource quota usage in the dev namespace:

Resource Quotas
  Name:            myreq
  Resource         Used  Hard
  --------         ---   ---
  limits.cpu       200m  1
  limits.memory    20Mi  1500Mi
  requests.cpu     100m  258m
  requests.memory  10Mi  500Mi

Step7: Now try to create a pod without a resources section; as you can see, the error says requests and limits must be specified
[root@ip-10-0-0-29 ~]# kubectl create -f pod.yml
Error from server (Forbidden): error when creating "pod.yml": pods "team-b" is forbidden: failed quota: myreq: must specify limits.cpu for: container1; limits.memory for: container1; requests.cpu for: container1; requests.memory for: container1
[root@ip-10-0-0-29 ~]#

Step8: Create one more pod, team-b. With 200m of the 258m CPU request quota now used, only 58m is left.
[root@ip-10-0-0-29 ~]# vi pod.yml
[root@ip-10-0-0-29 ~]# kubectl create -f pod.yml
pod/team-b created
[root@ip-10-0-0-29 ~]# kubectl get pod
NAME     READY   STATUS    RESTARTS   AGE
team-a   1/1     Running   0          7m52s
team-b   1/1     Running   0          15s
[root@ip-10-0-0-29 ~]# kubectl describe ns dev
Name:         dev
Labels:       kubernetes.io/metadata.name=dev
Annotations:  <none>
Status:       Active
Resource Quotas
  Name:            myreq
  Resource         Used  Hard
  --------         ---   ---
  limits.cpu       400m  1
  limits.memory    40Mi  1500Mi
  requests.cpu     200m  258m
  requests.memory  20Mi  500Mi

Step9: Creating one more pod fails with "exceeded quota"; now increase the request quota

[root@ip-10-0-0-29 ~]# vi pod.yml
[root@ip-10-0-0-29 ~]# kubectl create -f pod.yml
Error from server (Forbidden): error when creating "pod.yml": pods "team-c" is forbidden: exceeded quota: myreq, requested: requests.cpu=100m, used: requests.cpu=200m, limited: requests.cpu=258m
[root@ip-10-0-0-29 ~]#

                                                    Increase the request quota
Step1: Increase requests.cpu from 258m to 400m
[root@ip-10-0-0-29 ~]# cat realtime.yaml
---
apiVersion: v1
kind: ResourceQuota
metadata:
  name: myreq
spec:
  hard:
    requests.cpu: "400m"
    requests.memory: "500Mi"
    limits.cpu: "1000m"
    limits.memory: "1500Mi"

[root@ip-10-0-0-29 ~]# kubectl apply -f realtime.yaml

Warning: resource resourcequotas/myreq is missing the kubectl.kubernetes.io/last-applied-configuration annotation which is required by kubectl apply. kubectl apply should only be used on resources created declaratively by either kubectl create --save-config or kubectl apply. The missing annotation will be patched automatically.

resourcequota/myreq configured

--See below, the resource quota has been increased

[root@ip-10-0-0-29 ~]# kubectl describe ns dev
Name:         dev
Labels:       kubernetes.io/metadata.name=dev
Annotations:  <none>
Status:       Active

Resource Quotas
  Name:            myreq
  Resource         Used  Hard
  --------         ---   ---
  limits.cpu       400m  1
  limits.memory    40Mi  1500Mi
  requests.cpu     200m  400m
  requests.memory  20Mi  500Mi

[root@ip-10-0-0-29 ~]# kubectl create -f pod.yml
pod/team-c created
[root@ip-10-0-0-29 ~]# kubectl get po
NAME     READY   STATUS    RESTARTS   AGE
team-a   1/1     Running   0          19m
team-b   1/1     Running   0          11m
team-c   1/1     Running   0          18s

--After deleting all pods, the ResourceQuota usage in the dev namespace is released
[root@ip-10-0-0-29 ~]# kubectl delete pod --all
pod "team-a" deleted
pod "team-b" deleted
pod "team-c" deleted
[root@ip-10-0-0-29 ~]# kubectl describe ns dev
Name:         dev
Labels:       kubernetes.io/metadata.name=dev
Annotations:  <none>
Status:       Active

Resource Quotas
  Name:            myreq
  Resource         Used  Hard
  --------         ---   ---
  limits.cpu       0     1
  limits.memory    0     1500Mi
  requests.cpu     0     400m
  requests.memory  0     500Mi

--This is my resource quota 
[root@ip-10-0-0-29 ~]# kubectl get ResourceQuota
NAME    REQUEST                                          LIMIT                                      AGE
myreq   requests.cpu: 0/400m, requests.memory: 0/500Mi   limits.cpu: 0/1, limits.memory: 0/1500Mi   30m

PROBES 
We use probes for container health checks.

PROBES: used to determine the health and readiness of containers running within pods. Probes
are of 3 types:
1. Readiness probes indicate when a container is ready to receive traffic.
2. Liveness probes determine whether a container is still running and responding to requests.
3. Startup probes determine whether the application within the container has started
successfully. They delay the liveness and readiness probes until the application
is ready to handle traffic.

WHY PROBES?
  • In K8s, it's common to scale pods up and down, but a newly created pod takes some time to start its container and run the application.
  • If a pod is not ready to receive traffic, it may receive requests it cannot handle, which causes downtime for our application.
  • Similarly, if a container is not running correctly, it may not be able to respond to requests, resulting in the pod being terminated and replaced with a new one.
  • To overcome these issues, we use probes.
TYPES OF PROBES:    
There are several probe mechanisms that can be used in Kubernetes, including HTTP, TCP, and
command (exec) probes.
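The practical below uses an exec (command) probe; the other two mechanisms look like this in a container spec. A sketch only: the path and port here are illustrative, assuming an app serving HTTP on port 80.

```yaml
# Fragment of a container spec showing the other two probe mechanisms.
readinessProbe:
  httpGet:               # kubelet sends an HTTP GET; 2xx/3xx = ready
    path: /              # hypothetical health endpoint
    port: 80
  initialDelaySeconds: 5
  periodSeconds: 5
livenessProbe:
  tcpSocket:             # kubelet only checks the TCP port accepts a connection
    port: 80
  initialDelaySeconds: 5
  periodSeconds: 5
```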

Practical:
In real time we need to check whether the application is running, for example with curl, probing every 5 seconds with a 30-second timeout; if the application still does not respond, the container is recreated.
Step1:

[root@ip-10-0-0-29 ~]# cat pod.yml
apiVersion: v1
kind: Pod
metadata:
  name: team-a
spec:
  containers:
    - name: container1
      image: nginx
      ports:
        - containerPort: 80
      args:
        - /bin/bash
        - -c
        - touch /tmp/ccit; sleep 10000
      livenessProbe:
        exec:
          command:
            - cat
            - /tmp/ccit
        initialDelaySeconds: 5
        periodSeconds: 5
        timeoutSeconds: 30
      resources:
        requests:
          memory: "10Mi"
          cpu: "100m"
        limits:
          memory: "20Mi"
          cpu: "200m"
[root@ip-10-0-0-29 ~]# kubectl create -f pod.yml
pod/team-a created

--See here, the liveness probe failed and after about 31 seconds a new container was created
[root@ip-10-0-0-29 ~]# kubectl describe pod team-a
Name:             team-a
Namespace:        dev
Priority:         0
Service Account:  default
Node:             minikube/192.168.49.2
  Warning  Unhealthy  76s (x3 over 86s)    kubelet            Liveness probe failed: cat: /tmp/ccit: No such file or directory
  Normal   Killing    76s                  kubelet            Container container1 failed liveness probe, will be restarted
  Normal   Pulling    46s (x2 over 3m56s)  kubelet            Pulling image "nginx"
  Normal   Created    45s (x2 over 3m55s)  kubelet            Created container: container1
  Normal   Started    45s (x2 over 3m55s)  kubelet            Started container container1
  Normal   Pulled     45s                  kubelet            Successfully pulled image "nginx" in 795ms (795ms inc

Step2: Delete the /tmp/ccit file one more time; the image is pulled again in about 1 second and another container is created.
Every 5 seconds the kubelet runs the probe; if the check keeps failing, the container is restarted.

 Type     Reason     Age                  From               Message
  ----     ------     ----                 ----               -------
  Normal   Scheduled  9m41s                default-scheduler  Successfully assigned dev/team-a to minikube
  Normal   Pulled     9m40s                kubelet            Successfully pulled image "nginx" in 786ms (786ms including waiting). Image size: 192385800 bytes.
  Normal   Pulled     6m30s                kubelet            Successfully pulled image "nginx" in 795ms (795ms including waiting). Image size: 192385800 bytes.
  Warning  Unhealthy  31s (x6 over 7m11s)  kubelet            Liveness probe failed: cat: /tmp/ccit: No such file or directory
  Normal   Killing    31s (x2 over 7m1s)   kubelet            Container container1 failed liveness probe, will be restarted
  Normal   Pulling    1s (x3 over 9m41s)   kubelet            Pulling image "nginx"
  Normal   Created    0s (x3 over 9m40s)   kubelet            Created container: container1
  Normal   Started    0s (x3 over 9m40s)   kubelet            Started container container1
  Normal   Pulled     0s                   kubelet            Successfully pulled image "nginx" in 798ms (798ms including waiting). Image size: 192385800 bytes.
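The practical above covered a liveness probe; a startup probe (described earlier) is added the same way to hold off liveness and readiness checks until a slow-starting application is up. A sketch with illustrative values, assuming an app on port 80:

```yaml
# Fragment of a container spec. While the startup probe has not yet
# succeeded, liveness and readiness probes are disabled, so a slow
# start is not killed prematurely.
startupProbe:
  httpGet:
    path: /              # hypothetical endpoint
    port: 80
  failureThreshold: 30   # allow up to 30 * 10s = 300s for startup
  periodSeconds: 10
```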

So far, these deployment topics are completed:
Deploy
   pods
      containers
   name, images, ports
   resources, volumes, cm & secrets & probes


--Thanks 
