Saturday, August 23, 2025

Kubernetes part10

Class 95 Kubernetes Part10 August 23rd

In a Kubernetes cluster, different people usually have access (developer, tester, deployment user, etc.).
How we give them that access is the Role-Based Access Control (RBAC) concept.

The Kubernetes admin needs to decide which person gets which permissions.

RBAC (Role-Based Access Control): for example, I am a developer and I want to view the pods; another developer needs to delete pods; one more DevOps engineer needs to create and delete pods.

Role-Based Access Control (RBAC) is a critical security feature in Kubernetes that allows you to define and manage access to resources based on roles and permissions. RBAC ensures that only authorized users, processes, or services can interact with specific resources within a Kubernetes cluster
For example:
Roles: Dev, Tester, DevOps (pods: view, watch, create, delete; deploy: create, delete; svc: all)
First we create the Role --> then add the permissions --> then attach it to the developer.

Functionality, how it works:

In Kubernetes we take resources (pods, deploy, rs, svc, vol, cm, sec) and permissions (view, watch, create, delete) and attach them to a user.
It is similar to IAM: for services (EC2, S3, RDS, VPC) and permissions (read, write, delete), we create one policy and attach it to a user.

In Kubernetes, there are two types of accounts
1.User Account
2.Service Account
User account: user accounts are used to log into a Kubernetes cluster and manipulate resources therein. Each user account is associated with a unique set of credentials, which are used to authenticate the user's requests.
Service Account: Kubernetes Service Accounts are specialized accounts used by applications and services running on Kubernetes to interact with the Kubernetes API.
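For illustration, a minimal ServiceAccount manifest looks like this (the name my-sa is just an example); a pod can then reference it via spec.serviceAccountName:

---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: my-sa
  namespace: default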

Practical : 
In Kubernetes we cannot directly create a user; below are multiple ways to authenticate a user:
  • client certificates 
  • bearer tokens 
  • authenticating proxy 
  • HTTP basic auth.
I will choose client certificates, which are very easy to create. These certificates are used to create users. When a user performs any command like kubectl get po, the K8s API server will authenticate and authorize the request.

Step1: Generate the certificate and key, just like an ssh-keygen type of key.

Let's create a folder: subbu
Generate a key using openssl: openssl genrsa -out subbu.key 2048
Generate a Certificate Signing Request (CSR): openssl req -new -key subbu.key -out subbu.csr -subj "/CN=subbu/O=group1"
Generate the certificate (CRT): openssl x509 -req -in subbu.csr -CA ~/.minikube/ca.crt -CAkey ~/.minikube/ca.key -CAcreateserial -out subbu.crt -days 500

Steps to create the user:
lets create a user: kubectl config set-credentials subbu --client-certificate=subbu.crt --client-key=subbu.key

[root@ip-10-0-0-29 subbu]# openssl genrsa -out subbu.key 2048
[root@ip-10-0-0-29 subbu]# ls
subbu.key

[root@ip-10-0-0-29 subbu]# openssl req -new -key subbu.key -out subbu.csr -subj "/CN=subbu/O=group1"
[root@ip-10-0-0-29 subbu]# ls -lrt
total 8
-rw-------. 1 root root 1704 Aug 26 14:36 subbu.key
-rw-r--r--. 1 root root  907 Aug 26 14:38 subbu.csr

[root@ip-10-0-0-29 subbu]# openssl x509 -req -in subbu.csr -CA ~/.minikube/ca.crt -CAkey ~/.minikube/ca.key -CAcreateserial -out subbu.crt -days 500
Certificate request self-signature ok
subject=CN=subbu, O=group1

[root@ip-10-0-0-29 subbu]# kubectl config set-credentials subbu --client-certificate=subbu.crt --client-key=subbu.key
User "subbu" set.

Step2: the user was created successfully. To switch from one user to another we use a context.
--Below we get to know which user we are in (current-context is minikube) and the list of users:
[root@ip-10-0-0-29 subbu]# kubectl config view
current-context: minikube

users:
- name: minikube
  user:
    client-certificate: /root/.minikube/profiles/minikube/client.crt
    client-key: /root/.minikube/profiles/minikube/client.key
- name: subbu
  user:
    client-certificate: /root/subbu/subbu.crt
    client-key: /root/subbu/subbu.key

Step3: switching from one user to another requires a context. As you see above, the minikube user has a context but the subbu user has none; we need to create a context for our user.

Create a context with the name my-context.
Set a context entry in kubeconfig: kubectl config set-context my-context --cluster=minikube --user=subbu

[root@ip-10-0-0-29 subbu]# kubectl config set-context my-context --cluster=minikube --user=subbu
Context "my-context" created.
[root@ip-10-0-0-29 subbu]# kubectl config view
contexts:
- context:
    cluster: minikube
    user: minikube
  name: minikube
- context:
    cluster: minikube
    user: subbu
  name: my-context

Step4: Switch the context.
Switch to the subbu user: kubectl config use-context my-context
[root@ip-10-0-0-29 subbu]# kubectl config use-context my-context
Switched to context "my-context".
--Now you can see we are in my-context
[root@ip-10-0-0-29 subbu]# kubectl config view
current-context: my-context
--We have not given any permissions yet; we just created the user
[root@ip-10-0-0-29 subbu]# kubectl get po
Error from server (Forbidden): pods is forbidden: User "subbu" cannot list resource "pods" in API group "" in the namespace "default"
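A quick way to check what a user is allowed to do is kubectl auth can-i; a small sketch (the --as impersonation form is run from the admin context):

kubectl auth can-i list pods              # as the current user
kubectl auth can-i list pods --as=subbu   # impersonating subbu from the minikube context

It simply prints yes or no.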

There are four components to RBAC in Kubernetes:
1. Roles 2. ClusterRoles 3. RoleBindings 4. ClusterRoleBindings

We have roles (pods, svc, deploy: create, delete, watch); attaching these roles to a user is called a RoleBinding.
The cluster has the same kind of roles (pods, svc, deploy: create, delete, watch); attaching them to a user is called a ClusterRoleBinding.

If you are giving permissions in a specific namespace, it is a RoleBinding.
If you are giving permissions across all namespaces, it is a ClusterRoleBinding.

To give permissions, we need to create a Role.

Create a specific namespace:
Step1:
[root@ip-10-0-0-29 subbu]# kubectl create ns dev
namespace/dev created

This lists the resources with their API groups and versions:

[root@ip-10-0-0-29 subbu]# kubectl api-resources
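For example, you can filter this list to find a resource's API group and short name; a sketch:

kubectl api-resources --namespaced=true | grep -i pods
kubectl api-resources | grep -i role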

Create the role
[root@ip-10-0-0-29 subbu]# vi role.yaml
[root@ip-10-0-0-29 subbu]# cat role.yaml
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: dev-role
  namespace: dev
rules:
  - apiGroups: ["*"]
    resources: ["pods","service"]
    verbs: ["get","list","create"]
[root@ip-10-0-0-29 subbu]# kubectl create -f role.yaml
role.rbac.authorization.k8s.io/dev-role created

Step2: Check whether the role was created in the dev namespace.
[root@ip-10-0-0-29 subbu]# kubectl get roles -n dev
NAME       CREATED AT
dev-role   2025-08-26T15:23:56Z
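As a side note, the same Role could also be created imperatively in one line (a sketch equivalent to role.yaml):

kubectl create role dev-role --verb=get,list,create --resource=pods,services -n dev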
Step3: RoleBinding; we need to attach the above role to the user subbu.
[root@ip-10-0-0-29 subbu]# cat binding.yaml
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: dev-role-binding
  namespace: dev
subjects:
  - kind: User
    name: subbu
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: dev-role
  apiGroup: rbac.authorization.k8s.io

[root@ip-10-0-0-29 subbu]# kubectl create -f binding.yaml
rolebinding.rbac.authorization.k8s.io/dev-role-binding created
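The equivalent imperative command would be roughly:

kubectl create rolebinding dev-role-binding --role=dev-role --user=subbu -n dev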

Step4: Attaching a role to a user is called a RoleBinding; now the subbu user is able to create pods.
Let's switch the user using the context:
[root@ip-10-0-0-29 subbu]# kubectl config use-context my-context
Switched to context "my-context".
[root@ip-10-0-0-29 subbu]# kubectl config view
current-context: my-context
We get an error: we gave permissions in the dev namespace, not the default namespace.
[root@ip-10-0-0-29 subbu]# kubectl get po
Error from server (Forbidden): pods is forbidden: User "subbu" cannot list resource "pods" in API group "" in the namespace "default"
--See here: in the dev namespace the permission exists
[root@ip-10-0-0-29 subbu]# kubectl get po -n dev
No resources found in dev namespace.
Step5: Let's create the pod. The pod is created successfully, and we are able to list it too.
[root@ip-10-0-0-29 subbu]# kubectl run pod-1 --image=nginx -n dev
pod/pod-1 created
[root@ip-10-0-0-29 subbu]# kubectl get pod -n dev
NAME    READY   STATUS    RESTARTS   AGE
pod-1   1/1     Running   0          14s

Step6: We have not given delete permission to subbu; let's try it once.
[root@ip-10-0-0-29 subbu]# kubectl delete pod pod-1 -n dev
Error from server (Forbidden): pods "pod-1" is forbidden: User "subbu" cannot delete resource "pods" in API group "" in the namespace "dev"

We get an error: the subbu user does not have permission. Let us give permission to subbu: switch to minikube
and update the role.

Step7:
[root@ip-10-0-0-29 subbu]# kubectl config use-context minikube
Switched to context "minikube".
current-context: minikube
[root@ip-10-0-0-29 subbu]# cat role.yaml
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: dev-role
  namespace: dev
rules:
  - apiGroups: ["*"]
    resources: ["pods","service"]
    verbs: ["get","list","create","delete","watch"]

Step8: apply the updated role.
After applying, run the commands below:
[root@ip-10-0-0-29 subbu]# kubectl apply -f role.yaml
Warning: resource roles/dev-role is missing the kubectl.kubernetes.io/last-applied-configuration annotation which is required by kubectl apply. kubectl apply should only be used on resources created declaratively by either kubectl create --save-config or kubectl apply. The missing annotation will be patched automatically.
role.rbac.authorization.k8s.io/dev-role configured
[root@ip-10-0-0-29 subbu]# kubectl get pod -n dev
NAME    READY   STATUS    RESTARTS   AGE
pod-1   1/1     Running   0          61s
[root@ip-10-0-0-29 subbu]# kubectl delete po pod-1 -n dev
pod "pod-1" deleted
 
ClusterRole and ClusterRoleBinding
Step1: First delete the role in dev:
[root@ip-10-0-0-29 subbu]# kubectl delete role --all -n dev
role.rbac.authorization.k8s.io "dev-role" deleted
Step2: Delete the RoleBinding:
[root@ip-10-0-0-29 subbu]# kubectl delete rolebinding --all -n dev
rolebinding.rbac.authorization.k8s.io "dev-role-binding" deleted

Step3: now create a ClusterRole and a ClusterRoleBinding and attach them to the subbu user.
[root@ip-10-0-0-29 subbu]# cat clusterrole.yaml
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: devops-role
rules:
  - apiGroups: ["*"]
    resources: ["pods","service"]
    verbs: ["get","list","create","delete","watch"]

[root@ip-10-0-0-29 subbu]# kubectl create -f clusterrole.yaml
clusterrole.rbac.authorization.k8s.io/devops-role created

[root@ip-10-0-0-29 subbu]# cat clusterbinding.yaml
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: devops-role-binding
subjects:
  - kind: User
    name: subbu
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: devops-role
  apiGroup: rbac.authorization.k8s.io

[root@ip-10-0-0-29 subbu]# kubectl create -f clusterbinding.yaml
clusterrolebinding.rbac.authorization.k8s.io/devops-role-binding created
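For reference, the imperative equivalents of these two manifests would look roughly like:

kubectl create clusterrole devops-role --verb=get,list,create,delete,watch --resource=pods,services
kubectl create clusterrolebinding devops-role-binding --clusterrole=devops-role --user=subbu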

Now the user has permissions in all namespaces.
Step4: switch to my-context (the subbu user):
[root@ip-10-0-0-29 subbu]# kubectl config use-context my-context
Switched to context "my-context".

-- See here: even the default namespace pods are listed
[root@ip-10-0-0-29 subbu]# kubectl get po
NAME        READY   STATUS                       RESTARTS       AGE
pod-1       0/1     CreateContainerConfigError   0              5d
pod-2       0/1     CreateContainerConfigError   0              5d
pod-3       1/1     Running                      7 (4h9m ago)   4d23h
pod-4       1/1     Running                      6 (4h9m ago)   4d9h
test-curl   0/1     ImagePullBackOff             0              4d6h

-- See here: we can delete the pods too

[root@ip-10-0-0-29 subbu]# kubectl delete pod pod-1
pod "pod-1" deleted
[root@ip-10-0-0-29 subbu]# kubectl delete pod pod-2
pod "pod-2" deleted

-- See here: we can create pods too

[root@ip-10-0-0-29 subbu]# kubectl run pod-1 --image=nginx
pod/pod-1 created

[root@ip-10-0-0-29 subbu]# kubectl get po
NAME        READY   STATUS             RESTARTS        AGE
pod-1       1/1     Running            0               10s
pod-3       1/1     Running            7 (4h11m ago)   4d23h
pod-4       1/1     Running            6 (4h11m ago)   4d9h
test-curl   0/1     ImagePullBackOff   0               4d6h

Step5: Create one pod in the dev namespace; next, let's create another namespace.

[root@ip-10-0-0-29 subbu]# kubectl run pod-1 --image=nginx -n dev
pod/pod-1 created

Step6:
[root@ip-10-0-0-29 subbu]# kubectl config use-context minikube
Switched to context "minikube".
--Created one namespace prod
[root@ip-10-0-0-29 subbu]# kubectl create ns prod
namespace/prod created

--Created one pod in the new namespace prod

[root@ip-10-0-0-29 subbu]# kubectl run pod-2 --image=ubuntu -n prod
pod/pod-2 created

--See here: we are able to access, create, and delete pods in all namespaces

[root@ip-10-0-0-29 subbu]# kubectl get po -n prod
NAME    READY   STATUS             RESTARTS      AGE
pod-2   0/1     CrashLoopBackOff   4 (17s ago)   119s
[root@ip-10-0-0-29 subbu]# kubectl get po -n dev
NAME    READY   STATUS    RESTARTS   AGE
pod-1   1/1     Running   0          9m23s



--Thanks 


Friday, August 22, 2025

Kubernetes part9

Class 94 Kubernetes Part9 August 22nd

Resource Quota: resource quota is one of the rich features of Kubernetes that helps to manage and distribute resources according to requirements.

Just assume a situation: we have 2 teams (team-A, team-B) working on a single Kubernetes cluster.
Team-A needs more CPUs and memory because of heavy workload tasks. Team-B's pods are created without a resource quota.
Whenever team-A's application resource usage is high and it tries to create one more pod, if space is low it can take over team-B's space, because team-B's space was not protected by a resource quota. For this reason, a resource quota on every pod is important.

There are two types of restrictions that need to be mentioned while using resource quota:

Limit: represents the maximum amount of CPU or memory the pod can use
(limit: 4 CPUs & 4 GB RAM)

Request: represents the minimum amount of CPU or memory that the pod is guaranteed to have
(pod creation minimum requirement: 1 CPU & 1 GB RAM)

Practical:

Step1: minikube setup is already done on an EC2 instance (Amazon Linux).

[root@ip-10-0-0-29 ~]# systemctl start docker
[root@ip-10-0-0-29 ~]# minikube start --driver=docker --force
[root@ip-10-0-0-29 ~]# minikube status
minikube
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

Step2: Create a namespace and switch into it.

[root@ip-10-0-0-29 ~]# kubectl create ns dev

namespace/dev created

[root@ip-10-0-0-29 ~]# kubectl config  set-context --current --namespace=dev

Context "minikube" modified.

Step3: Create the pod with resource requests and limits, as you see here:

[root@ip-10-0-0-29 ~]# vi resourcequote.yaml
[root@ip-10-0-0-29 ~]# cat resourcequote.yaml
apiVersion: v1
kind: Pod
metadata:
  name: team-a
spec:
  containers:
    - name: container1
      image: nginx
      ports:
        - containerPort: 80
      resources:
        requests:
          memory: "10Mi"
          cpu: "100m"
        limits:
          memory: "20Mi"
          cpu: "200m"
[root@ip-10-0-0-29 ~]# kubectl create -f resourcequote.yaml
pod/team-a created

Step4: As seen below, the namespace is dev and the pod was created with requests of 100m CPU and 10Mi memory.

[root@ip-10-0-0-29 ~]# kubectl describe po team-a
Name:             team-a
Namespace:        dev
Priority:         0
Service Account:  default
Node:             minikube/192.168.49.2
Start Time:       Mon, 25 Aug 2025 15:22:56 +0000
Labels:           <none>
Annotations:      <none>
Status:           Running
IP:               10.244.0.38

 Ready:          True
    Restart Count:  0
    Limits:
      cpu:     200m
      memory:  20Mi
    Requests:
      cpu:        100m
      memory:     10Mi
    Environment:  <none>

Step5: Now try to create a new pod without the requests option, giving only limits. Observe that whatever limit you give is automatically taken as the request as well (request = limit).
 
[root@ip-10-0-0-29 ~]# cat resourcequote.yaml
apiVersion: v1
kind: Pod
metadata:
  name: team-b
spec:
  containers:
    - name: container1
      image: nginx
      ports:
        - containerPort: 80
      resources:
        limits:
          memory: "20Mi"
          cpu: "200m"
[root@ip-10-0-0-29 ~]# kubectl create -f resourcequote.yaml
pod/team-b created
[root@ip-10-0-0-29 ~]# kubectl describe po team-b
Name:             team-b
Namespace:        dev
Priority:         0
Service Account:  default
Node:             minikube/192.168.49.2

    Limits:
      cpu:     200m
      memory:  20Mi
    Requests:
      cpu:        200m
      memory:     20Mi

Step6: Now we give only requests and remove the limits.
[root@ip-10-0-0-29 ~]# cat resourcequote.yaml
apiVersion: v1
kind: Pod
metadata:
  name: team-c
spec:
  containers:
    - name: container1
      image: nginx
      ports:
        - containerPort: 80
      resources:
        requests:
          memory: "10Mi"
          cpu: "100m"
[root@ip-10-0-0-29 ~]# kubectl create -f resourcequote.yaml
pod/team-c created

As seen below, only the request is set and there is no limit, which means usage is unlimited. That is not good practice.

[root@ip-10-0-0-29 ~]# kubectl describe po team-c
Name:             team-c
Namespace:        dev
Priority:         0
Service Account:  default
Node:             minikube/192.168.49.2

    Requests:
      cpu:        100m
      memory:     10Mi
    Environment:  <none>
    Mounts:

Note: three cases above
1. Both request and limit: it follows the limit
2. Limit only: request = limit
3. Request only: limit = unlimited (see the LimitRange sketch below)
[root@ip-10-0-0-29 ~]# kubectl delete po --all
pod "team-a" deleted
pod "team-b" deleted
pod "team-c" deleted

Realtime 

Step1:
[root@ip-10-0-0-29 ~]# cat realtime.yaml
---
apiVersion: v1
kind: ResourceQuota
metadata:
  name: myreq
spec:
  hard:
    requests.cpu: "258m"
    requests.memory: "500Mi"
    limits.cpu: "1000m"
    limits.memory: "1500Mi"

[root@ip-10-0-0-29 ~]# kubectl create -f realtime.yaml
resourcequota/myreq created
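As a side note, the same quota could also be created imperatively in one line (a sketch equivalent to realtime.yaml):

kubectl create quota myreq --hard=requests.cpu=258m,requests.memory=500Mi,limits.cpu=1000m,limits.memory=1500Mi -n dev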

We have added the ResourceQuota to the dev namespace (the current context namespace is dev).

[root@ip-10-0-0-29 ~]# kubectl get ns
NAME              STATUS   AGE
default           Active   4d
dev               Active   3h58m

As seen below, it shows the hard limits (1 CPU, 1500Mi memory) and how much is currently used:

[root@ip-10-0-0-29 ~]# kubectl describe ns dev
Name:         dev
Labels:       kubernetes.io/metadata.name=dev
Annotations:  <none>
Status:       Active

Resource Quotas
  Name:            myreq
  Resource         Used  Hard
  --------         ---   ---
  limits.cpu       0     1
  limits.memory    0     1500Mi
  requests.cpu     0     258m
  requests.memory  0     500Mi

No LimitRange resource.


Step2: Let's create the pod in the dev namespace. You must specify requests and limits, otherwise the pod will not be created.

[root@ip-10-0-0-29 ~]# cat pod.yml
apiVersion: v1
kind: Pod
metadata:
  name: team-a
spec:
  containers:
    - name: container1
      image: nginx
      ports:
        - containerPort: 80
      resources:
        requests:
          memory: "10Mi"
          cpu: "100m"
        limits:
          memory: "20Mi"
          cpu: "200m"

--Now the ResourceQuota usage in the dev namespace shows:

Resource Quotas
  Name:            myreq
  Resource         Used  Hard
  --------         ---   ---
  limits.cpu       200m  1
  limits.memory    20Mi  1500Mi
  requests.cpu     100m  258m
  requests.memory  10Mi  500Mi

Step3: Now try to create a pod without resources; see, we get an error that requests and limits must be specified.
[root@ip-10-0-0-29 ~]# kubectl create -f pod.yml
Error from server (Forbidden): error when creating "pod.yml": pods "team-b" is forbidden: failed quota: myreq: must specify limits.cpu for: container1; limits.memory for: container1; requests.cpu for: container1; requests.memory for: container1
[root@ip-10-0-0-29 ~]#

Step4: Created one more pod, team-b; only 58m of CPU requests is left. Let us try to create one more pod.
[root@ip-10-0-0-29 ~]# vi pod.yml
[root@ip-10-0-0-29 ~]# kubectl create -f pod.yml
pod/team-b created
[root@ip-10-0-0-29 ~]# kubectl get pod
NAME     READY   STATUS    RESTARTS   AGE
team-a   1/1     Running   0          7m52s
team-b   1/1     Running   0          15s
[root@ip-10-0-0-29 ~]# kubectl describe ns dev
Name:         dev
Labels:       kubernetes.io/metadata.name=dev
Annotations:  <none>
Status:       Active
Resource Quotas
  Name:            myreq
  Resource         Used  Hard
  --------         ---   ---
  limits.cpu       400m  1
  limits.memory    40Mi  1500Mi
  requests.cpu     200m  258m
  requests.memory  20Mi  500Mi

Step5: Creating one more pod, we get "exceeded quota"; now increase the request quota.

[root@ip-10-0-0-29 ~]# vi pod.yml
[root@ip-10-0-0-29 ~]# kubectl create -f pod.yml
Error from server (Forbidden): error when creating "pod.yml": pods "team-c" is forbidden: exceeded quota: myreq, requested: requests.cpu=100m, used: requests.cpu=200m, limited: requests.cpu=258m
[root@ip-10-0-0-29 ~]#

Increase the request quota
Step1: increase requests.cpu from 258m to 400m
[root@ip-10-0-0-29 ~]# cat realtime.yaml
---
apiVersion: v1
kind: ResourceQuota
metadata:
  name: myreq
spec:
  hard:
    requests.cpu: "400m"
    requests.memory: "500Mi"
    limits.cpu: "1000m"
    limits.memory: "1500Mi"

[root@ip-10-0-0-29 ~]# kubectl apply -f realtime.yaml

Warning: resource resourcequotas/myreq is missing the kubectl.kubernetes.io/last-applied-configuration annotation which is required by kubectl apply. kubectl apply should only be used on resources created declaratively by either kubectl create --save-config or kubectl apply. The missing annotation will be patched automatically.

resourcequota/myreq configured

--See below resource quota increased

[root@ip-10-0-0-29 ~]# kubectl describe ns dev
Name:         dev
Labels:       kubernetes.io/metadata.name=dev
Annotations:  <none>
Status:       Active

Resource Quotas
  Name:            myreq
  Resource         Used  Hard
  --------         ---   ---
  limits.cpu       400m  1
  limits.memory    40Mi  1500Mi
  requests.cpu     200m  400m
  requests.memory  20Mi  500Mi

[root@ip-10-0-0-29 ~]# kubectl create -f pod.yml
pod/team-c created
[root@ip-10-0-0-29 ~]# kubectl get po
NAME     READY   STATUS    RESTARTS   AGE
team-a   1/1     Running   0          19m
team-b   1/1     Running   0          11m
team-c   1/1     Running   0          18s

--Deleted all pods; the quota usage in the dev namespace is released:
[root@ip-10-0-0-29 ~]# kubectl delete pod --all
pod "team-a" deleted
pod "team-b" deleted
pod "team-c" deleted
[root@ip-10-0-0-29 ~]# kubectl describe ns dev
Name:         dev
Labels:       kubernetes.io/metadata.name=dev
Annotations:  <none>
Status:       Active

Resource Quotas
  Name:            myreq
  Resource         Used  Hard
  --------         ---   ---
  limits.cpu       0     1
  limits.memory    0     1500Mi
  requests.cpu     0     400m
  requests.memory  0     500Mi

--This is my resource quota 
[root@ip-10-0-0-29 ~]# kubectl get ResourceQuota
NAME    REQUEST                                          LIMIT                                      AGE
myreq   requests.cpu: 0/400m, requests.memory: 0/500Mi   limits.cpu: 0/1, limits.memory: 0/1500Mi   30m

PROBES
Probes are used for container health checks.

PROBES: used to determine the health and readiness of containers running within pods. Probes
are of 3 types:
1. Readiness probes are used to indicate when a container is ready to receive traffic.
2. Liveness probes are used to determine whether a container is still running and responding to requests.
3. Startup probes are used to determine whether the application within the container has started successfully. They are used to delay the liveness and readiness probes until the application is ready to handle traffic.

WHY PROBES?
  • In K8s, it's common to scale pods up and down. But when a pod is newly created, it will take some time to start the container and run the application.
  • If a pod is not ready to receive traffic, it may receive requests that it cannot handle, which causes downtime for our application.
  • Similarly, if a container is not running correctly, it may not be able to respond to requests, resulting in the pod being terminated and replaced with a new one.
  • To overcome these issues, we use probes.
TYPES OF PROBES:    
There are several types of probes that can be used in Kubernetes, which include HTTP, TCP, and
command (exec) probes.
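Besides the exec (command) probe used in the practical below, HTTP and TCP probes look roughly like this; the path and port here are only illustrative:

      livenessProbe:
        httpGet:
          path: /
          port: 80
        initialDelaySeconds: 5
        periodSeconds: 5
      readinessProbe:
        tcpSocket:
          port: 80
        initialDelaySeconds: 5
        periodSeconds: 10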

Practical:
In real time we need to check whether the application is running; here the probe runs every 5 seconds with a 30-second timeout, and if the application still does not respond, the container will be recreated.
Step1:

[root@ip-10-0-0-29 ~]# cat pod.yml
apiVersion: v1
kind: Pod
metadata:
  name: team-a
spec:
  containers:
    - name: container1
      image: nginx
      ports:
        - containerPort: 80
      args:
        - /bin/bash
        - -c
        - touch /tmp/ccit; sleep 10000
      livenessProbe:
        exec:
          command:
            - cat
            - /tmp/ccit
        initialDelaySeconds: 5
        periodSeconds: 5
        timeoutSeconds: 30
      resources:
        requests:
          memory: "10Mi"
          cpu: "100m"
        limits:
          memory: "20Mi"
          cpu: "200m"
[root@ip-10-0-0-29 ~]# kubectl create -f pod.yml
pod/team-a created

--See here: the liveness probe failed, so the container was killed and a new one was created
[root@ip-10-0-0-29 ~]# kubectl describe pod team-a
Name:             team-a
Namespace:        dev
Priority:         0
Service Account:  default
Node:             minikube/192.168.49.2
 bytes.
  Warning  Unhealthy  76s (x3 over 86s)    kubelet            Liveness probe failed: cat: /tmp/ccit: No such file or directory
  Normal   Killing    76s                  kubelet            Container container1 failed liveness probe, will be restarted
  Normal   Pulling    46s (x2 over 3m56s)  kubelet            Pulling image "nginx"
  Normal   Created    45s (x2 over 3m55s)  kubelet            Created container: container1
  Normal   Started    45s (x2 over 3m55s)  kubelet            Started container container1
  Normal   Pulled     45s                  kubelet            Successfully pulled image "nginx" in 795ms (795ms inc

Step2: Delete the /tmp/ccit file one more time; the probe fails again, the image is pulled, and one more container is created.
Every 5 seconds it runs the check; when the check keeps failing, a new container is created.

 Type     Reason     Age                  From               Message
  ----     ------     ----                 ----               -------
  Normal   Scheduled  9m41s                default-scheduler  Successfully assigned dev/team-a to minikube
  Normal   Pulled     9m40s                kubelet            Successfully pulled image "nginx" in 786ms (786ms including waiting). Image size: 192385800 bytes.
  Normal   Pulled     6m30s                kubelet            Successfully pulled image "nginx" in 795ms (795ms including waiting). Image size: 192385800 bytes.
  Warning  Unhealthy  31s (x6 over 7m11s)  kubelet            Liveness probe failed: cat: /tmp/ccit: No such file or directory
  Normal   Killing    31s (x2 over 7m1s)   kubelet            Container container1 failed liveness probe, will be restarted
  Normal   Pulling    1s (x3 over 9m41s)   kubelet            Pulling image "nginx"
  Normal   Created    0s (x3 over 9m40s)   kubelet            Created container: container1
  Normal   Started    0s (x3 over 9m40s)   kubelet            Started container container1
  Normal   Pulled     0s                   kubelet            Successfully pulled image "nginx" in 798ms (798ms including waiting). Image size: 192385800 bytes.

So far, under deployments, these are the pieces we have completed:
Deploy
  pods
    containers
      name, images, ports
      resources, volumes, cm & sec & probes


--Thanks 

Wednesday, August 20, 2025

Kubernetes part8

Class 93 Kubernetes Part8 August 21st

Topic: ConfigMaps, Secrets

ConfigMaps: non-sensitive information for the pod (container).

We will create a ConfigMap file like this {port: 3000, url: mysql.com} and attach it to the pod; the container will use those key-value pairs.

Secrets: sensitive information such as usernames and passwords. Secrets (base64-encoded) --> pod (container)

How many ways can a ConfigMap be created?

1. Literal (through CLI)
2. env-file
3. Folder (not commonly used)
(the above are imperative)
4. Manifest file (declarative)

Practical:

Step1: Create one EC2 machine, Amazon Linux, c7i-flex.large, 25 GB

Minikube installation

[root@ip-10-0-0-29 ~]# curl -LO https://github.com/kubernetes/minikube/releases/latest/download/minikube-linux-amd64
sudo install minikube-linux-amd64 /usr/local/bin/minikube && rm minikube-linux-amd64
[root@ip-10-0-0-29 bin]# chmod 777  minikube
[root@ip-10-0-0-29 bin]# minikube version
minikube version: v1.36.0
commit: f8f52f5de11fc6ad8244afac475e1d0f96841df1-dirty

Docker Installation

[root@ip-10-0-0-29 bin]# yum install docker -y
[root@ip-10-0-0-29 bin]# systemctl start docker
[root@ip-10-0-0-29 bin]# minikube start --driver=docker --force
[root@ip-10-0-0-29 bin]# minikube status
minikube
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

Kubectl Installation

[root@ip-10-0-0-29 bin]# curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100   138  100   138    0     0   1240      0 --:--:-- --:--:-- --:--:--  1243
100 57.3M  100 57.3M    0     0   114M      0 --:--:-- --:--:-- --:--:--  126M
-rwxrwxrwx. 1 root root 132766301 Aug 21 18:09 minikube
-rw-r--r--. 1 root root  60129464 Aug 21 18:18 kubectl
[root@ip-10-0-0-29 bin]# chmod +x kubectl
[root@ip-10-0-0-29 bin]# kubectl version
Client Version: v1.33.4
Kustomize Version: v5.6.0
Server Version: v1.33.1

Step2: Below is the default ConfigMap:

[root@ip-10-0-0-29 bin]# kubectl get cm
NAME               DATA   AGE
kube-root-ca.crt   1      16m

If you want to see this ConfigMap: it contains a certificate that comes by default when the Kubernetes setup is done.
[root@ip-10-0-0-29 bin]# kubectl describe cm kube-root-ca.crt
Name:         kube-root-ca.crt
Namespace:    default
-----BEGIN CERTIFICATE-----

Step3: Using literal, create one ConfigMap:

[root@ip-10-0-0-29 bin]# kubectl create cm mycm1 --from-literal name=subbu --from-literal course=Devops --from-literal cloud=aws
configmap/mycm1 created
[root@ip-10-0-0-29 bin]# kubectl get cm
NAME               DATA   AGE
kube-root-ca.crt   1      22m
mycm1              3      10s

[root@ip-10-0-0-29 bin]# kubectl describe cm mycm1
Name:         mycm1
Namespace:    default
Labels:       <none>
Annotations:  <none>
Data
====
cloud:
----
aws
course:
----
Devops
name:
----
subbu

Step4: Create a manifest file for pod creation.
Here under env, name: person is the variable; the value of the key name is assigned to the person variable.
--Created one more ConfigMap:

[root@ip-10-0-0-29 bin]# kubectl create cm mycm2 --from-literal company=tcs  --from-literal project=swiggy
configmap/mycm2 created

[root@ip-10-0-0-29 bin]# cat manifest.yaml
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-1
spec:
  containers:
    - name: container1
      image: nginx
      ports:
        - containerPort: 80
      env:
        - name: person
          valueFrom:
            configMapKeyRef:
              key: name
              name: mycm1
        - name: secondvalue
          valueFrom:
            configMapKeyRef:
              key: cloud
              name: mycm1
        - name: client
          valueFrom:
            configMapKeyRef:
              key: project
              name: mycm2
Step5: Create the pod

[root@ip-10-0-0-29 ~]# kubectl create -f manifest.yaml
pod/pod-1 created
[root@ip-10-0-0-29 ~]# kubectl get pod
NAME    READY   STATUS    RESTARTS   AGE
pod-1   1/1     Running   0          9s

-- Go inside the pod

[root@ip-10-0-0-29 ~]# kubectl exec -it pod-1 -- bash
root@pod-1:/#
-- The printenv command shows all the environment variables of the system;
   all the key-value pairs coming from mycm1 and mycm2 are present:

root@pod-1:/# printenv
KUBERNETES_SERVICE_PORT_HTTPS=443
KUBERNETES_SERVICE_PORT=443
HOSTNAME=pod-1
PWD=/
person=subbu
PKG_RELEASE=1~bookworm
secondvalue=aws
HOME=/root
KUBERNETES_PORT_443_TCP=tcp://10.96.0.1:443
client=swiggy
DYNPKG_RELEASE=1~bookworm
NJS_VERSION=0.9.1
TERM=xterm
SHLVL=1
KUBERNETES_PORT_443_TCP_PROTO=tcp
KUBERNETES_PORT_443_TCP_ADDR=10.96.0.1
KUBERNETES_SERVICE_HOST=10.96.0.1
KUBERNETES_PORT=tcp://10.96.0.1:443
KUBERNETES_PORT_443_TCP_PORT=443
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
NGINX_VERSION=1.29.1
NJS_RELEASE=1~bookworm
_=/usr/bin/printenv

Now we plan to attach the complete ConfigMap (mycm2) to the pod;
it has two key-value pairs: company=tcs, project=swiggy.

Step1:

[root@ip-10-0-0-29 ~]# cat manifest.yaml
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-1
spec:
  containers:
    - name: container1
      image: nginx
      ports:
        - containerPort: 80
      envFrom:
        - configMapRef:
              name: mycm2
Step2: Delete the existing pod, then create it from the manifest:
[root@ip-10-0-0-29 ~]# kubectl create -f manifest.yaml
pod/pod-1 created
[root@ip-10-0-0-29 ~]# kubectl get po
NAME    READY   STATUS    RESTARTS   AGE
pod-1   1/1     Running   0          7s


--Go inside the pod; see here the complete ConfigMap is loaded into the environment:
[root@ip-10-0-0-29 ~]# kubectl exec -it pod-1 -- bash
root@pod-1:/# printenv
KUBERNETES_SERVICE_PORT_HTTPS=443
KUBERNETES_SERVICE_PORT=443
project=swiggy
HOSTNAME=pod-1
PWD=/
PKG_RELEASE=1~bookworm
HOME=/root
KUBERNETES_PORT_443_TCP=tcp://10.96.0.1:443
DYNPKG_RELEASE=1~bookworm
NJS_VERSION=0.9.1
TERM=xterm
company=tcs
SHLVL=1
KUBERNETES_PORT_443_TCP_PROTO=tcp
KUBERNETES_PORT_443_TCP_ADDR=10.96.0.1
KUBERNETES_SERVICE_HOST=10.96.0.1
KUBERNETES_PORT=tcp://10.96.0.1:443
KUBERNETES_PORT_443_TCP_PORT=443
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
NGINX_VERSION=1.29.1
NJS_RELEASE=1~bookworm
_=/usr/bin/printenv

2.env-file

Step1: Create one env file 

[root@ip-10-0-0-29 ~]# vi one.env
[root@ip-10-0-0-29 ~]# cat one.env
app=swiggy
env=dev
team=devops
api=http://www.amazon.com
port=3000

Step2: Create the ConfigMap from the env file. One ConfigMap can hold a maximum of 1 MB of data;
if more than that is required, we can use a Kubernetes volume.

[root@ip-10-0-0-29 ~]# kubectl create cm amazon  --from-env-file=one.env
configmap/amazon created

[root@ip-10-0-0-29 ~]# kubectl get cm
NAME               DATA   AGE
amazon             5      27s
kube-root-ca.crt   1      69m
mycm1              3      47m
mycm2              2      32m
Step3: Attach the ConfigMap to a pod; just change the ConfigMap name:
[root@ip-10-0-0-29 ~]# cat manifest.yaml
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-2
spec:
  containers:
    - name: container1
      image: nginx
      ports:
        - containerPort: 80
      envFrom:
        - configMapRef:
              name: amazon

[root@ip-10-0-0-29 ~]# kubectl create -f manifest.yaml
pod/pod-2 created
[root@ip-10-0-0-29 ~]# kubectl get po
NAME    READY   STATUS    RESTARTS   AGE
pod-1   1/1     Running   0          14m
pod-2   1/1     Running   0          9s
Step4: See below, the complete ConfigMap is loaded into the environment. Generally this env-file ConfigMap pattern is used to connect an application and a database.
[root@ip-10-0-0-29 ~]# kubectl exec -it pod-2 -- bash
root@pod-2:/# printenv
KUBERNETES_SERVICE_PORT_HTTPS=443
KUBERNETES_SERVICE_PORT=443
env=dev
HOSTNAME=pod-2
PWD=/
app=swiggy
PKG_RELEASE=1~bookworm
api=http://www.amazon.com
HOME=/root
KUBERNETES_PORT_443_TCP=tcp://10.96.0.1:443
port=3000
DYNPKG_RELEASE=1~bookworm
team=devops
NJS_VERSION=0.9.1
TERM=xterm
SHLVL=1
KUBERNETES_PORT_443_TCP_PROTO=tcp
KUBERNETES_PORT_443_TCP_ADDR=10.96.0.1
KUBERNETES_SERVICE_HOST=10.96.0.1
KUBERNETES_PORT=tcp://10.96.0.1:443
KUBERNETES_PORT_443_TCP_PORT=443
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
NGINX_VERSION=1.29.1
NJS_RELEASE=1~bookworm
_=/usr/bin/printenv

Delete the config map 
[root@ip-10-0-0-29 ~]# kubectl get cm
NAME               DATA   AGE
amazon             5      12m
kube-root-ca.crt   1      81m
mycm1              3      59m
mycm2              2      44m
[root@ip-10-0-0-29 ~]# kubectl delete cm mycm1
configmap "mycm1" deleted
[root@ip-10-0-0-29 ~]# kubectl delete cm mycm2
configmap "mycm2" deleted
[root@ip-10-0-0-29 ~]# kubectl delete cm amazon
configmap "amazon" deleted

4. Manifest file (declarative)
Step1: Create a manifest file with the data in key-value format, then create the ConfigMap. Numbers must be in double quotes, otherwise you get an "unmarshal number" error.

[root@ip-10-0-0-29 ~]# vi configmapdec.yaml
[root@ip-10-0-0-29 ~]# cat configmapdec.yaml
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: finalcm
data:
  DB_URL: http://www.mysql.com
  PORT: "3306"
Step2:create the configmap
[root@ip-10-0-0-29 ~]# kubectl create -f configmapdec.yaml
configmap/finalcm created
[root@ip-10-0-0-29 ~]# kubectl get cm
NAME               DATA   AGE
finalcm            2      10s
kube-root-ca.crt   1      92m
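Besides environment variables, a ConfigMap can also be mounted as a volume, where each key becomes a file under the mount path. A minimal sketch using finalcm; the pod name and mount path are illustrative:

---
apiVersion: v1
kind: Pod
metadata:
  name: cm-vol-pod
spec:
  containers:
    - name: container1
      image: nginx
      volumeMounts:
        - name: cm-vol
          mountPath: /etc/appconfig
  volumes:
    - name: cm-vol
      configMap:
        name: finalcm

Inside the container, /etc/appconfig/DB_URL and /etc/appconfig/PORT would then contain the values.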

                                                      Secrets
Literal
Step1: Create secrets
[root@ip-10-0-0-29 ~]# kubectl get secret
No resources found in default namespace.

[root@ip-10-0-0-29 ~]# kubectl create secret generic mysec1  --from-literal username=subbu --from-literal password=admin123
secret/mysec1 created
[root@ip-10-0-0-29 ~]# kubectl get secret
NAME     TYPE     DATA   AGE
mysec1   Opaque   2      47s

--See here the values are hidden; describe only shows the byte counts
[root@ip-10-0-0-29 ~]# kubectl describe secret mysec1
Name:         mysec1
Namespace:    default
Labels:       <none>
Annotations:  <none>

Type:  Opaque

Data
====
password:  8 bytes
username:  5 bytes
--As you see below, the username and password are base64-encoded:

[root@ip-10-0-0-29 ~]# kubectl get secret mysec1 -o yaml
apiVersion: v1
data:
  password: YWRtaW4xMjM=
  username: c3ViYnU=
kind: Secret
metadata:
  creationTimestamp: "2025-08-21T19:52:34Z"
  name: mysec1
  namespace: default
  resourceVersion: "4999"
  uid: d0dcb7ed-897d-4e12-bc79-21d9c171950e
type: Opaque
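Note: these values are only base64-encoded, not truly encrypted; anyone with read access to the Secret can decode them:

echo "c3ViYnU=" | base64 -d        # prints: subbu
echo "YWRtaW4xMjM=" | base64 -d    # prints: admin123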

Env file (the values get base64-encoded too):

[root@ip-10-0-0-29 ~]# kubectl create secret generic envcmfile --from-env-file=one.env
secret/envcmfile created
[root@ip-10-0-0-29 ~]# kubectl get secret
NAME        TYPE     DATA   AGE
envcmfile   Opaque   5      11s
mysec1      Opaque   2      7m24s
[root@ip-10-0-0-29 ~]#
--See here the env-file values are also encoded:
[root@ip-10-0-0-29 ~]# kubectl get secret envcmfile -o yaml
apiVersion: v1
data:
  api: aHR0cDovL3d3dy5hbWF6b24uY29t
  app: c3dpZ2d5
  env: ZGV2
  port: MzAwMA==
  team: ZGV2b3Bz
kind: Secret
metadata:
  creationTimestamp: "2025-08-21T19:59:47Z"
  name: envcmfile
  namespace: default
  resourceVersion: "5345"
  uid: 04a0cd0c-d124-4035-9c3b-ec0649bd1bca

Manifest file

Step1:

[root@ip-10-0-0-29 ~]# cat manifest1.yaml
---
apiVersion: v1
kind: Secret
metadata:
  name: finalsecret
data:
  key: apivalues
  pass: admin@123

Step2: Here we gave plain values in the manifest1 file above, but the data field must contain base64-encoded values:
[root@ip-10-0-0-29 ~]# kubectl create -f manifest1.yaml
Error from server (BadRequest): error when creating "manifest1.yaml": Secret in version "v1" cannot be handled as a Secret: illegal base64 data at input byte 8
--Encode the values with base64, then put the encoded values into the manifest1 file:
[root@ip-10-0-0-29 ~]# echo -n "apivalues" | base64
YXBpdmFsdWVz
[root@ip-10-0-0-29 ~]# echo -n "admin@123" | base64
YWRtaW5AMTIz

data:
  key: YXBpdmFsdWVz
  pass: YWRtaW5AMTIz

Step3:
[root@ip-10-0-0-29 ~]# vi manifest1.yaml
[root@ip-10-0-0-29 ~]# kubectl create -f manifest1.yaml
secret/finalsecret created
[root@ip-10-0-0-29 ~]# kubectl get secret
NAME          TYPE     DATA   AGE
envcmfile     Opaque   5      13m
finalsecret   Opaque   2      35s
mysec1        Opaque   2      20m

Step4: now create the pod
[root@ip-10-0-0-29 ~]# vi manifest.yaml
[root@ip-10-0-0-29 ~]# cat manifest.yaml
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-3
spec:
  containers:
    - name: container1
      image: nginx
      ports:
        - containerPort: 80
      envFrom:
        - secretRef:
              name: finalsecret

[root@ip-10-0-0-29 ~]# kubectl create -f manifest.yaml
Error from server (BadRequest): error when creating "manifest.yaml": Pod in version "v1" cannot be handled as a Pod: strict decoding error: unknown field "spec.containers[0].envFrom[0].SecretRef"
[root@ip-10-0-0-29 ~]#
-- After correcting the field name to secretRef (lowercase), pod-3 is created
[root@ip-10-0-0-29 ~]# kubectl create -f manifest.yaml
pod/pod-3 created
[root@ip-10-0-0-29 ~]# kubectl get pod
NAME    READY   STATUS    RESTARTS   AGE
pod-1   1/1     Running   0          63m
pod-2   1/1     Running   0          48m
pod-3   1/1     Running   0          12s

Step5: Go inside the pod. Outside, the Secret stores the data encoded, but inside the pod it shows the plain values.

[root@ip-10-0-0-29 ~]# kubectl exec -id pod-3  -- bash
error: unknown shorthand flag: 'd' in -d
See 'kubectl exec --help' for usage.
[root@ip-10-0-0-29 ~]# kubectl exec -it pod-3  -- bash
root@pod-3:/# printenv
KUBERNETES_SERVICE_PORT_HTTPS=443
pass=admin@123
KUBERNETES_SERVICE_PORT=443
HOSTNAME=pod-3
PWD=/
PKG_RELEASE=1~bookworm
HOME=/root
KUBERNETES_PORT_443_TCP=tcp://10.96.0.1:443
DYNPKG_RELEASE=1~bookworm
NJS_VERSION=0.9.1
TERM=xterm
key=apivalues
 
Task
Create one private repository on Docker Hub and push one image.
Then try to create a pod from the private repository; we get an error.

Step1: See here getting error 

[root@ip-10-0-0-29 ~]# vi manifest.yaml
[root@ip-10-0-0-29 ~]# cat manifest.yaml
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-4
spec:
  containers:
    - name: container1
      image: vakatisubbu/movie1:latest

[root@ip-10-0-0-29 ~]# kubectl create -f manifest.yaml
pod/pod-4 created
[root@ip-10-0-0-29 ~]# kubectl get pod
NAME    READY   STATUS         RESTARTS   AGE
pod-1   1/1     Running        0          79m
pod-2   1/1     Running        0          65m
pod-3   1/1     Running        0          16m
pod-4   0/1     ErrImagePull   0          9s

Step2: Describe the pod; as you see below, it is a credentials issue. We need to use a Secret:
1. Create one Secret containing the Docker username and password.
2. Reference that Secret in the pod spec and create the pod.

Events:
  Type     Reason     Age                From               Message
  ----     ------     ----               ----               -------
  Normal   Scheduled  65s                default-scheduler  Successfully assigned default/pod-4 to minikube
  Normal   Pulling    22s (x3 over 64s)  kubelet            Pulling image "shaikmustafa77/myprivaterepo:latest"
  Warning  Failed     21s (x3 over 63s)  kubelet            Failed to pull image "shaikmustafa77/myprivaterepo:latest": Error response from daemon: pull access denied for shaikmustafa77/myprivaterepo, repository does not exist or may require 'docker login': denied: requested access to the resource is denied
  Warning  Failed     21s (x3 over 63s)  kubelet            Error: ErrImagePull
  Normal   BackOff    7s (x3 over 62s)   kubelet            Back-off pulling image "shaikmustafa77/myprivaterepo:latest"
  Warning  Failed     7s (x3 over 62s)   kubelet            Error: ImagePullBackOff

Step3: Secret creation (literal):
[root@ip-10-0-0-29 ~]# kubectl create secret docker-registry dockersecret --docker-server=docker.io --docker-username=vakatisubbu --docker-password=Jan20@2015 --docker-email=vakati.subbu@gmail.com
secret/dockersecret created

--New secret created successfully

[root@ip-10-0-0-29 ~]# kubectl get secret
NAME           TYPE                             DATA   AGE
dockersecret   kubernetes.io/dockerconfigjson   1      21s
envcmfile      Opaque                           5      13h
finalsecret    Opaque                           2      13h
mysec1         Opaque                           2      13h
--See here the Secret data is base64-encoded
[root@ip-10-0-0-29 ~]# kubectl get  secret dockersecret -o yaml
apiVersion: v1
data:
  .dockerconfigjson: eyJhdXRocyI6eyJkb2NrZXIuaW8iOnsidXNlcm5hbWUiOiJ2YWthdGlzdWJidSIsInBhc3N3b3JkIjoiSmFuMjBAMjAxNSIsImVtYWlsIjoidmFrYXRpLnN1YmJ1QGdtYWlsLmNvbSIsImF1dGgiOiJkbUZyWVhScGMzVmlZblU2U21GdU1qQkFNakF4TlE9PSJ9fX0=
kind: Secret
metadata:
  creationTimestamp: "2025-08-22T09:47:36Z"
  name: dockersecret
  namespace: default
  resourceVersion: "12738"
  uid: 4c6776da-794f-45bc-9526-ffdf678970d5
type: kubernetes.io/dockerconfigjson

Step4:

[root@ip-10-0-0-29 ~]# cat manifest.yaml
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-4
spec:
  containers:
    - name: container1
      image: vakatisubbu/movie1:latest
  imagePullSecrets:
    - name: dockersecret
[root@ip-10-0-0-29 ~]# kubectl create -f manifest.yaml
pod/pod-4 created
[root@ip-10-0-0-29 ~]# kubectl get pod
NAME    READY   STATUS                       RESTARTS      AGE
pod-1   0/1     CreateContainerConfigError   0             14h
pod-2   0/1     CreateContainerConfigError   0             14h
pod-3   1/1     Running                      1 (13h ago)   13h
pod-4   1/1     Running                      0             7s

--Describe for pod-4
[root@ip-10-0-0-29 ~]# kubectl describe pod pod-4
Name:             pod-4
Namespace:        default

Events:
  Type    Reason     Age    From               Message
  ----    ------     ----   ----               -------
  Normal  Scheduled  3m     default-scheduler  Successfully assigned default/pod-4 to minikube
  Normal  Pulling    3m     kubelet            Pulling image "vakatisubbu/movie1:latest"
  Normal  Pulled     2m54s  kubelet            Successfully pulled image "vakatisubbu/movie1:latest" in 5.409s (5.41s including waiting). Image size: 244571564 bytes.
  Normal  Created    2m54s  kubelet            Created container: container1
  Normal  Started    2m54s  kubelet            Started container container1


[root@ip-10-0-0-29 ~]# kubectl get pods -o wide
NAME    READY   STATUS                       RESTARTS      AGE     IP            NODE       NOMINATED NODE   READINESS GATES
pod-3   1/1     Running                      1 (13h ago)   13h     10.244.0.9    minikube   <none>           <none>
pod-4   1/1     Running                      0             7m59s   10.244.0.14   minikube   <none>           <none>

--Create  the service 

[root@ip-10-0-0-29 ~]# cat movie-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: movie-service
spec:
  selector:
    app: movie-app  # This must match your pod's label
  ports:
    - port: 80        # Service port
      targetPort: 80  # Container port (where Apache runs)
  type: NodePort      # Makes it accessible from outside

[root@ip-10-0-0-29 ~]# kubectl get svc
NAME            TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
kubernetes      ClusterIP   10.96.0.1      <none>        443/TCP        19h
movie-service   NodePort    10.109.52.96   <none>        80:30361/TCP   66m

--See here url is responding 

[root@ip-10-0-0-29 ~]# kubectl get service movie-service
NAME            TYPE       CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
movie-service   NodePort   10.109.52.96   <none>        80:30361/TCP   12m
[root@ip-10-0-0-29 ~]# minikube service movie-service --url
http://192.168.49.2:30361
[root@ip-10-0-0-29 ~]# curl http://192.168.49.2:30361
<!DOCTYPE html>
<html>
<head>
<meta name="viewport" content="width=device-width, initial-scale=1">
<style>
body {font-family: Arial, Helvetica, sans-serif;}

/* Full-width input fields */

--Thanks 

Tuesday, August 19, 2025

Kubernetes part7

Class 91 Kubernetes Part7 August 19th 

Kubernetes volumes
Step1: Create an EC2 instance: Ubuntu, 25 GB disk space, c7.large
Installations
ubuntu@ip-10-0-0-23:~$ sudo -i
root@ip-10-0-0-23:~# apt update -y

Before installing minikube, first we need to install Docker.

Docker Installation :

root@ip-10-0-0-23:~# apt install docker.io -y

root@ip-10-0-0-23:~# systemctl status docker

Minikube installation:

curl -LO https://github.com/kubernetes/minikube/releases/latest/download/minikube-linux-amd64
sudo install minikube-linux-amd64 /usr/local/bin/minikube && rm minikube-linux-amd64


root@ip-10-0-0-23:~# minikube version
minikube version: v1.36.0
commit: f8f52f5de11fc6ad8244afac475e1d0f96841df1-dirty

root@ip-10-0-0-23:~# ll /usr/local/bin
total 129668
drwxr-xr-x  2 root root      4096 Aug 20 13:44 ./
drwxr-xr-x 10 root root      4096 Jun 10 10:00 ../
-rwxr-xr-x  1 root root 132766301 Aug 20 13:44 minikube*
Start the minikube
root@ip-10-0-0-23:~# minikube start --driver=docker --force

root@ip-10-0-0-23:~# minikube status
minikube
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

Installation Kubectl:
https://kubernetes.io/docs/tasks/tools/install-kubectl-linux/
root@ip-10-0-0-23:~# curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
root@ip-10-0-0-23:~# chmod 777 kubectl
root@ip-10-0-0-23:~# mv kubectl /usr/local/bin/
root@ip-10-0-0-23:~# kubectl version
Client Version: v1.33.4
Kustomize Version: v5.6.0
Server Version: v1.33.1

Kubernetes Volumes:
 Docker volumes replicate data from one container to another container.
 Pods are ephemeral; their storage is temporary.

K8s volumes and liveness probes:

  • Basically, K8s works with short-lived data, so let's unveil the power of volumes like emptyDir, HostPath, PV, and PVC.
  • Data is a very important thing for an application. In K8s, data is kept for a short time in the applications in the pods/containers; by default, the data is no longer available afterward. To overcome this, we use Kubernetes volumes.
  • But before going into the types of volumes, let's understand some facts about pods' and containers' short-lived data.

Scenario 1: multiple containers within a pod can share one volume, because the volume is attached to the pod.
Scenario 2: if the pod is deleted, the container and volume are also deleted; a new pod will be created by the replication controller and it will create a new volume, but the previous data is lost.
To overcome this issue we use Kubernetes volumes.

Types of volumes:
1. EmptyDir
2. HostPath
3. Persistent Volume (PV)
4. Persistent Volume Claim (PVC)

1. EmptyDir

This volume is used to share data between multiple containers within a pod, instead of the host machine or any master/worker node.

An emptyDir volume is created when the pod is created, and it exists as long as the pod.

There is no data in the emptyDir volume when it is first created.

Containers within the pod can access the other containers' data; however, the mount path can be different for each container.

If a container crashes, the data will still persist and can be accessed by other or newly created containers.

Pod: a pod is a group of containers. For example, a pod can have container1 along with one more helper container (also called a sidecar container), used for logging, backups, and other purposes. That is the reason the script below gives a command with a while loop: without it, the container would exit as soon as it is created.

Practical: the script below creates two containers and attaches the volume, mounting it at /opt/jenkins (and /etc/docker in the second container). If you are using multiple containers you need to specify a command, otherwise the pod will not keep running.
 
Step1:
root@ip-10-0-0-23:~# vi manifest.yaml
root@ip-10-0-0-23:~# cat manifest.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mydepl
spec:
  replicas: 1
  selector:
    matchLabels:
      app: swiggy
  template:    # Moved to top-level under spec
    metadata:
      labels:
        app: swiggy  # Must match selector
    spec:
      containers:
      - name: container1
        image: nginx
        command: ["/bin/bash", "-c", "while true; do echo welcome to DevOps class; sleep 5; done"]
        volumeMounts:
          - name: myvolume
            mountPath: "/opt/jenkins"
        ports:
        - containerPort: 80
      - name: container2
        image: nginx
        command: ["/bin/bash", "-c", "while true; do echo welcome to DevOps class; sleep 5; done"]
        volumeMounts:
          - name: myvolume
            mountPath: "/etc/docker"

      volumes:
      - name: myvolume
        emptyDir: {}

root@ip-10-0-0-23:~# vi  manifest.yaml
root@ip-10-0-0-23:~# kubectl create -f manifest.yaml
deployment.apps/mydepl created
root@ip-10-0-0-23:~# kubectl get pod
NAME                      READY   STATUS    RESTARTS   AGE
mydepl-5d6645bd5b-qg2g7   2/2     Running   0          12s
Step2: Check whether the volume is mounted.

We have taken an emptyDir volume, so no data exists yet; let's just create some files.
root@ip-10-0-0-23:~# kubectl exec -it mydepl-5d6645bd5b-qg2g7 -c container1 -- bash
root@mydepl-5d6645bd5b-qg2g7:/# cd /opt/jenkins
root@mydepl-5d6645bd5b-qg2g7:/opt/jenkins# ls

Create some files and a directory; they will be copied automatically to the other container:
root@mydepl-5d6645bd5b-qg2g7:/opt/jenkins# touch app.py data.dat conta.pdf
root@mydepl-5d6645bd5b-qg2g7:/opt/jenkins# mkdir tcs infosys
root@mydepl-5d6645bd5b-qg2g7:/opt/jenkins# ls -lrt
total 8
-rw-r--r-- 1 root root    0 Aug 20 14:48 data.dat
-rw-r--r-- 1 root root    0 Aug 20 14:48 conta.pdf
-rw-r--r-- 1 root root    0 Aug 20 14:48 app.py
drwxr-xr-x 2 root root 4096 Aug 20 14:48 tcs
drwxr-xr-x 2 root root 4096 Aug 20 14:48 infosys

Container2: the files automatically exist there too
root@ip-10-0-0-23:~# kubectl exec -it mydepl-5d6645bd5b-qg2g7 -c container2 -- bash
root@mydepl-5d6645bd5b-qg2g7:/# cd /etc/docker
root@mydepl-5d6645bd5b-qg2g7:/etc/docker# ls -lrt
total 8
-rw-r--r-- 1 root root    0 Aug 20 14:48 data.dat
-rw-r--r-- 1 root root    0 Aug 20 14:48 conta.pdf
-rw-r--r-- 1 root root    0 Aug 20 14:48 app.py
drwxr-xr-x 2 root root 4096 Aug 20 14:48 tcs
drwxr-xr-x 2 root root 4096 Aug 20 14:48 infosys

Container2 create some files 
root@mydepl-5d6645bd5b-qg2g7:/etc/docker# touch jenkis.yaml docker.yaml subbu.txt
root@mydepl-5d6645bd5b-qg2g7:/etc/docker#
exit
command terminated with exit code 130
Container1: the files came over automatically
root@ip-10-0-0-23:~# kubectl exec -it mydepl-5d6645bd5b-qg2g7 -c container1 -- bash
root@mydepl-5d6645bd5b-qg2g7:/# cd /opt/jenkins
root@mydepl-5d6645bd5b-qg2g7:/opt/jenkins# ls -lrt
total 8
-rw-r--r-- 1 root root    0 Aug 20 14:48 data.dat
-rw-r--r-- 1 root root    0 Aug 20 14:48 conta.pdf
-rw-r--r-- 1 root root    0 Aug 20 14:48 app.py
drwxr-xr-x 2 root root 4096 Aug 20 14:48 tcs
drwxr-xr-x 2 root root 4096 Aug 20 14:48 infosys
-rw-r--r-- 1 root root    0 Aug 20 14:51 subbu.txt
-rw-r--r-- 1 root root    0 Aug 20 14:51 jenkis.yaml
-rw-r--r-- 1 root root    0 Aug 20 14:51 docker.yaml

Step3: In the above example, as long as the pod is running, container-to-container replication works. If the pod is deleted,
a new pod is created but the volume data is lost.

--Pod deleted 
root@ip-10-0-0-23:~# kubectl delete pod mydepl-5d6645bd5b-qg2g7
pod "mydepl-5d6645bd5b-qg2g7" deleted
-- Automatically new pod created 
root@ip-10-0-0-23:~# kubectl get pod
NAME                      READY   STATUS    RESTARTS   AGE
mydepl-5d6645bd5b-whtr8   2/2     Running   0          56s
--We go inside the container; no files exist in the volume
root@ip-10-0-0-23:~# kubectl exec -it mydepl-5d6645bd5b-whtr8 -c container1 -- bash
root@mydepl-5d6645bd5b-whtr8:/# cd /opt/jenkins/
root@mydepl-5d6645bd5b-whtr8:/opt/jenkins# ls
root@mydepl-5d6645bd5b-whtr8:/opt/jenkins#

Container2 is also missing the data; we lost the data:

root@ip-10-0-0-23:~# kubectl exec -it mydepl-5d6645bd5b-whtr8 -c container2 -- bash
root@mydepl-5d6645bd5b-whtr8:/# cd /etc/docker
root@mydepl-5d6645bd5b-whtr8:/etc/docker# ls

To overcome this issue we use HostPath.

2. HostPath

This volume type is the advanced version of the previous volume type, EmptyDir.

In EmptyDir, the data is stored in volumes that reside inside the pods only, where the host machine doesn't have the data of the pods and containers.

The HostPath volume type helps to access the data of the pod or container volumes from the host machine.

HostPath replicates the data of the volumes on the host machine, and if you make changes from the host machine then the changes will be reflected in the pod's volumes (if attached).

Step1: delete existing deployment 

root@ip-10-0-0-23:~# kubectl delete deploy mydepl

deployment.apps "mydepl" deleted

Step2: In the previous manifest file, only the lines below are changed, for hostPath:

      volumes:
      - name: myvolume
        hostPath:
          path: /tmp/mydata/

root@ip-10-0-0-23:~# kubectl create -f manifest.yaml
deployment.apps/mydepl created

Container1: some files are created

root@ip-10-0-0-23:~# kubectl exec -it mydepl-57f4bc8d46-7dkqm -c container1 -- bash
root@mydepl-57f4bc8d46-7dkqm:/# cd /opt/jenkins/
root@mydepl-57f4bc8d46-7dkqm:/opt/jenkins# touch app.py jenkins.yaml java.jvm
root@mydepl-57f4bc8d46-7dkqm:/opt/jenkins# ls
app.py  java.jvm  jenkins.yaml

Container2: check whether the files came or not
root@ip-10-0-0-23:~# kubectl exec -it mydepl-57f4bc8d46-7dkqm -c container2 -- bash
root@mydepl-57f4bc8d46-7dkqm:/# cd /etc/docker/
root@mydepl-57f4bc8d46-7dkqm:/etc/docker# ls
app.py  java.jvm  jenkins.yaml

Step3: Delete the pod
root@ip-10-0-0-23:~# kubectl delete pod mydepl-57f4bc8d46-7dkqm
pod "mydepl-57f4bc8d46-7dkqm" deleted

root@ip-10-0-0-23:~# kubectl get po
NAME                      READY   STATUS    RESTARTS   AGE
mydepl-57f4bc8d46-6fgfl   2/2     Running   0          46s
root@ip-10-0-0-23:~# kubectl exec -it mydepl-57f4bc8d46-6fgfl -c container1 -- bash
root@mydepl-57f4bc8d46-6fgfl:/# cd /opt/jenkins/
root@mydepl-57f4bc8d46-6fgfl:/opt/jenkins# ls
app.py  java.jvm  jenkins.yaml
root@mydepl-57f4bc8d46-6fgfl:/opt/jenkins#
exit
root@ip-10-0-0-23:~# kubectl exec -it mydepl-57f4bc8d46-6fgfl -c container2 -- bash
root@mydepl-57f4bc8d46-6fgfl:/# cd /etc/docker/
root@mydepl-57f4bc8d46-6fgfl:/etc/docker# ls
app.py  java.jvm  jenkins.yaml
root@mydepl-57f4bc8d46-6fgfl:/etc/docker# touch test nexflex google
root@mydepl-57f4bc8d46-6fgfl:/etc/docker#
exit
root@ip-10-0-0-23:~# kubectl delete pod mydepl-57f4bc8d46-6fgfl
pod "mydepl-57f4bc8d46-6fgfl" deleted
root@ip-10-0-0-23:~# kubectl get pod
NAME                      READY   STATUS    RESTARTS   AGE
mydepl-57f4bc8d46-jww7m   2/2     Running   0          40s
root@ip-10-0-0-23:~# kubectl exec -it mydepl-57f4bc8d46-jww7m -c container1 -- bash
root@mydepl-57f4bc8d46-jww7m:/# cd /opt/jenkins/
root@mydepl-57f4bc8d46-jww7m:/opt/jenkins# ls
app.py  google  java.jvm  jenkins.yaml  nexflex  test

Here we attached a hostPath to the node: whenever a pod is deleted, the new pod is created and attaches the volume from the hostPath again. This works on a single node, but in real time we use multi-node clusters; since the hostPath lives on a single node, a pod scheduled on another node will not see the data.

If the node is deleted (or replaced by autoscaling), the data is also lost. To overcome this issue we use:

PV 
PVC 
We will store the data in cloud EBS storage; for example, take 100 GB:
Persistent volume (PV-1) 15 GB
Persistent volume (PV-2) 10 GB
Persistent volume (PV-3) 45 GB

That leaves 30 GB, which is kept for future use.
Claiming a persistent volume:

Suppose a pod I am creating needs 13 GB. For that we claim storage with a PVC (PersistentVolumeClaim) against the PVs. The PVC checks for a suitable PV among PV-1, PV-2, and PV-3; here PV-1 and PV-3 are suitable (PV-2 is too small).
Before creating the pod, we must create the PVC.

If you try to create a pod that needs more storage than any available PV can offer (say 50 GB here), the PVC finds no suitable PV, goes into Pending state, and the pod cannot be created either.
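
A sketch of such an over-sized claim (name illustrative); given the PVs above it would stay Pending:

---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: big-claim                # illustrative name
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 50Gi              # larger than any PV above, so nothing binds

With kubectl get pvc it would show STATUS Pending until a large-enough PV appears.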


Step1: Create an EBS volume; based on the EBS volume-id we then create the PV.
In the AWS console: Elastic Block Store > Create volume

my-app volume created

Step2: Copy the volume-id: vol-08360ae5933a3ef0a

root@ip-10-0-0-23:~# vi pv.yaml
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-1
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  awsElasticBlockStore:
    volumeID: vol-08360ae5933a3ef0a
    fsType: ext4
  persistentVolumeReclaimPolicy: Recycle

Step3: Create PV-1 (the message below is only a warning):
root@ip-10-0-0-23:~# kubectl create -f  pv.yaml
Warning: spec.persistentVolumeReclaimPolicy: The Recycle reclaim policy is deprecated. Instead, the recommended approach is to use dynamic provisioning.
persistentvolume/pv-1 created
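
The warning points at dynamic provisioning: instead of pre-creating PVs by hand, a StorageClass provisions the backing volume on demand when a claim arrives. A minimal sketch of such a claim (the StorageClass name is illustrative; on minikube the default class is called standard):

---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: dynamic-claim            # illustrative name
spec:
  storageClassName: standard     # illustrative; use a class that exists in your cluster
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 3Gi               # the provisioner creates a matching volume automatically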

Step4: Create one more, PV-2 (note: the same volumeID is reused here purely for the demo; in practice each PV normally points to its own EBS volume):
root@ip-10-0-0-23:~# vi pv.yaml
root@ip-10-0-0-23:~# cat pv.yaml
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-2
spec:
  capacity:
    storage: 2Gi
  accessModes:
    - ReadWriteOnce
  awsElasticBlockStore:
    volumeID: vol-08360ae5933a3ef0a
    fsType: ext4
  persistentVolumeReclaimPolicy: Recycle

root@ip-10-0-0-23:~# kubectl create -f  pv.yaml
Warning: spec.persistentVolumeReclaimPolicy: The Recycle reclaim policy is deprecated. Instead, the recommended approach is to use dynamic provisioning.
persistentvolume/pv-2 created

Step5: List the PVs; two PVs were created (pv-1, pv-2):
root@ip-10-0-0-23:~# kubectl get pv
root@ip-10-0-0-23:~#

Step6: Now claim storage with a PVC; write the PVC file:

root@ip-10-0-0-23:~# vi pvc.yaml
root@ip-10-0-0-23:~# cat pvc.yaml
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mypvc-1
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 3Gi

root@ip-10-0-0-23:~# kubectl create -f pvc.yaml
persistentvolumeclaim/mypvc-1 created

root@ip-10-0-0-23:~# kubectl get pvc
-- The PVC we created is listed

root@ip-10-0-0-23:~# kubectl get pv
-- We have claimed 3 Gi; the claim shows as Bound

Step7: Now we need to attach this volume (PVC) to the pod.
Delete the existing deployment first:
root@ip-10-0-0-23:~# kubectl get deploy
NAME     READY   UP-TO-DATE   AVAILABLE   AGE
mydepl   1/1     1            1           148m
root@ip-10-0-0-23:~# kubectl delete deploy mydepl
deployment.apps "mydepl" deleted
root@ip-10-0-0-23:~# kubectl get deploy
No resources found in default namespace.

Step8: Create the deployment with the new volume; only the volumes section at the bottom needs to change.
root@ip-10-0-0-23:~# vi manifest.yaml
root@ip-10-0-0-23:~# cat manifest.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mydepl
spec:
  replicas: 1
  selector:
    matchLabels:
      app: swiggy
  template:    # Moved to top-level under spec
    metadata:
      labels:
        app: swiggy  # Must match selector
    spec:
      containers:
      - name: container1
        image: nginx
        command: ["/bin/bash", "-c", "while true; do echo welcome to DevOps class; sleep 5; done"]
        volumeMounts:
          - name: myvolume
            mountPath: "/opt/jenkins"
        ports:
        - containerPort: 80
      - name: container2
        image: nginx
        command: ["/bin/bash", "-c", "while true; do echo welcome to DevOps class; sleep 5; done"]
        volumeMounts:
          - name: myvolume
            mountPath: "/etc/docker"

      volumes:
      - name: myvolume
        persistentVolumeClaim:
          claimName: pvc-1


root@ip-10-0-0-23:~# kubectl create -f manifest.yaml
deployment.apps/mydepl created


Step9: Check whether the pod was created or not:
root@ip-10-0-0-23:~# kubectl get pod
NAME                      READY   STATUS    RESTARTS   AGE
mydepl-7b4979dff6-m559p   0/2     Pending   0          47s

To identify the error, describe the pod: it clearly shows persistentvolumeclaim "pvc-1" not found, because we actually named the PVC mypvc-1.
root@ip-10-0-0-23:~# kubectl describe pod mydepl-7b4979dff6-m559p

Events:
  Type     Reason            Age    From               Message
  ----     ------            ----   ----               -------
  Warning  FailedScheduling  2m21s  default-scheduler  0/1 nodes are available: persistentvolumeclaim "pvc-1" not found. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.

Step10: Change the PVC name; after changing the configuration it needs to be applied:
      volumes:
      - name: myvolume
        persistentVolumeClaim:
          claimName: mypvc-1
root@ip-10-0-0-23:~# kubectl apply -f manifest.yaml
Warning: resource deployments/mydepl is missing the kubectl.kubernetes.io/last-applied-configuration annotation which is required by kubectl apply. kubectl apply should only be used on resources created declaratively by either kubectl create --save-config or kubectl apply. The missing annotation will be patched automatically.
deployment.apps/mydepl configured

Now the pod is in Running state:
root@ip-10-0-0-23:~# kubectl get pod
NAME                      READY   STATUS    RESTARTS   AGE
mydepl-587dc99798-9nr6z   2/2     Running   0          16s

root@ip-10-0-0-23:~# kubectl get pv


root@ip-10-0-0-23:~# kubectl get pvc
NAME      STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   VOLUMEATTRIBUTESCLASS   AGE
mypvc-1   Bound    pvc-8998df85-79c6-4df4-9241-3987bebe8519   3Gi        RWO            standard       <unset>                 30m

Note: in this run the VOLUME column shows an auto-generated volume (pvc-8998…) with STORAGECLASS standard, which means the claim was actually satisfied by dynamic provisioning from the default StorageClass rather than by our static pv-1/pv-2.

Step11: Go inside the pod, create some files, and then delete the pod:
root@ip-10-0-0-23:~# kubectl exec -it mydepl-587dc99798-9nr6z -c container1 -- bash
root@mydepl-587dc99798-9nr6z:/# cd /opt/jenkins/
root@mydepl-587dc99798-9nr6z:/opt/jenkins# ls

Created some dummy files:
root@mydepl-587dc99798-9nr6z:/opt/jenkins# touch app.py docker manifest.yaml pv
root@mydepl-587dc99798-9nr6z:/opt/jenkins# ls
app.py  docker  manifest.yaml  pv

Step12: Let's delete the pod; a new pod is created automatically.

root@ip-10-0-0-23:~# kubectl get po
NAME                      READY   STATUS    RESTARTS   AGE
mydepl-587dc99798-9nr6z   2/2     Running   0          8m58s
root@ip-10-0-0-23:~# kubectl delete pod  mydepl-587dc99798-9nr6z
pod "mydepl-587dc99798-9nr6z" deleted
root@ip-10-0-0-23:~# kubectl get po
NAME                      READY   STATUS    RESTARTS   AGE
mydepl-587dc99798-smm8q   2/2     Running   0          58s

-- See here: I went into container2, and our files still exist

root@ip-10-0-0-23:~# kubectl exec -it mydepl-587dc99798-smm8q -c container2 -- bash
root@mydepl-587dc99798-smm8q:/# cd /etc/docker
root@mydepl-587dc99798-smm8q:/etc/docker# ls
app.py  docker  manifest.yaml  pv


EBS keeps the data safe and secure even if the pod, node, or entire cluster is deleted.

Step13: Make sure that after deleting the cluster you also delete the EBS volume (it keeps billing otherwise). The PV's persistentVolumeReclaimPolicy also matters here: Retain keeps the backing volume after the claim is deleted, while Delete removes it.

root@ip-10-0-0-23:~# kubectl get all
NAME                          READY   STATUS    RESTARTS   AGE
pod/mydepl-587dc99798-smm8q   2/2     Running   0          7m20s

NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
service/kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP   4h24m

NAME                     READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/mydepl   1/1     1            1           24m

NAME                                DESIRED   CURRENT   READY   AGE
replicaset.apps/mydepl-587dc99798   1         1         1       16m
replicaset.apps/mydepl-7b4979dff6   0         0         0       24m
root@ip-10-0-0-23:~# kubectl delete deploy mydepl
deployment.apps "mydepl" deleted


Understanding the access modes:

Access modes determine how many pods (or nodes) can access a persistent volume (PV) or a persistent volume claim (PVC) simultaneously. There are several access modes that can be set on a PV or PVC, including:

ReadWriteOnce: This access mode allows the volume to be mounted read-write by a single node (in practice, by the pods on that one node). This is the most common access mode, and it is appropriate for use cases where a single pod needs exclusive access to the storage.

ReadOnlyMany: This access mode allows multiple pods to read from the PV or PVC, but does not allow any of them to write to it. This access mode is useful for cases where many pods need to read the same data, such as when serving a read-only database.

ReadWriteMany: This access mode allows multiple pods to read and write to the PV or PVC simultaneously. This mode is appropriate for use cases where many pods need to read and write the same data, such as a distributed file system.

ReadWriteOncePod: This access mode allows the volume to be mounted read-write by only a single pod across the whole cluster (available since Kubernetes v1.22). It is useful when you must guarantee that exactly one pod can access the volume, such as a database that cannot tolerate two writers.
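
Access modes are requested in the claim (and offered by the PV). For example, a claim meant to be shared read-write by many pods would request ReadWriteMany; note the underlying storage must support it (NFS/EFS do, plain EBS does not). A minimal sketch with an illustrative name:

---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-claim             # illustrative name
spec:
  accessModes:
    - ReadWriteMany              # many pods may read and write simultaneously
  resources:
    requests:
      storage: 1Gi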



--Thanks