Monday, August 18, 2025

Kubernetes part6

Class 90: Kubernetes Part 6, August 18

DaemonSets and StatefulSets:

All of these workload controllers (ReplicationController, ReplicaSet, Deployment, DaemonSet, StatefulSet) run in the same cluster, and their manifest files follow the same basic structure.

DaemonSet:

  • It runs a copy of a pod on each worker node.
  • It is used for tasks such as monitoring and log collection that need to run on every node of the cluster.
  • A DaemonSet ensures that all eligible nodes run a copy of a pod.
  • When you create a DaemonSet, Kubernetes schedules one copy of the pod on each node.
  • If a new node is added, the pod is automatically created on the new node.
  • If a node is removed from the cluster, the pod on that node is also removed.
  • Deleting a DaemonSet will clean up the pods it created.
  • A DaemonSet does not have a replicas field, because by default it creates exactly one pod per node.
  • DaemonSets are not used for application deployments in real-time projects; they only schedule one pod per node.

A cluster is a group of servers: master (control plane) and worker nodes.

DaemonSets run a single pod per instance; whenever a new instance is created, a pod is created on it automatically. On each worker node we install one agent pod; the agent sends data to Prometheus, which feeds Grafana for dashboard visualization.

DaemonSets are mainly used for monitoring and logging purposes, not for application deployment.
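To illustrate the log-collection use case: a log-collection DaemonSet typically mounts the node's log directory into the agent pod with a hostPath volume, so one agent per node reads that node's logs. This is a minimal sketch, not from the class; the name log-agent, the fluent/fluentd image, and the /var/log path are illustrative assumptions:

```yaml
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: log-agent              # hypothetical name
spec:
  selector:
    matchLabels:
      app: log-agent
  template:
    metadata:
      labels:
        app: log-agent
    spec:
      containers:
      - name: agent
        image: fluent/fluentd  # illustrative log-collection agent
        volumeMounts:
        - name: varlog
          mountPath: /var/log  # the node's logs, visible inside the pod
      volumes:
      - name: varlog
        hostPath:
          path: /var/log       # node directory shared into every agent pod
```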

Practical:

Step1:

[root@ip-10-0-0-28 ~]# aws s3 ls

2025-08-16 16:58:08 ccitpublicbucket16

[root@ip-10-0-0-28 ~]#

[root@ip-10-0-0-28 ~]# export KOPS_STATE_STORE=s3://ccitpublicbucket16

[root@ip-10-0-0-28 ~]# kops create cluster --name kopsclstr.k8s.local --zones eu-west-2b,eu-west-2a --master-count 1 --master-size c7i-flex.large --master-volume-size 20 --node-count 2 --node-size t3.micro --node-volume-size=15 --image=ami-044415bb13eee2391

Cluster configuration has been created.

Suggestions:
 * list clusters with: kops get cluster
 * edit this cluster with: kops edit cluster kopsclstr.k8s.local
 * edit your node instance group: kops edit ig --name=kopsclstr.k8s.local nodes-eu-west-2b
 * edit your control-plane instance group: kops edit ig --name=kopsclstr.k8s.local control-plane-eu-west-2b

Finally configure your cluster with: kops update cluster --name kopsclstr.k8s.local --yes --admin

[root@ip-10-0-0-28 ~]# kops update cluster --name kopsclstr.k8s.local --yes --admin

Step2: 1 control-plane/master node and 2 worker nodes were created successfully.

Step3:
[root@ip-10-0-0-28 ~]# kubectl get node
NAME                  STATUS   ROLES           AGE    VERSION
i-000360c9c71860aa1   Ready    node            63s    v1.32.4
i-02e7e492877aa8af4   Ready    control-plane   3m9s   v1.32.4
i-0d56c24033208cb64   Ready    node            85s    v1.32.4
Step4: Create the pods using a manifest file
[root@ip-10-0-0-28 ~]# cat manifest.yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: mydeamon
spec:
  selector:
    matchLabels:
      app: monitor
  template:    # Moved to top-level under spec
    metadata:
      labels:
        app: monitor  # Must match selector
    spec:
      containers:
      - name: container1
        image: nginx
        ports:
        - containerPort: 80

[root@ip-10-0-0-28 ~]# kubectl create -f manifest.yaml
daemonset.apps/mydeamon created

See here, 2 pods were created because we have two worker nodes.
[root@ip-10-0-0-28 ~]# kubectl get pods
NAME             READY   STATUS    RESTARTS   AGE
mydeamon-cnp7z   1/1     Running   0          89s
mydeamon-nkr5t   1/1     Running   0          89s

[root@ip-10-0-0-28 ~]# kubectl get pods -w
NAME             READY   STATUS              RESTARTS   AGE
mydeamon-cnp7z   0/1     ContainerCreating   0          4s
mydeamon-nkr5t   0/1     ContainerCreating   0          4s
mydeamon-cnp7z   1/1     Running             0          8s
mydeamon-nkr5t   1/1     Running             0          8s

Step5: If you want to increase the worker nodes, you need to edit the cluster.
[root@ip-10-0-0-28 ~]# kops get cluster
NAME                    CLOUD   ZONES
kopsclstr.k8s.local     aws     eu-west-2a,eu-west-2b

Suggestions:
 * list clusters with: kops get cluster

This command changes the worker node instance group:
 * edit this cluster with: kops edit cluster kopsclstr.k8s.local
 * edit your node instance group: kops edit ig --name=kopsclstr.k8s.local nodes-eu-west-2b
This command changes the control-plane (master) instance group:
 * edit your control-plane instance group: kops edit ig --name=kopsclstr.k8s.local control-plane-eu-west-2b

Step6: Edit the worker node instance group. kops creates one instance group per zone; nodes-eu-west-2b originally had 1 node, so after changing minSize and maxSize to 2 the cluster will have 3 worker nodes in total.

[root@ip-10-0-0-28 ~]# kops edit ig --name=kopsclstr.k8s.local nodes-eu-west-2b

Changed minSize and maxSize from 1 to 2:
  machineType: t3.micro
  maxSize: 2
  minSize: 2

After the change, update the cluster using the command below; once it runs, the instance count will increase.
[root@ip-10-0-0-28 ~]# kops update cluster --name=kopsclstr.k8s.local --yes --admin

Step7: Three worker nodes 
[root@ip-10-0-0-28 ~]# kubectl get node
NAME                  STATUS   ROLES           AGE   VERSION
i-000360c9c71860aa1   Ready    node            15m   v1.32.4
i-02e7e492877aa8af4   Ready    control-plane   17m   v1.32.4
i-05b2188bac81487ec   Ready    node            40s   v1.32.4
i-0d56c24033208cb64   Ready    node            15m   v1.32.4

See here, 1 more pod was created automatically along with the new worker node.
[root@ip-10-0-0-28 ~]# kubectl get pods
NAME             READY   STATUS    RESTARTS   AGE
mydeamon-2tfcp   1/1     Running   0          16m
mydeamon-dzcnl   1/1     Running   0          2m31s
mydeamon-m8p4c   1/1     Running   0          16m

See below: each pod is attached to a different node.
[root@ip-10-0-0-28 ~]# kubectl get pods -o wide
NAME             READY   STATUS    RESTARTS   AGE     IP             NODE                  NOMINATED NODE   READINESS GATES
mydeamon-2tfcp   1/1     Running   0          19m     100.96.2.76    i-000360c9c71860aa1   <none>           <none>
mydeamon-dzcnl   1/1     Running   0          5m37s   100.96.3.238   i-05b2188bac81487ec   <none>           <none>
mydeamon-m8p4c   1/1     Running   0          19m     100.96.1.112   i-0d56c24033208cb64   <none>           <none>
[root@ip-10-0-0-28 ~]#
Note: pods are not created on the control plane; the DaemonSet creates pods only on the worker nodes.
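The control-plane node is skipped because it carries the node-role.kubernetes.io/control-plane:NoSchedule taint. If an agent genuinely had to run there too (common for monitoring DaemonSets), a toleration could be added to the pod template. A hedged sketch of just that part of the manifest above:

```yaml
    spec:
      tolerations:               # allow scheduling onto the tainted control-plane node
      - key: node-role.kubernetes.io/control-plane
        operator: Exists
        effect: NoSchedule
      containers:
      - name: container1
        image: nginx
```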

Step8: Reduce the worker nodes; the pods are automatically reduced as well.
[root@ip-10-0-0-28 ~]# kops edit ig --name=kopsclstr.k8s.local nodes-eu-west-2b
 maxSize: 1
 minSize: 1
[root@ip-10-0-0-28 ~]# kops update cluster --name=kopsclstr.k8s.local --yes --admin

[root@ip-10-0-0-28 ~]# kubectl get pods
NAME             READY   STATUS    RESTARTS   AGE
mydeamon-dzcnl   1/1     Running   0          10m
mydeamon-m8p4c   1/1     Running   0          23m

[root@ip-10-0-0-28 ~]# kubectl get nodes
NAME                  STATUS   ROLES           AGE   VERSION
i-02e7e492877aa8af4   Ready    control-plane   28m   v1.32.4
i-05b2188bac81487ec   Ready    node            12m   v1.32.4
i-0d56c24033208cb64   Ready    node            27m   v1.32.4

StatefulSets:
Deployment vs. StatefulSet: a Deployment is used for stateless applications, while a StatefulSet is used for databases (stateful applications).
For example: last night I had a call with mobile customer care. I explained my issue and requested a resolution. The agent asked me to stay on the call for some time, but meanwhile the call dropped due to network issues.

I called the same number again, but this time another person picked up. He knew nothing about the conversation I'd had with the first person, so I had to repeat the details of my problem. It was an inconvenience, but it still worked out.

A Kubernetes Deployment suits this scenario perfectly. Let's assume you deployed your stateless application as a Deployment with 10 pod replicas running on multiple worker nodes.

If pod 1 is terminated on node 1, the replacement pod can be scheduled on node 2, node 1, or anywhere else in the cluster.

This is possible because the stateless application's pod 1 hasn't stored any data on worker node 1, so it can easily be replaced by a new pod running on a different worker node. If a pod does not store data, we call it STATELESS.

Stateless applications:

  • It cannot store data permanently.
  • The word STATELESS means there is no past data.
  • It relies on non-persistent data: the data is removed when the pod, node, or cluster is stopped.
  • Non-persistent storage is mainly used for logging info (e.g. system logs, container logs).
  • To avoid this problem for data that must survive, we use stateful applications.
  • A stateless application can be deployed as a set of identical replicas, and each replica can handle incoming requests independently, without coordinating with the other replicas.

For example: a worker node has two pods, one for the application and one for the database. If the database pod is deleted, the ReplicaSet is not aware that it must recreate it on the same node; it may recreate it on another worker node, and the past data is lost. That is why we do not use stateless Deployments for databases.

STATEFUL application:
    Let's say you want to order some products on Amazon and you added some items to your shopping cart. After adding them, you remembered you had forgotten to switch off the stove in the kitchen, and you closed the web browser in a hurry. When you log in again after some time, the items should still be there in the cart.

Stateful applications are applications that store data and keep track of it.

Example of stateful applications:

All RDS databases (MySQL, SQL Server, etc.)

Elasticsearch, Kafka, MongoDB, Redis, etc.

Any application that stores data

To get the code for stateful application use this link:

https://github.com/devops0014/ksg-stateful-set-application.git

In a stateless application, scaling simply adds or removes identical pods. On the stateful side, whenever you scale up, the new pod is created in order, based on the previous pod.


StatefulSet: the first pod is the primary pod and the remaining pods are secondary pods. Only the primary pod has read/write permission; the remaining pods have read-only access.

Scale down: a Deployment deletes pods in random order, whereas a StatefulSet always deletes the most recently created pod first (last in, first out).

Deployment:

Creates pods with random IDs

Scales down pods in random order

Pods are stateless

Used for application deployment

StatefulSet:

Creates pods with sticky (ordinal) IDs

Scales down pods in reverse order

Pods are stateful

Used for database deployment

Step1:
[root@ip-10-0-0-28 ~]# vi statefulset.yaml
[root@ip-10-0-0-28 ~]# cat statefulset.yaml
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mystate
spec:
  replicas: 4
  selector:
    matchLabels:
      app: zamoto
  template:    # Moved to top-level under spec
    metadata:
      labels:
        app: zamoto  # Must match selector
    spec:
      containers:
      - name: container2
        image: shaikmustafa/dm
        ports:
        - containerPort: 80
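In real deployments a StatefulSet is usually paired with a headless Service (clusterIP: None), referenced via the StatefulSet's serviceName field, so each pod gets a stable DNS name such as mystate-0.myheadless. A sketch to go with the manifest above; the Service name myheadless is an assumption, not from the class:

```yaml
---
apiVersion: v1
kind: Service
metadata:
  name: myheadless   # hypothetical name; the StatefulSet would set serviceName: myheadless
spec:
  clusterIP: None    # headless: DNS resolves directly to the individual pod IPs
  selector:
    app: zamoto      # matches the StatefulSet's pod labels
  ports:
    - port: 80
```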

As you can see here, the pods are created in order: the second pod is created only after the first one is Running.
[root@ip-10-0-0-28 ~]# kubectl create -f statefulset.yaml
statefulset.apps/mystate created
[root@ip-10-0-0-28 ~]# kubectl get pods -w
NAME             READY   STATUS              RESTARTS   AGE
mydeamon-dzcnl   1/1     Running             0          76m
mydeamon-m8p4c   1/1     Running             0          90m
mystate-0        1/1     Running             0          8s
mystate-1        0/1     ContainerCreating   0          1s
mystate-1        1/1     Running             0          7s
mystate-2        0/1     Pending             0          0s
mystate-2        0/1     Pending             0          0s
mystate-2        0/1     ContainerCreating   0          0s
mystate-2        1/1     Running             0          3s
mystate-3        0/1     Pending             0          0s
mystate-3        0/1     Pending             0          0s
mystate-3        0/1     ContainerCreating   0          0s
mystate-3        1/1     Running             0          3s

Four pods were created.

[root@ip-10-0-0-28 ~]# kubectl get pods
NAME             READY   STATUS    RESTARTS   AGE
mydeamon-dzcnl   1/1     Running   0          78m
mydeamon-m8p4c   1/1     Running   0          91m
mystate-0        1/1     Running   0          102s
mystate-1        1/1     Running   0          95s
mystate-2        1/1     Running   0          88s
mystate-3        1/1     Running   0          85s

Step2: Scale up; see that mystate-4 is created.
[root@ip-10-0-0-28 ~]# kubectl scale statefulset mystate  --replicas=5
statefulset.apps/mystate scaled

[root@ip-10-0-0-28 ~]# kubectl get pods -w
NAME             READY   STATUS              RESTARTS   AGE
mydeamon-dzcnl   1/1     Running             0          86m
mydeamon-m8p4c   1/1     Running             0          99m
mystate-0        1/1     Running             0          9m28s
mystate-1        1/1     Running             0          9m21s
mystate-2        1/1     Running             0          9m14s
mystate-3        1/1     Running             0          9m11s
mystate-4        0/1     ContainerCreating   0          2s
mystate-4        1/1     Running             0          3s

[root@ip-10-0-0-28 ~]# kubectl get pods
NAME             READY   STATUS    RESTARTS   AGE
mydeamon-dzcnl   1/1     Running   0          87m
mydeamon-m8p4c   1/1     Running   0          100m
mystate-0        1/1     Running   0          10m
mystate-1        1/1     Running   0          10m
mystate-2        1/1     Running   0          10m
mystate-3        1/1     Running   0          10m
mystate-4        1/1     Running   0          63s
Step3: Scale down; see here that the last three pods were terminated in reverse order, and only mystate-0 and mystate-1 remain.

[root@ip-10-0-0-28 ~]# kubectl scale statefulset mystate  --replicas=2
statefulset.apps/mystate scaled
[root@ip-10-0-0-28 ~]# kubectl get pods -w
NAME             READY   STATUS        RESTARTS   AGE
mydeamon-dzcnl   1/1     Running       0          87m
mydeamon-m8p4c   1/1     Running       0          101m
mystate-0        1/1     Running       0          11m
mystate-1        1/1     Running       0          11m
mystate-2        1/1     Terminating   0          11m
mystate-2        0/1     Completed     0          11m
mystate-2        0/1     Completed     0          11m
mystate-2        0/1     Completed     0          11m
[root@ip-10-0-0-28 ~]# kubectl get pods
NAME             READY   STATUS    RESTARTS   AGE
mydeamon-dzcnl   1/1     Running   0          89m
mydeamon-m8p4c   1/1     Running   0          102m
mystate-0        1/1     Running   0          12m
mystate-1        1/1     Running   0          12m

Step4: All the resources we have created so far are listed below.

pods:

[root@ip-10-0-0-28 ~]# kubectl get all
NAME                 READY   STATUS    RESTARTS   AGE
pod/mydeamon-dzcnl   1/1     Running   0          92m
pod/mydeamon-m8p4c   1/1     Running   0          106m
pod/mystate-0        1/1     Running   0          15m
pod/mystate-1        1/1     Running   0          15m

Services:

NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
service/kubernetes   ClusterIP   100.64.0.1   <none>        443/TCP   110m

DaemonSet:

NAME                      DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
daemonset.apps/mydeamon   2         2         2       2            2           <none>          106m

NAME                       READY   AGE
statefulset.apps/mystate   2/2     15m


Step5: After deleting the DaemonSet and the StatefulSet, only the default kubernetes service remains.
[root@ip-10-0-0-28 ~]# kubectl delete ds mydeamon
daemonset.apps "mydeamon" deleted
[root@ip-10-0-0-28 ~]# kubectl delete statefulset mystate
statefulset.apps "mystate" deleted
[root@ip-10-0-0-28 ~]# kubectl get all
NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
service/kubernetes   ClusterIP   100.64.0.1   <none>        443/TCP   113m


Namespaces: a namespace isolates resources within a cluster, so you can display only the pods/services/deployments of a particular group instead of showing everything. In other words, it separates one group of resources from another.

In production there are typically at least two clusters:
Pre-prod cluster: dev/test/UAT
Prod cluster

So far we have created all resources in the default namespace. If you want, you can specify which namespace your resources are created in.

The command below shows which namespaces exist (ns is the short form of namespace):

[root@ip-10-0-0-28 ~]# kubectl get namespace
NAME              STATUS   AGE
default           Active   119m
kube-node-lease   Active   119m
kube-public       Active   119m
kube-system       Active   119m
[root@ip-10-0-0-28 ~]# kubectl get ns
NAME              STATUS   AGE
default           Active   119m
kube-node-lease   Active   119m
kube-public       Active   119m
kube-system       Active   119m

Here, if you run kubectl get po while you are in the default namespace, it will show only the default-namespace pods; pods in the dev namespace will not show.



Step1:
If you create a pod without specifying a namespace, it is created in the default namespace. See below; create one dummy pod:

[root@ip-10-0-0-28 ~]# kubectl run pod-1 --image=nginx
pod/pod-1 created
[root@ip-10-0-0-28 ~]# kubectl get pod
NAME    READY   STATUS    RESTARTS   AGE
pod-1   1/1     Running   0          21s
[root@ip-10-0-0-28 ~]# kubectl describe pod-1
error: the server doesn't have a resource type "pod-1"
[root@ip-10-0-0-28 ~]# kubectl describe pod  pod-1
Name:             pod-1
Namespace:        default

kube-public: whatever is placed in it can be accessed by everyone in the cluster.

default:

As the name suggests, whenever we create any Kubernetes object such as a pod, ReplicaSet, etc. without specifying a namespace, it is created in the default namespace.

Kube-system (whet every you have installed sperate it will use this namespace for ex: metrics)

This namespace contains the kubernetes components such as kube-controller-manager,kube-scheduler,kube-dns or other controllers.

kube-public

This namespace is used to share non-sensitive information that can be viewed by any member of the Kubernetes cluster.


[root@ip-10-0-0-28 ~]# kubectl get all -n  kube-system
(it shows the scheduler and other system component information)
[root@ip-10-0-0-28 ~]# kubectl get all -n  kube-node-lease
No resources found in kube-node-lease namespace.


Step2: Create your own namespace.

[root@ip-10-0-0-28 ~]# kubectl create ns dev
namespace/dev created
[root@ip-10-0-0-28 ~]# kubectl get ns
NAME              STATUS   AGE
default           Active   141m
dev               Active   11s
kube-node-lease   Active   141m
kube-public       Active   141m
kube-system       Active   141m
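The same namespace could also be created declaratively from a manifest instead of with kubectl create ns; a minimal sketch (the name dev matches the command above):

```yaml
---
apiVersion: v1
kind: Namespace
metadata:
  name: dev
```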

-- No resources exist yet in the dev namespace
[root@ip-10-0-0-28 ~]# kubectl get all -n dev
No resources found in dev namespace.

Create one pod in the dev namespace:

[root@ip-10-0-0-28 ~]# kubectl run pod-2 --image=nginx -n dev
pod/pod-2 created
[root@ip-10-0-0-28 ~]# kubectl get all -n  dev
NAME        READY   STATUS    RESTARTS   AGE
pod/pod-2   1/1     Running   0          24s

See below: in the default namespace the dev pod is not showing, because it was created in the specified dev namespace.

[root@ip-10-0-0-28 ~]# kubectl get pod
NAME    READY   STATUS    RESTARTS   AGE
pod-1   1/1     Running   0          17m


Step3: Now deploy into a specific namespace.

[root@ip-10-0-0-28 ~]# cat statefulset.yaml
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myspace
  namespace: dev
spec:
  replicas: 4
  selector:
    matchLabels:
      app: zamoto
  template:    # Moved to top-level under spec
    metadata:
      labels:
        app: zamoto  # Must match selector
    spec:
      containers:
      - name: container2
        image: shaikmustafa/dm
        ports:
        - containerPort: 80


[root@ip-10-0-0-28 ~]# kubectl create -f statefulset.yaml
deployment.apps/myspace created

See here: the pods are not showing in the default namespace.

[root@ip-10-0-0-28 ~]# kubectl get po
NAME    READY   STATUS    RESTARTS   AGE
pod-1   1/1     Running   0          25m

In the dev namespace the 4 replica pods are showing.

The Deployment was created in dev:
[root@ip-10-0-0-28 ~]# kubectl get deployment -n dev
NAME      READY   UP-TO-DATE   AVAILABLE   AGE
myspace   4/4     4            4           2m36s

[root@ip-10-0-0-28 ~]# kubectl get po -n dev
NAME                       READY   STATUS    RESTARTS   AGE
myspace-7bdfc8c57d-5qvdh   1/1     Running   0          25s
myspace-7bdfc8c57d-bvjtt   1/1     Running   0          25s
myspace-7bdfc8c57d-dpjsn   1/1     Running   0          25s
myspace-7bdfc8c57d-vh796   1/1     Running   0          25s
pod-2                      1/1     Running   0          8m59s

You can also specify the namespace on the command line:

[root@ip-10-0-0-28 ~]# kubectl create -f statefulset.yaml -n dev 

Delete the deployment in the dev namespace:

[root@ip-10-0-0-28 ~]# kubectl delete deployment myspace -n dev
deployment.apps "myspace" deleted

-- no deployment exists in the default namespace
[root@ip-10-0-0-28 ~]# kubectl get deployment
No resources found in default namespace.
-- the replica pods were also deleted automatically
[root@ip-10-0-0-28 ~]# kubectl get pods -n dev
NAME    READY   STATUS    RESTARTS   AGE
pod-2   1/1     Running   0          15m

Step4:

A Service can also be created in a similar way.

[root@ip-10-0-0-28 ~]# vi service.yaml
[root@ip-10-0-0-28 ~]# cat service.yaml
---
apiVersion: v1
kind: Service
metadata:
  name: myservice
spec:
  type: LoadBalancer
  selector:
    app: zamoto
  ports:
    - port: 80
      targetPort: 80
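Instead of passing -n dev on every command, the target namespace can also be fixed in the manifest itself. A sketch of the metadata section of the Service above with that change:

```yaml
metadata:
  name: myservice
  namespace: dev   # always created in dev, regardless of kubectl's current namespace
```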

Step5: See here, the Service is created in the dev namespace.

[root@ip-10-0-0-28 ~]# kubectl create -f service.yaml -n dev
service/myservice created
[root@ip-10-0-0-28 ~]# kubectl get svc -n dev
NAME        TYPE           CLUSTER-IP      EXTERNAL-IP                                                             PORT(S)        AGE
myservice   LoadBalancer   100.67.11.194   ac994ded082174dc5ae84c9e924446b1-69473691.eu-west-2.elb.amazonaws.com   80:30364/TCP   17s



Step6: If you want to delete a namespace, it is good practice to first delete all the resources in it (deleting the namespace itself also removes everything inside it).

List the resources in the dev namespace:

[root@ip-10-0-0-28 ~]# kubectl get all -n dev
NAME                           READY   STATUS    RESTARTS   AGE
pod/myspace-7bdfc8c57d-b8swm   1/1     Running   0          19s
pod/myspace-7bdfc8c57d-brfd8   1/1     Running   0          19s
pod/myspace-7bdfc8c57d-j4tzx   1/1     Running   0          19s
pod/myspace-7bdfc8c57d-swf7j   1/1     Running   0          19s
pod/pod-2                      1/1     Running   0          25m

NAME                TYPE           CLUSTER-IP      EXTERNAL-IP                                                             PORT(S)        AGE
service/myservice   LoadBalancer   100.67.11.194   ac994ded082174dc5ae84c9e924446b1-69473691.eu-west-2.elb.amazonaws.com   80:30364/TCP   4m37s

NAME                      READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/myspace   4/4     4            4           19s

NAME                                 DESIRED   CURRENT   READY   AGE
replicaset.apps/myspace-7bdfc8c57d   4         4         4       19s

[root@ip-10-0-0-28 ~]# kubectl delete deployment myspace -n dev
deployment.apps "myspace" deleted

[root@ip-10-0-0-28 ~]# kubectl delete svc --all  -n dev
service "myservice" deleted

[root@ip-10-0-0-28 ~]# kubectl delete pod --all  -n dev
pod "pod-2" deleted

[root@ip-10-0-0-28 ~]# kubectl get all -n dev
No resources found in dev namespace.

Delete the namespace:
[root@ip-10-0-0-28 ~]# kubectl delete ns dev
namespace "dev" deleted



Switching from one namespace to another:
[root@ip-10-0-0-28 ~]# kubectl config set-context  --current  --namespace=kube-public
Context "kopsclstr.k8s.local" modified.
[root@ip-10-0-0-28 ~]# kubectl get po
No resources found in kube-public namespace.

[root@ip-10-0-0-28 ~]# kubectl config view
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: DATA+OMITTED
    server: https://api-kopsclstr-k8s-local-0kba4s-7dc9e9e90042d909.elb.eu-west-2.amazonaws.com
    tls-server-name: api.internal.kopsclstr.k8s.local
  name: kopsclstr.k8s.local
contexts:
- context:
    cluster: kopsclstr.k8s.local
    namespace: kube-public
    user: kopsclstr.k8s.local
  name: kopsclstr.k8s.local
current-context: kopsclstr.k8s.local
kind: Config
preferences: {}
users:
- name: kopsclstr.k8s.local
  user:
    client-certificate-data: DATA+OMITTED
    client-key-data: DATA+OMITTED



[root@ip-10-0-0-28 ~]# kubectl config set-context  --current  --namespace=default
Context "kopsclstr.k8s.local" modified.
[root@ip-10-0-0-28 ~]# kubectl get po
NAME    READY   STATUS    RESTARTS   AGE
pod-1   1/1     Running   0          52m
[root@ip-10-0-0-28 ~]# kubectl config view
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: DATA+OMITTED
    server: https://api-kopsclstr-k8s-local-0kba4s-7dc9e9e90042d909.elb.eu-west-2.amazonaws.com
    tls-server-name: api.internal.kopsclstr.k8s.local
  name: kopsclstr.k8s.local
contexts:
- context:
    cluster: kopsclstr.k8s.local
    namespace: default
    user: kopsclstr.k8s.local
  name: kopsclstr.k8s.local
current-context: kopsclstr.k8s.local
kind: Config
preferences: {}
users:
- name: kopsclstr.k8s.local
  user:
    client-certificate-data: DATA+OMITTED
    client-key-data: DATA+OMITTED

--Thanks 
