Tuesday, August 5, 2025

Docker part 4

High Availability: more than one server

Why: if one server goes down or gets deleted, another server can still serve the application.

(Docker Swarm runs copies of one container across multiple servers and manages the worker nodes.)
DOCKER SWARM:
  • It is an orchestration tool for containers.
  • Used to manage multiple containers across multiple servers.
  • Here we create a cluster (a group of servers).
  • In that cluster, we can run the same container on multiple servers.
  • There are two node roles: manager node and worker node.
  • The manager node creates containers and distributes them to the worker nodes.
  • The worker node's main purpose is to run and maintain the containers.
  • Without the Docker engine we cannot create the cluster.
  • Port: 2377 (cluster management)
  • A worker node joins the cluster by using a token.
  • The manager node generates the token.
In short: we create containers on the manager node and it distributes them to the worker nodes. A worker node needs the token to connect, and port 2377 must be open on all nodes for this to work.
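If a firewall is in place, Swarm's ports have to be opened on every node. A minimal sketch using ufw (adjust to whatever firewall you actually run; besides 2377, Swarm also uses 7946 for node communication and 4789 for overlay traffic):

```shell
# Open the ports Docker Swarm uses (run on every node)
sudo ufw allow 2377/tcp   # cluster management (manager <-> workers)
sudo ufw allow 7946/tcp   # node-to-node communication
sudo ufw allow 7946/udp
sudo ufw allow 4789/udp   # overlay network (VXLAN) traffic
```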

Step 1: Create three servers; Docker needs to be installed on all of them.

Step 2: Docker installation

root@dockerhost:~/docker# apt install docker.io -y
root@dockerhost:~/docker# docker --version
Docker version 27.5.1, build 27.5.1-0ubuntu3~24.04.2

Worker1:
root@ip-10-0-2-60:~# docker --version
Docker version 27.5.1, build 27.5.1-0ubuntu3~24.04.2
Worker2:
root@ip-10-0-2-62:~# docker --version
Docker version 27.5.1, build 27.5.1-0ubuntu3~24.04.2
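After installing, it is worth confirming the Docker daemon is enabled and running on each server. A quick check, assuming a systemd-based Ubuntu host like the ones above:

```shell
# Enable the Docker daemon at boot and start it now
sudo systemctl enable --now docker
# Verify it is active
sudo systemctl is-active docker
```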

Step 3: On the manager server run docker swarm init, copy the join token, and run it on every worker node. The manager will then manage all the worker nodes.

root@dockerhost:~/docker# docker swarm init
Swarm initialized: current node (8jt0pszkvuxlj24a0aatlmz8o) is now a manager.

To add a worker to this swarm, run the following command:

    docker swarm join --token SWMTKN-1-1b6ovsg5p0s5ceudu18596rj1yrcy3q82xeiilgovwwytzleiz-4x0nxqdzrp2yvzqv02qokiaba 10.0.2.61:2377

To add a manager to this swarm, run 'docker swarm join-token manager' and follow the instructions.

Worker node1 :

root@ip-10-0-2-60:~# docker swarm join --token SWMTKN-1-1b6ovsg5p0s5ceudu18596rj1yrcy3q82xeiilgovwwytzleiz-4x0nxqdzrp2yvzqv02qokiaba 10.0.2.61:2377
This node joined a swarm as a worker.

Worker node2 :

root@ip-10-0-2-62:~# docker swarm join --token SWMTKN-1-1b6ovsg5p0s5ceudu18596rj1yrcy3q82xeiilgovwwytzleiz-4x0nxqdzrp2yvzqv02qokiaba 10.0.2.61:2377
This node joined a swarm as a worker.

After running the join command on the worker nodes, they automatically show as Active on the manager node:

root@dockerhost:~/docker# docker node ls
ID                            HOSTNAME       STATUS    AVAILABILITY   MANAGER STATUS   ENGINE VERSION
8jt0pszkvuxlj24a0aatlmz8o *   dockerhost     Ready     Active         Leader           27.5.1
gpmbxxqougicd8i0aob627ykf     ip-10-0-2-60   Ready     Active                          27.5.1
8flw0oqw6xjzw9ot8y612s8k3     ip-10-0-2-62   Ready     Active                          27.5.1
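If the join token is lost, it does not need to be saved anywhere: the manager can reprint the full join command at any time.

```shell
# Run on the manager node to reprint the worker join command
docker swarm join-token worker
# There is a separate token for joining additional managers
docker swarm join-token manager
```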

Step 4: Create a normal (standalone) container on the manager:

root@dockerhost:~/docker# docker run -itd --name container1 -p 81:80 vakatisubbu/movies:latest
0388dee24ec4b6533b9a533f2c0c8bae13e3d4cee33122e90e5f3d6ccef2da9b



Step 5: Creating a container on the manager server does NOT automatically create it on the worker nodes.
For that you first need to create a service.

Note: individual containers are not going to replicate.
Containers are distributed only if we create a service.

SERVICE: it's a way of exposing and managing multiple containers.
In a service we can create copies (replicas) of containers.
Those container copies are distributed to all the nodes.

service -- > containers -- > distributed to nodes

Step 6: Remove the existing standalone container:
root@dockerhost:~/docker# docker rm  0388dee24ec4
0388dee24ec4
root@dockerhost:~/docker# docker ps -a
CONTAINER ID   IMAGE     COMMAND   CREATED   STATUS    PORTS     NAMES

Step 7: Create the service:

root@dockerhost:~/docker# docker service create --name movies --replicas 3 -p 81:80 vakatisubbu/movies:latest
zrs0y900dk4pf2cptavq7jv1m
overall progress: 3 out of 3 tasks
1/3: running   [==================================================>]
2/3: running   [==================================================>]
3/3: running   [==================================================>]
verify: Service zrs0y900dk4pf2cptavq7jv1m converged

The service's containers automatically appear on the worker nodes:
Worker node 1
root@ip-10-0-2-60:~# docker ps -a
CONTAINER ID   IMAGE                       COMMAND                  CREATED          STATUS          PORTS     NAMES
9ddfec38443f   vakatisubbu/movies:latest   "/usr/sbin/apachectl…"   25 seconds ago   Up 24 seconds             movies.1.zrt2pageb1yawgsu3ikd54q21

Worker node 2
root@ip-10-0-2-62:~# docker ps -a
CONTAINER ID   IMAGE                       COMMAND                  CREATED          STATUS          PORTS     NAMES
882a20decb78   vakatisubbu/movies:latest   "/usr/sbin/apachectl…"   54 seconds ago   Up 53 seconds             movies.2.n3se13cdty5jjtvt8dsw4s1xl

Step 8: The same application is now running on all the servers; this is high availability.
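Because Swarm's ingress routing mesh publishes the service port on every node, the app can be reached through any node's IP, not just the node running a container. A quick check using the example hosts above:

```shell
# The published port (81) answers on every swarm node,
# regardless of which node runs the container
curl -s http://10.0.2.61:81/ | head   # manager
curl -s http://10.0.2.60:81/ | head   # worker1
curl -s http://10.0.2.62:81/ | head   # worker2
```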



Step 9: Docker service commands (these work on the manager node only):

root@dockerhost:~/docker# docker service ls
ID             NAME               MODE         REPLICAS   IMAGE                       PORTS
vis3a3ilm480   gracious_taussig   replicated   0/1        movies:latest
zrs0y900dk4p   movies             replicated   3/3        vakatisubbu/movies:latest   *:81->80/tcp

root@dockerhost:~/docker# docker service inspect movies


root@dockerhost:~/docker# docker service ps  movies
ID             NAME       IMAGE                       NODE           DESIRED STATE   CURRENT STATE           ERROR     PORTS
zrt2pageb1ya   movies.1   vakatisubbu/movies:latest   ip-10-0-2-60   Running         Running 6 minutes ago
n3se13cdty5j   movies.2   vakatisubbu/movies:latest   ip-10-0-2-62   Running         Running 6 minutes ago
t3z17nhvws8c   movies.3   vakatisubbu/movies:latest   dockerhost     Running         Running 6 minutes ago

 For practice:

docker service scale movies=10 : scale out (increase the number of containers)
docker service scale movies=3 : scale in (decrease the number of containers)
docker service rollback movies : go back to the previous state
docker service logs movies : see the logs

If you remove a service, its containers are automatically removed on the worker nodes as well.
docker service rm movies : delete the service.
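A common follow-up exercise is a rolling update: Swarm replaces the replicas one at a time, and rollback undoes it. A sketch, where vakatisubbu/movies:v2 is a hypothetical newer tag:

```shell
# Roll out a new image version; Swarm updates replicas one at a time
docker service update --image vakatisubbu/movies:v2 movies
# Watch the tasks being replaced
docker service ps movies
# Undo the update if something goes wrong
docker service rollback movies
```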


Step 10: Recreate the service:

Manager node :

root@dockerhost:~/docker# docker service create --name movies --replicas 3 -p 81:80 vakatisubbu/movies:latest
khzwkp7bezihktyo844qe7gmo
overall progress: 3 out of 3 tasks
1/3: running   [==================================================>]
2/3: running   [==================================================>]
3/3: running   [==================================================>]
verify: Service khzwkp7bezihktyo844qe7gmo converged

Step 11: Swarm is self-healing. We will try to kill a container on worker2:

root@ip-10-0-2-62:~# docker ps -a
CONTAINER ID   IMAGE                       COMMAND                  CREATED              STATUS              PORTS     NAMES
62928f113c25   vakatisubbu/movies:latest   "/usr/sbin/apachectl…"   About a minute ago   Up About a minute             movies.3.9pr1bum3hjrfw9dpot3m5yq1k
root@ip-10-0-2-62:~# docker kill 62928f113c25
62928f113c25
root@ip-10-0-2-62:~# docker ps -a
CONTAINER ID   IMAGE                       COMMAND                  CREATED         STATUS                       PORTS     NAMES
6c2a01a5299e   vakatisubbu/movies:latest   "/usr/sbin/apachectl…"   5 seconds ago   Created                                movies.3.gszek2kff5ln97tbpsgr0la2m
62928f113c25   vakatisubbu/movies:latest   "/usr/sbin/apachectl…"   2 minutes ago   Exited (137) 5 seconds ago             movies.3.9pr1bum3hjrfw9dpot3m5yq1k

Note: the application is still working. Even when a container is killed, the manager immediately creates a replacement. That is self-healing.



But in real time we mostly use Kubernetes; Docker Swarm is not used much.

Step 12: More commands for docker services and nodes:

root@dockerhost:~# docker service ls
ID             NAME      MODE         REPLICAS   IMAGE                       PORTS
khzwkp7bezih   movies    replicated   3/3        vakatisubbu/movies:latest   *:81->80/tcp

Information about all the nodes:

root@dockerhost:~# docker node ls
ID                            HOSTNAME       STATUS    AVAILABILITY   MANAGER STATUS   ENGINE VERSION
8jt0pszkvuxlj24a0aatlmz8o *   dockerhost     Ready     Active         Leader           27.5.1
gpmbxxqougicd8i0aob627ykf     ip-10-0-2-60   Ready     Active                          27.5.1
8flw0oqw6xjzw9ot8y612s8k3     ip-10-0-2-62   Ready     Active                          27.5.1

Information about a particular node:

root@dockerhost:~# docker inspect gpmbxxqougicd8i0aob627ykf
[
    {
        "ID": "gpmbxxqougicd8i0aob627ykf",
        "Version": {


Step 13: To remove a node, first bring the worker node down.

Run this command on worker1:
root@ip-10-0-2-60:~# docker swarm leave
Node left the swarm.

On the manager, check the node status; the node now shows Down:

root@dockerhost:~# docker node ls
ID                            HOSTNAME       STATUS    AVAILABILITY   MANAGER STATUS   ENGINE VERSION
8jt0pszkvuxlj24a0aatlmz8o *   dockerhost     Ready     Active         Leader           27.5.1
gpmbxxqougicd8i0aob627ykf     ip-10-0-2-60   Down      Active                          27.5.1
8flw0oqw6xjzw9ot8y612s8k3     ip-10-0-2-62   Ready     Active                          27.5.1


Once it is Down, remove the node with the rm command:

root@dockerhost:~# docker node rm gpmbxxqougicd8i0aob627ykf
gpmbxxqougicd8i0aob627ykf

root@dockerhost:~# docker node ls
ID                            HOSTNAME       STATUS    AVAILABILITY   MANAGER STATUS   ENGINE VERSION
8jt0pszkvuxlj24a0aatlmz8o *   dockerhost     Ready     Active         Leader           27.5.1
8flw0oqw6xjzw9ot8y612s8k3     ip-10-0-2-62   Ready     Active                          27.5.1

Step 14: To re-add the node, just run the join-token command again on the worker node; it will be added automatically.

Worker node 
root@ip-10-0-2-60:~# docker swarm join --token SWMTKN-1-1b6ovsg5p0s5ceudu18596rj1yrcy3q82xeiilgovwwytzleiz-4x0nxqdzrp2yvzqv02qokiaba 10.0.2.61:2377
This node joined a swarm as a worker.

On the manager node, the worker shows up again (with a new node ID):
root@dockerhost:~# docker node ls
ID                            HOSTNAME       STATUS    AVAILABILITY   MANAGER STATUS   ENGINE VERSION
8jt0pszkvuxlj24a0aatlmz8o *   dockerhost     Ready     Active         Leader           27.5.1
fbpsc3hjqsh3sntv9g17c88jq     ip-10-0-2-60   Ready     Active                          27.5.1
8flw0oqw6xjzw9ot8y612s8k3     ip-10-0-2-62   Ready     Active                          27.5.1
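For temporary maintenance it is often better to drain a node than to remove it: Swarm moves that node's tasks elsewhere and can schedule them back later. A sketch using the example hostname above:

```shell
# Stop scheduling tasks on a node and move its current tasks elsewhere
docker node update --availability drain ip-10-0-2-60
# Bring the node back into service later
docker node update --availability active ip-10-0-2-60
```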

Purpose: communication between multiple servers. In real time Kubernetes is used instead,
since Docker Swarm has no autoscaling and only basic load balancing.


                                        DOCKER NETWORKING:
Docker networks are used to enable communication between multiple containers running on the same or different Docker hosts.

We have different types of Docker networks:
Bridge network : SAME HOST
Overlay network : DIFFERENT HOSTS
Host network
None network

 
BRIDGE NETWORK: The default network; containers can communicate with each other within the same host. (When two containers on the same host need to talk to each other, we use a bridge network.)

OVERLAY NETWORK: Used for containers to communicate with each other across multiple Docker hosts. (When two containers on two different host servers need to communicate, we use an overlay network.)

HOST NETWORK: When you want your container to share the EC2 instance's network (same IP), use the host network.

NONE NETWORK: When you don't want the container exposed to the world, use the none network. It does not provide any network to the container.
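As a quick sketch of how each type is selected in practice (my-bridge, my-overlay, web1 and isolated1 are hypothetical names; overlay networks require swarm mode):

```shell
# Bridge is the default driver for user-created networks
docker network create my-bridge
# Overlay spans multiple hosts (requires an initialized swarm)
docker network create -d overlay --attachable my-overlay
# Host and none are selected when running the container
docker run -d --network host --name web1 nginx
docker run -d --network none --name isolated1 nginx
```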


root@dockerhost:~# docker network ls
NETWORK ID     NAME              DRIVER    SCOPE
38a9eb42edc7   bridge            bridge    local
bc0d91e098f7   docker_gwbridge   bridge    local
8797cf25d831   host              host      local
5fj7zbvzq7ft   ingress           overlay   swarm
7fbb54dd8455   none              null      local
root@dockerhost:~#

Step 1: Creating a custom network

root@dockerhost:~# docker network create vakatisubbu
fefe5c82ecc3952a05fcf2795d4cc9218b206dcdd27531a7338a851c08dfbeaf

See here, our own network named vakatisubbu has been created:
root@dockerhost:~# docker network ls
NETWORK ID     NAME              DRIVER    SCOPE
38a9eb42edc7   bridge            bridge    local
bc0d91e098f7   docker_gwbridge   bridge    local
8797cf25d831   host              host      local
5fj7zbvzq7ft   ingress           overlay   swarm
7fbb54dd8455   none              null      local
fefe5c82ecc3   vakatisubbu       bridge    local
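A network can also be created with an explicit driver and address range (a sketch; the subnet and gateway values are just examples):

```shell
# Create a bridge network with a chosen subnet and gateway
docker network create \
  --driver bridge \
  --subnet 172.25.0.0/16 \
  --gateway 172.25.0.1 \
  mycustomnet
```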

Details about the network type: a user-created network defaults to the bridge driver.

root@dockerhost:~# docker network  inspect vakatisubbu    


Step 2: Attach containers to the custom network instead of the default bridge.

First create a sample container:
root@dockerhost:~# docker run -itd --name container1 ubuntu
Unable to find image 'ubuntu:latest' locally
latest: Pulling from library/ubuntu
32f112e3802c: Already exists
Digest: sha256:a08e551cb33850e4740772b38217fc1796a66da2506d312abe51acda354ff061
Status: Downloaded newer image for ubuntu:latest
13f93d4dd337b5dff463281118fff525fab8202e76361ab1725c97fac85ab4ea


root@dockerhost:~# docker ps -a
CONTAINER ID   IMAGE                       COMMAND                  CREATED          STATUS          PORTS     NAMES
13f93d4dd337   ubuntu                      "/bin/bash"              7 seconds ago    Up 6 seconds              container1
2767ee65764a   vakatisubbu/movies:latest   "/usr/sbin/apachectl…"   45 minutes ago   Up 44 minutes             movies.1.ddh2ltx74sct0zogr23dk3vdf
root@dockerhost:~# docker run -itd --name container2 ubuntu
6ecfa2faaef621a7e857b600fef7a34063bdf73e0ab5c4916d2a1a448c795b2a
root@dockerhost:~# docker ps -a
CONTAINER ID   IMAGE                       COMMAND                  CREATED              STATUS          PORTS     NAMES
6ecfa2faaef6   ubuntu                      "/bin/bash"              4 seconds ago        Up 3 seconds              container2
13f93d4dd337   ubuntu                      "/bin/bash"              About a minute ago   Up 59 seconds             container1
2767ee65764a   vakatisubbu/movies:latest   "/usr/sbin/apachectl…"   45 minutes ago       Up 45 minutes             movies.1.ddh2ltx74sct0zogr23dk3vdf


Step 3:
By default the container is on the bridge network; now we add it to the vakatisubbu network.

root@dockerhost:~# docker inspect 6ecfa2faaef6
"NetworkSettings": {
            "Bridge": "",
            "SandboxID": "4dda7e27fff8832bdfc3700dc2d0ab388e90a3b136d7ba1869470e791eeb4dcc",
            "SandboxKey": "/var/run/docker/netns/4dda7e27fff8",
            "Ports": {},

root@dockerhost:~# docker network connect vakatisubbu container1

See, the container has now been added to the network:
root@dockerhost:~# docker network inspect  vakatisubbu

  "ConfigOnly": false,
        "Containers": {
            "13f93d4dd337b5dff463281118fff525fab8202e76361ab1725c97fac85ab4ea": {
                "Name": "container1",
                "EndpointID": "75fcb0f196865f7b9de577c9dff039777f9b829f802c0fbe71fc04bd57f2c242",
                "MacAddress": "02:42:ac:13:00:02",
                "IPv4Address": "172.19.0.2/16",
                "IPv6Address": ""
            }
        },
        "Options": {},
        "Labels": {}
    }
]

Step 4: Now put the second container on the same network:
root@dockerhost:~# docker network connect vakatisubbu container2
root@dockerhost:~# docker network inspect  vakatisubbu
   "ConfigOnly": false,
        "Containers": {
            "13f93d4dd337b5dff463281118fff525fab8202e76361ab1725c97fac85ab4ea": {
                "Name": "container1",
                "EndpointID": "75fcb0f196865f7b9de577c9dff039777f9b829f802c0fbe71fc04bd57f2c242",
                "MacAddress": "02:42:ac:13:00:02",
                "IPv4Address": "172.19.0.2/16",
                "IPv6Address": ""
            },
            "6ecfa2faaef621a7e857b600fef7a34063bdf73e0ab5c4916d2a1a448c795b2a": {
                "Name": "container2",
                "EndpointID": "18ad6846b64edeb8ae78b5943a18a2f48de7886e41331118f314cdbdfd878e2d",
                "MacAddress": "02:42:ac:13:00:03",
                "IPv4Address": "172.19.0.3/16",
                "IPv6Address": ""
            }
        },
        "Options": {},
        "Labels": {}
    }
]
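On a user-defined bridge network, containers can resolve each other by name through Docker's embedded DNS. This can be verified with a ping (a sketch; the Ubuntu image ships without ping, so it has to be installed inside the container first):

```shell
# Install ping inside container1 (Ubuntu images ship without it)
docker exec container1 bash -c "apt-get update -qq && apt-get install -y -qq iputils-ping"
# container2 resolves by name thanks to Docker's embedded DNS
docker exec container1 ping -c 2 container2
```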

Step 5: Jenkins installation in Docker; the software comes from the Docker registry (Docker Hub).

INSTALLING MULTIPLE TOOLS

docker run -it --name jenkins -p 8080:8080 jenkins/jenkins:lts
docker run -it --name prometheus-container -e TZ=UTC -p 9090:9090 prom/prometheus
docker run -d --name grafana -p 3000:3000 grafana/grafana
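In practice these tools are usually run detached, with a named volume so their data survives container restarts. A sketch for Jenkins (jenkins_home is a hypothetical volume name; port 50000 is for Jenkins build agents):

```shell
# Run Jenkins detached with a named volume for persistence
docker run -d --name jenkins \
  -p 8080:8080 -p 50000:50000 \
  -v jenkins_home:/var/jenkins_home \
  jenkins/jenkins:lts
# The initial admin password can then be read from inside the container
docker exec jenkins cat /var/jenkins_home/secrets/initialAdminPassword
```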



Below is the Docker architecture:
There are three components: the Docker client, the Docker host, and the Docker registry.
The Docker client (docker pull, docker push, ...) asks for an image; if the image does not exist on the host, the host pulls it from the registry.

The Docker daemon controls all containers, services, and images.






DATABASE SETUP 

docker run -itd --name dbcont -e MYSQL_ROOT_PASSWORD=test123 mysql:8.0
docker exec -it dbcont bash
mysql -u root -p

--Database installed successfully.

bash-5.1# mysql --version
mysql  Ver 8.0.43 for Linux on x86_64 (MySQL Community Server - GPL)
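Another container on the same Docker network can reach the database by container name instead of an IP. A sketch, where appnet is a hypothetical network name:

```shell
# Put the database on a user-defined network so clients can resolve it by name
docker network create appnet
docker network connect appnet dbcont
# A throwaway mysql client container connects to dbcont by name
docker run -it --rm --network appnet mysql:8.0 \
  mysql -h dbcont -u root -ptest123
```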


--Thanks 


