Docker Swarm
Docker includes swarm mode for natively managing a cluster of Docker Engines called a swarm. The Docker CLI can be used to create a swarm, deploy application services to a swarm, and manage swarm behavior, without using any additional orchestration software. Details on how swarm mode works are available in the Docker documentation.
This tutorial uses Docker Machine to create multiple nodes on your desktop. These nodes can even be spread across multiple machines on the cloud platform of your choice.
Prerequisites
Linux
- Docker Engine 1.12 or later installed using Docker for Linux.
- Docker Machine.
- VirtualBox 5.2 or later for creating the swarm nodes.
macOS
- Docker Engine 1.12 or later installed using Docker for Mac. Docker Machine is already included with Docker for Mac.
- VirtualBox 5.2 or later for creating the swarm nodes.
Windows
- Docker Engine 1.12 or later installed using Docker for Windows. Docker Machine is already included with Docker for Windows.
- Microsoft Hyper-V for creating the swarm nodes.
1. Create swarm nodes
The following bash script is a simpler form of Docker's own swarm beginner tutorial. You can use it on Linux and macOS. If you are using Windows, download and use the PowerShell Hyper-V version of the same script instead.
- The script first instantiates 3 nodes using Docker Machine and VirtualBox. Thereafter, it initializes the swarm cluster by creating a swarm manager on the first node. Finally, it adds the remaining nodes as workers to the cluster. It also pulls the yugabytedb/yugabyte container image onto each of the nodes to expedite the next steps. A minimal sketch of such a script appears after the note below.
Note: In more fault-tolerant setups, there will be multiple manager nodes, and they will be independent of the worker nodes. A setup with 3 manager nodes and 3 worker nodes is used in the Docker tutorial script referenced above.
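A minimal sketch of such a script is shown below, assuming the node names worker1 through worker3 used in the rest of this tutorial; Docker's own tutorial script is more elaborate.
#!/bin/bash
# Create 3 VirtualBox-backed nodes.
for i in 1 2 3; do
  docker-machine create -d virtualbox worker$i
done

# Initialize the swarm, with the first node acting as the manager.
manager_ip=$(docker-machine ip worker1)
docker-machine ssh worker1 "docker swarm init --advertise-addr $manager_ip"

# Join the remaining nodes to the swarm as workers.
worker_token=$(docker-machine ssh worker1 "docker swarm join-token worker -q")
for i in 2 3; do
  docker-machine ssh worker$i "docker swarm join --token $worker_token $manager_ip:2377"
done

# Pre-pull the YugabyteDB image on every node to expedite the next steps.
for i in 1 2 3; do
  docker-machine ssh worker$i "docker pull yugabytedb/yugabyte"
done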
- Review all the nodes created.
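The listing below is produced by running docker-machine ls on the host:
$ docker-machine ls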
NAME ACTIVE DRIVER STATE URL SWARM DOCKER ERRORS
worker1 - virtualbox Running tcp://192.168.99.100:2376 v18.05.0-ce
worker2 - virtualbox Running tcp://192.168.99.101:2376 v18.05.0-ce
worker3 - virtualbox Running tcp://192.168.99.102:2376 v18.05.0-ce
2. Create overlay network
- SSH into the worker1 node where the swarm manager is running.
$ docker-machine ssh worker1
- Create an overlay network that the swarm services can use to communicate with each other. The attachable option allows standalone containers to connect to swarm services on the network. You can do this as shown below.
$ docker network create --driver overlay --attachable yugabytedb
3. Create yb-master services
- Create 3 yb-master replicated services, each with replicas set to 1. This is the only way in Docker Swarm today to get stable network identities for each of the yb-master containers, which we will need to provide as input when creating the yb-tserver service in the next step.
Note for Kubernetes Users: Docker Swarm lacks an equivalent of Kubernetes StatefulSets. The concept of replicated services is similar to Kubernetes Deployments.
$ docker service create \
--replicas 1 \
--name yb-master1 \
--network yugabytedb \
--mount type=volume,source=yb-master1,target=/mnt/data0 \
--publish 7000:7000 \
yugabytedb/yugabyte:latest /home/yugabyte/bin/yb-master \
--fs_data_dirs=/mnt/data0 \
--master_addresses=yb-master1:7100,yb-master2:7100,yb-master3:7100 \
--replication_factor=3
$ docker service create \
--replicas 1 \
--name yb-master2 \
--network yugabytedb \
--mount type=volume,source=yb-master2,target=/mnt/data0 \
yugabytedb/yugabyte:latest /home/yugabyte/bin/yb-master \
--fs_data_dirs=/mnt/data0 \
--master_addresses=yb-master1:7100,yb-master2:7100,yb-master3:7100 \
--replication_factor=3
$ docker service create \
--replicas 1 \
--name yb-master3 \
--network yugabytedb \
--mount type=volume,source=yb-master3,target=/mnt/data0 \
yugabytedb/yugabyte:latest /home/yugabyte/bin/yb-master \
--fs_data_dirs=/mnt/data0 \
--master_addresses=yb-master1:7100,yb-master2:7100,yb-master3:7100 \
--replication_factor=3
- Run the command below to see the services that are now live.
$ docker service ls
ID NAME MODE REPLICAS IMAGE PORTS
jfnrqfvnrc5b yb-master1 replicated 1/1 yugabytedb/yugabyte:latest *:7000->7000/tcp
kqp6eju3kq88 yb-master2 replicated 1/1 yugabytedb/yugabyte:latest
ah6wfodd4noh yb-master3 replicated 1/1 yugabytedb/yugabyte:latest
- View the yb-master admin UI by going to port 7000 on any node, courtesy of the publish option used when yb-master1 was created. For example, we can see from Step 1 that worker2's IP address is 192.168.99.101, so http://192.168.99.101:7000 takes us to the yb-master admin UI.
4. Create yb-tserver service
- Create a single yb-tserver global service so that swarm can automatically spawn 1 container/task on each worker node. Each time we add a node to the swarm, the swarm orchestrator creates a task and the scheduler assigns the task to the new node. A sketch of the command appears after the notes below.
Note for Kubernetes Users: The global services concept in Docker Swarm is similar to Kubernetes DaemonSets.
Tip: Use remote volumes instead of local volumes (used above) when you want to scale your swarm cluster out or in.
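The following command is a sketch of the yb-tserver global service creation, modeled on the yb-master services above; the volume name yb-tserver and the --tserver_master_addrs flag are assumptions that may need to be adapted to your YugabyteDB version.
$ docker service create \
--mode global \
--name yb-tserver \
--network yugabytedb \
--mount type=volume,source=yb-tserver,target=/mnt/data0 \
--publish 9000:9000 \
yugabytedb/yugabyte:latest /home/yugabyte/bin/yb-tserver \
--fs_data_dirs=/mnt/data0 \
--tserver_master_addrs=yb-master1:7100,yb-master2:7100,yb-master3:7100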
- Run the command below to see the services that are now live.
$ docker service ls
ID NAME MODE REPLICAS IMAGE PORTS
jfnrqfvnrc5b yb-master1 replicated 1/1 yugabytedb/yugabyte:latest *:7000->7000/tcp
kqp6eju3kq88 yb-master2 replicated 1/1 yugabytedb/yugabyte:latest
ah6wfodd4noh yb-master3 replicated 1/1 yugabytedb/yugabyte:latest
n6padh2oqjk7 yb-tserver global 3/3 yugabytedb/yugabyte:latest *:9000->9000/tcp
- Now we can go to http://192.168.99.101:9000 to see the yb-tserver admin UI.
5. Test the client APIs
YCQL API
- Find the container ID of the local yb-tserver container (for example, from the output of docker ps), and connect to that container using the container ID.
$ docker exec -it <ybtserver_container_id> /home/yugabyte/bin/cqlsh
Connected to local cluster at 127.0.0.1:9042.
[cqlsh 5.0.1 | Cassandra 3.9-SNAPSHOT | CQL spec 3.4.2 | Native protocol v4]
Use HELP for help.
cqlsh>
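As an illustrative smoke test (the keyspace and table names below are arbitrary examples; the Quick Start covers this in detail):
cqlsh> CREATE KEYSPACE IF NOT EXISTS myapp;
cqlsh> CREATE TABLE myapp.stock_market (stock_symbol text, ts text, current_price float, PRIMARY KEY (stock_symbol, ts));
cqlsh> INSERT INTO myapp.stock_market (stock_symbol, ts, current_price) VALUES ('AAPL', '2017-10-26 09:00:00', 157.41);
cqlsh> SELECT * FROM myapp.stock_market;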
- Follow the test instructions as noted in Quick Start.
YEDIS API
- Initialize the YEDIS API. You can do this as shown below.
$ docker exec -it <ybmaster_container_id> /home/yugabyte/bin/yb-admin -- --master_addresses yb-master1:7100,yb-master2:7100,yb-master3:7100 setup_redis_table
...
I0515 19:54:48.952378 39 client.cc:1208] Created table system_redis.redis of type REDIS_TABLE_TYPE
I0515 19:54:48.953572 39 yb-admin_client.cc:440] Table 'system_redis.redis' created.
- Follow the test instructions as noted in Quick Start.
PostgreSQL API
- Install the psql client in the yb-tserver container.
$ docker exec -it <ybtserver_container_id> yum install postgresql
- Connect to the psql client in yb-tserver.
$ docker exec -it <ybtserver_container_id> psql -h localhost --port 5433
...
psql (9.2.23, server 0.0.0)
WARNING: psql version 9.2, server version 0.0.
Some psql features might not work.
Type "help" for help.
root=>
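As an illustrative smoke test (the table name is an arbitrary example, and the early YSQL beta shown above may not support every PostgreSQL feature):
root=> CREATE TABLE sample (id int PRIMARY KEY, name varchar);
root=> INSERT INTO sample VALUES (1, 'hello');
root=> SELECT * FROM sample;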
- Follow the test instructions as noted in Quick Start.
6. Test fault tolerance with node failure
Docker Swarm ensures that the yb-tserver global service always has 1 yb-tserver container running on every node. If the yb-tserver container on any node dies, Docker Swarm brings it back up. Observe the output of docker ps every few seconds until you see that the yb-tserver container has been re-spawned by Docker Swarm. One way to exercise this is sketched below.
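For example, you could kill the yb-tserver container on one of the nodes (replace the placeholder with the actual container ID from docker ps) and then watch it reappear:
$ docker-machine ssh worker2 "docker ps"
$ docker-machine ssh worker2 "docker kill <ybtserver_container_id>"
$ docker-machine ssh worker2 "docker ps"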
7. Test auto-scaling with node addition
- On the host machine, get the worker token that new worker nodes will use to join the existing swarm.
$ docker-machine ssh worker1 "docker swarm join-token worker -q"
SWMTKN-1-aadasdsadas-2ja2q2esqsivlfx2ygi8u62yq
- Create a new node, worker4.
$ docker-machine create -d virtualbox worker4
- Pull the YugabyteDB container image.
$ docker-machine ssh worker4 "docker pull yugabytedb/yugabyte"
- Join worker4 to the existing swarm.
$ docker-machine ssh worker4 \
"docker swarm join \
--token SWMTKN-1-aadasdsadas-2ja2q2esqsivlfx2ygi8u62yq \
--listen-addr $(docker-machine ip worker4) \
--advertise-addr $(docker-machine ip worker4) \
$(docker-machine ip worker1)"
- Observe that Docker Swarm adds a new yb-tserver instance to the newly added worker4 node and changes its replica status from 3/3 to 4/4.
$ docker service ls
ID NAME MODE REPLICAS IMAGE PORTS
jfnrqfvnrc5b yb-master1 replicated 1/1 yugabytedb/yugabyte:latest *:7000->7000/tcp
kqp6eju3kq88 yb-master2 replicated 1/1 yugabytedb/yugabyte:latest
ah6wfodd4noh yb-master3 replicated 1/1 yugabytedb/yugabyte:latest
n6padh2oqjk7 yb-tserver global 4/4 yugabytedb/yugabyte:latest *:9000->9000/tcp
8. Remove services and destroy nodes
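- Optionally, remove the YugabyteDB services first. The command below is a sketch that uses the service names created earlier in this tutorial.
$ docker-machine ssh worker1 "docker service rm yb-master1 yb-master2 yb-master3 yb-tserver"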
- Stop the machines.
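One way to stop all the Docker Machine VMs at once:
$ docker-machine stop $(docker-machine ls -q)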
- Remove the machines.
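Similarly, to remove them (this destroys the VMs and their data; confirm when prompted):
$ docker-machine rm $(docker-machine ls -q)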