Intro and prerequisites
Docker Swarm is an orchestration tool for Docker containers that allows managing multiple nodes as a single cluster. Every swarm node is a physical or virtual machine running a Docker host. Docker Swarm, in conjunction with docker service, offers a wide range of options for scaling containers up and down across cluster nodes, and it enables containers to recover from failures and remain resilient. Imagine how easy it is to scale a service-oriented application using a single command:
docker service scale OrderingMicroservice=2 CatalogMicroservice=7 BasketMicroservice=3
This tutorial assumes you are using Windows 10 Pro. It is also necessary to ensure the following prerequisites:
  • Docker Desktop for Windows. This is our Docker host; it supports running both Linux and Windows containers, so make sure it is switched to Linux containers.
  • The Hyper-V and Containers Windows features. They are enabled automatically by the Docker Desktop for Windows installer, so no extra action is needed.
  • Docker Toolbox for Windows, required for creating the Linux virtual machines that will be supervised by Hyper-V Manager.
  • A new external virtual switch set up in Hyper-V Manager; give it a name like ExtVirtualSwitch.
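Once the tools are installed, they can be verified from a PowerShell console before going any further (the exact version numbers on your machine will differ):

```shell
# Check that Docker Desktop and Docker Toolbox are installed and on the PATH
docker version
docker-machine version
```

If either command is not recognized, revisit the corresponding prerequisite above.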

Creating Linux virtual machines
docker swarm diagram
The Docker services will be deployed on 3 Linux virtual machines created with the docker-machine tool that comes with the already installed Docker Toolbox. Open PowerShell as administrator and run the following command (note that the --hyperv-virtual-switch parameter uses the previously created virtual network switch):
docker-machine create --driver hyperv --hyperv-virtual-switch "ExtVirtualSwitch"  Manager
docker-machine create
The machine's custom name is Manager; it can be viewed by executing the docker-machine ls command. Note that the SWARM column is empty, indicating that the machine is not yet a swarm member. Later this VM will be used as the swarm leader. Now create another two machines that will play the role of swarm workers; just use the previous command with different machine names (if you have a limited amount of RAM, stop after creating the first worker machine, since one worker is enough to understand the swarm concepts):
docker-machine create --driver hyperv --hyperv-virtual-switch "ExtVirtualSwitch"  Worker1
docker-machine create --driver hyperv --hyperv-virtual-switch "ExtVirtualSwitch"  Worker2
The docker-machine ls command lists all three VMs.
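As a side note, the two worker VMs can also be created in a single PowerShell loop instead of typing the command twice; this is just a convenience sketch using the same driver and switch parameters as above:

```shell
# PowerShell: create both worker VMs with the same driver and virtual switch
foreach ($name in "Worker1", "Worker2") {
    docker-machine create --driver hyperv --hyperv-virtual-switch "ExtVirtualSwitch" $name
}
```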
three docker machines
The machines will also be visible in Hyper-V Manager.
Hyper-V Manager three LinuxVMs

Creating and joining the swarm cluster
So far, three new tiny Linux VMs have been created, and now it is possible to connect to them through console terminals to create a Docker swarm. First, let's initialize a swarm from the Manager VM terminal by following these steps:
  • open a new (second) PowerShell console
  • optionally rename the console title to distinguish it more easily; the command is: $Host.UI.RawUI.WindowTitle = "Swarm Manager"
  • connect to the Manager VM by executing the command:
    & "C:\Program Files\Docker Toolbox\docker-machine.exe" env --shell powershell Manager | Invoke-Expression
  • finally, initialize the swarm using the docker swarm init command, and optionally run the docker node ls command to view our first swarm node, which is the swarm leader
docker swarm init
The docker swarm init command has generated a command that is used for joining the swarm; it is surrounded by the green rectangle and contains a secret token. Now the Worker1 VM can join the swarm and become a swarm node. To do this, connect to the Worker1 terminal using PowerShell and execute the join command (check the text inside the green rectangle). The whole process is explained in the steps below:
  • open a new (third) PowerShell console
  • optionally rename the console title to distinguish it more easily; the command is: $Host.UI.RawUI.WindowTitle = "Swarm Worker1"
  • connect to the Worker1 terminal by executing the command:
    & "C:\Program Files\Docker Toolbox\docker-machine.exe" env --shell powershell Worker1 | Invoke-Expression
  • you are now inside the Worker1 terminal; join the swarm:
    docker swarm join --token SWMTKN-1-5nmw5u2jo9gbasovkcniee1kk9tctqvjcgth7rvi2t8enp8h43-b5u2vek2b8oeelfrlkhwqnihc
    On your machine, the secret token and the IP of the Docker host will differ from those displayed in this example.
At the moment the swarm contains one leader and one worker. To join the second worker to the swarm, open a new console terminal (the fourth one) and repeat the previous steps, replacing the Worker1 string with Worker2 inside the commands. In other words, any machine aiming to join the swarm uses the same unique join token. It is now possible to view all swarm nodes by executing the docker node ls command; note that this command can be executed only from the terminal of the swarm leader.
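If the join command scrolled out of view or the token was lost, there is no need to reinitialize the swarm; docker provides a subcommand to reprint it from the leader terminal:

```shell
# Run on the Manager (leader) node to reprint the worker join command
docker swarm join-token worker

# A manager join command is also available, should you ever want additional managers
docker swarm join-token manager
```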
manager docker node ls command

Scaling containers with docker service
Since the swarm has been created and contains 3 nodes, it's time to run some containers with docker service; the service containers will be deployed and will live inside those 3 swarm nodes. For this purpose, a small containerized application named busybox was chosen. Of course, you can create your own containerized application, for example a .NET Core console app, but be aware that docker service will keep recreating the container every time it fails or stops executing. Your application therefore has to be a long-running task that does not exit quickly; that's why the busybox application will display the current datetime in a trivial loop to simulate a continuous task. Go to the leader console terminal and create the service with three container instances by entering the following command:
docker service create --replicas 3 --name SwarmService  busybox:latest sh -c "while true; do date; sleep 3; done"
deploy docker service
The custom service name is SwarmService. Since there are 3 swarm nodes and 3 container instances, each container will be running inside one node; note that the leader also runs one container, the same way as every worker node does. It is possible to check whether a node's container is running as expected: go to any swarm node terminal and execute the following command: docker logs -f be0c4fa8484b, replacing the container id be0c4fa8484b with the corresponding one (use the docker container ps command to get the list of running containers). Below is the result:
view container logs
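There is also a way to see how the replicas are distributed without connecting to each VM: the service can be inspected from the leader terminal, where the NODE column of docker service ps shows where each task was scheduled:

```shell
# List all services and their replica counts
docker service ls

# Show each task of SwarmService and the node it is running on
docker service ps SwarmService
```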
If there is a need for 7 container instances (in other words, 4 more instances), run the docker service scale SwarmService=7 command, and docker service will instantiate and distribute the 4 new containers across the previously created 3 nodes. For example, 2 containers will run inside the first node (Manager), another 3 containers inside the second node (Worker1), and 2 containers inside the third node (Worker2); thus the leader is also a worker node.
docker service scale
You can also try to stop any container on any node by using the docker stop container-id command, but docker service will recreate it within a few seconds. Likewise, if any worker leaves the swarm cluster (by using the docker swarm leave command), docker service will redistribute its containers evenly to the remaining swarm nodes, and the service will still contain the 7 running containers that were initially desired. You can play around a bit with all 3 node terminals, scaling the service up and down, stopping containers, and joining and leaving the swarm, to get more hands-on experience orchestrating Docker services. Lastly, don't forget to remove the docker service (docker service rm SwarmService) and remove all 3 VMs with the docker-machine rm command.
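The cleanup described above can be sketched as a few commands, run first from the leader terminal and then from the host PowerShell console (the -f flag force-removes the VMs without a confirmation prompt):

```shell
# On the leader: remove the service
docker service rm SwarmService

# On the host: delete all three VMs created with docker-machine
docker-machine rm -f Manager Worker1 Worker2
```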