As developers increasingly use Docker to deploy their applications, administrators have more and more containers to manage, especially for large-scale web applications, and orchestrating them calls for a multi-host approach. Docker Inc. recently introduced Swarm, a tool that manages the distribution and orchestration of these applications across several machines. Community-based alternatives to the official Docker solution, such as Shipyard, CoreOS or Mesos/Marathon, can also be used.
This how-to will show you how to use Swarm to set up an infrastructure made up of a master server and 3 node servers.

Generating security certificates and configuring iptables

Swarm enables you to manage a cluster of Docker hosts. It exposes the standard Docker API at cluster scale and lets you manage task scheduling and per-container resource allocation within a pool of Docker hosts. This is where it gets interesting: Swarm lets you manage your pool as if it were a single Docker host.

First, we will generate the keys and certificates that will let us authenticate against our various servers, and use TLS to secure communications between our machines so that no unauthorised party can take control of our nodes. Only machines presenting a valid certificate will be able to connect to the Docker daemons on our remote servers.

To do this, follow the steps in the official Docker documentation: https://docs.docker.com/articles/https/
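For reference, the procedure described on that page boils down to something like the following (a condensed sketch, with file names adapted to match the rest of this guide; adjust the IP/hostname and refer to the documentation for the details):

# Certificate authority
openssl genrsa -aes256 -out ca-key.pem 2048
openssl req -new -x509 -days 365 -key ca-key.pem -sha256 -out ca.pem

# Server key and certificate, signed by the CA (repeat per node, or issue a certificate valid for all node IPs)
openssl genrsa -out server-key.pem 2048
openssl req -subj "/CN=1.1.1.1" -new -key server-key.pem -out server.csr
echo subjectAltName = IP:1.1.1.1,IP:127.0.0.1 > extfile.cnf
openssl x509 -req -days 365 -sha256 -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial -out server-cert.pem -extfile extfile.cnf

# Client key and certificate, used later by the master to talk to the nodes
openssl genrsa -out client-key.pem 2048
openssl req -subj "/CN=client" -new -key client-key.pem -out client.csr
echo extendedKeyUsage = clientAuth > extfile.cnf
openssl x509 -req -days 365 -sha256 -in client.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial -out client-cert.pem -extfile extfile.cnf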

Once you've generated the certificates, let's move on to the client servers (nodes). In our example, our cluster will contain 3 servers (IP: 1.1.1.1, 2.2.2.2 and 3.3.3.3).

Copy the generated keys and certificates into the /etc/docker/certs/ folder on each of your nodes with the following commands:

scp ca.pem server-cert.pem server-key.pem user@1.1.1.1:/etc/docker/certs/
scp ca.pem server-cert.pem server-key.pem user@2.2.2.2:/etc/docker/certs/
scp ca.pem server-cert.pem server-key.pem user@3.3.3.3:/etc/docker/certs/
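If the /etc/docker/certs/ folder does not exist yet on a node, the copy will fail; you can create it beforehand, for example:

ssh user@1.1.1.1 "mkdir -p /etc/docker/certs"
ssh user@2.2.2.2 "mkdir -p /etc/docker/certs"
ssh user@3.3.3.3 "mkdir -p /etc/docker/certs"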

To increase security at cluster level, we will set up iptables rules on our master server and our nodes.
Below is an example of the iptables rules that can be applied:

On the nodes:

# Keep established connections
iptables -A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT

# Authorise loopback
iptables -t filter -A INPUT -i lo -j ACCEPT

# ICMP (Ping)
iptables -t filter -A INPUT -p icmp -j ACCEPT

# SSH In
iptables -t filter -A INPUT -s 4.4.4.4 -p tcp --dport 22 -j ACCEPT

# /!\ ATTENTION: make sure you enter the correct IP address at this level. It should be the IP address of your own connection or of the master server, for example. These will be the only IPs able to connect to the nodes via SSH.

# HTTP In
iptables -t filter -A INPUT -p tcp --dport 80 -j ACCEPT

# SSL In
iptables -t filter -A INPUT -p tcp --dport 443 -j ACCEPT

# Docker In
iptables -t filter -A INPUT -s 4.4.4.4 -p tcp --dport 2375 -j ACCEPT

# /!\ ATTENTION: make sure you enter the correct IP address at this level. It should be the IP address of the master server, which will be the only IP able to connect to port 2375.

# Prevent all incoming connections
iptables -P INPUT DROP
iptables -P FORWARD DROP

Once the rules have been defined, we need to save them so that they are re-applied at startup, by installing:

apt-get install iptables-persistent

And select "yes" when asked if you need to save IPv4 iptables.

/!\ ATTENTION: we advise you to check that the rules are working properly before saving them. If you need to reset any unsaved rules, simply reboot your servers.
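To review the rules currently loaded before saving them, you can run, for example:

iptables -L -n -v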

You can add iptables rules at any time and save them via:

iptables-persistent save

On the master server:

Let's set up the iptables rules:

# Keep established connections
iptables -A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT

# Authorise loopback
iptables -t filter -A INPUT -i lo -j ACCEPT

# ICMP (Ping)
iptables -t filter -A INPUT -p icmp -j ACCEPT

# SSH In
iptables -t filter -A INPUT -s "The public IP of your connection or VPN" -p tcp --dport 22 -j ACCEPT

# /!\ ATTENTION: make sure you enter the correct IP address at this level. It should be the IP of your connection or VPN, for example. These will be the only IPs able to connect to the master node via SSH.

# Prevent all incoming connections
iptables -t filter -P INPUT DROP
iptables -t filter -P FORWARD DROP

Once the rules have been defined, we need to save them so that they are re-applied at startup, by installing:

apt-get install iptables-persistent

And select "yes" when asked if you need to save IPv4 iptables.

/!\ ATTENTION: we advise you to check that the rules are working properly before saving them. If you need to reset any unsaved rules, simply reboot your servers.

You can add iptables rules at any time and save them via:

iptables-persistent save

Setting up nodes

Docker must be installed on our 3 servers (see our earlier guide on optimising a VPS with Docker); alternatively, RunAbove instances can also be used as Docker hosts. For more information, see this guide.
We're going to bind the Docker daemon to a network port, here port 2375 (the port officially assigned to Docker by the IANA), so that these servers can communicate with the master server. Start by stopping Docker on each of our servers with this command:

service docker stop

Then open the folder that contains our certificates:

cd /etc/docker/certs

Then we relaunch the Docker daemon, listening on port 2375 with TLS authentication, by running the appropriate command on each of our nodes:

docker -d --tlsverify --tlscacert=/etc/docker/certs/ca.pem --tlscert=/etc/docker/certs/server-cert.pem --tlskey=/etc/docker/certs/server-key.pem -H=0.0.0.0:2375 --label name=node1

docker -d --tlsverify --tlscacert=/etc/docker/certs/ca.pem --tlscert=/etc/docker/certs/server-cert.pem --tlskey=/etc/docker/certs/server-key.pem -H=0.0.0.0:2375 --label name=node2

docker -d --tlsverify --tlscacert=/etc/docker/certs/ca.pem --tlscert=/etc/docker/certs/server-cert.pem --tlskey=/etc/docker/certs/server-key.pem -H=0.0.0.0:2375 --label name=node3

/!\ ATTENTION: these options are not persistent. If the machine reboots, you will need to relaunch Docker with the same options. The best approach is to define them in $DOCKER_OPTS; on Ubuntu, this can easily be done in /etc/default/docker.
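As an illustration (a sketch to adapt, not the exact file shipped by your distribution), /etc/default/docker on node1 could contain a line such as the one below; note that we also keep the local Unix socket so the docker command keeps working on the node itself:

# /etc/default/docker (node1 - adapt the label for each node)
DOCKER_OPTS="--tlsverify --tlscacert=/etc/docker/certs/ca.pem --tlscert=/etc/docker/certs/server-cert.pem --tlskey=/etc/docker/certs/server-key.pem -H tcp://0.0.0.0:2375 -H unix:///var/run/docker.sock --label name=node1"

The daemon can then simply be started with "service docker start".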

Here "--label name=node1" will allow us to give each of our nodes with a unique label. This will allow us to launch a Docker container on a particular node with a set of constraints.

We can now declare the cluster with Swarm. First, we need to generate a discovery token (a unique ID) for our cluster by running the following commands:

docker pull swarm:latest
docker run --rm swarm create
→ e71cc9b24e036030bb76d0a0bd072c2f

The discovery token is generated via the public Docker Hub's hosted discovery service, but there are alternative methods we can use if we don't want to rely on the Hub (if we want everything to stay private, for example). For more information on this subject, refer to the following documentation:
https://github.com/docker/swarm/tree/master/discovery#hosted-discovery-with-docker-hub
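As a sketch of one such alternative (check the page above for the exact syntax supported by your Swarm version), the cluster can be described in a static file passed to Swarm with file:// instead of token://:

# /etc/swarm/cluster - one node per line
1.1.1.1:2375
2.2.2.2:2375
3.3.3.3:2375

# e.g. listing the nodes, with the file mounted into the Swarm container
docker run --rm -v /etc/swarm/cluster:/cluster swarm list file:///cluster

With a static file the nodes don't need to run "swarm join"; the rest of this guide sticks with the token method.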

From the same machine, we will now start the Swarm agent on all our nodes and join them to the cluster, using the token we obtained, via these commands:

docker --tls --tlscacert=/etc/docker/certs/ca.pem --tlscert=/etc/docker/certs/client-cert.pem --tlskey=/etc/docker/certs/client-key.pem -H=tcp://1.1.1.1:2375 run -d swarm join --addr=1.1.1.1:2375 token://e71cc9b24e036030bb76d0a0bd072c2f

docker --tls --tlscacert=/etc/docker/certs/ca.pem --tlscert=/etc/docker/certs/client-cert.pem --tlskey=/etc/docker/certs/client-key.pem -H=tcp://2.2.2.2:2375 run -d swarm join --addr=2.2.2.2:2375 token://e71cc9b24e036030bb76d0a0bd072c2f

docker --tls --tlscacert=/etc/docker/certs/ca.pem --tlscert=/etc/docker/certs/client-cert.pem --tlskey=/etc/docker/certs/client-key.pem -H=tcp://3.3.3.3:2375 run -d swarm join --addr=3.3.3.3:2375 token://e71cc9b24e036030bb76d0a0bd072c2f

Our nodes are now integrated into the cluster and ready to communicate with the master node.

Setting up the master node

Once Docker is installed on the master node (IP: 4.4.4.4), we can launch the Swarm manager to control our cluster:

docker run -v /etc/docker/certs/:/home/ --name swarm -d swarm manage --tls --tlscacert=/home/ca.pem --tlscert=/home/client-cert.pem --tlskey=/home/client-key.pem token://e71cc9b24e036030bb76d0a0bd072c2f

"-v /etc/docker/certs/:/home/" will allow us to create a common repository between the host server (the master server) and the Swarm container that runs on the same machine, so that the container can access the previously generated certificates.

Here Swarm will only be reachable locally, on port 2375 (the same port used to communicate with our nodes), and will manage the cluster linked to the e71cc9b24e036030bb76d0a0bd072c2f token we created earlier.

Once this operation has been performed, we can list the machines attached to our cluster via this command:

docker run --rm swarm list token://e71cc9b24e036030bb76d0a0bd072c2f
→ 1.1.1.1:2375
→ 2.2.2.2:2375
→ 3.3.3.3:2375

We can also check that Swarm is working correctly. To do this, we create an alias which will allow us to retrieve our Swarm container's IP:

alias dockip="docker inspect --format '{{ .NetworkSettings.IPAddress }}'"

Now let's run this command:

docker --tls -H=tcp://$(dockip swarm):2375 info

This command should return the list of nodes and the number of containers on each one of them.

To simplify the command, we can once again create an alias:

alias swarmdock="docker --tls -H=tcp://$(dockip swarm):2375"

Then we run:

swarmdock info

To make these aliases permanent, you can create a .bash_aliases file in your home directory.
You can read this documentation (in French) to do so: http://doc.ubuntu-fr.org/alias
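For example, ~/.bash_aliases could contain the two aliases below. Note the single quotes around the second one: they defer the $(dockip swarm) lookup until the alias is actually used, rather than when the shell starts (at which point the swarm container might not be running yet):

alias dockip="docker inspect --format '{{ .NetworkSettings.IPAddress }}'"
alias swarmdock='docker --tls -H=tcp://$(dockip swarm):2375'

Reload the file with "source ~/.bashrc" (or open a new shell) for the aliases to become available.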

To list all the containers launched in our cluster, we can therefore run the following command:

swarmdock ps

We can also list the individual containers launched on each node. To do this we can use the below commands:

docker --tls --tlscacert=/etc/docker/certs/ca.pem --tlscert=/etc/docker/certs/client-cert.pem --tlskey=/etc/docker/certs/client-key.pem -H=tcp://1.1.1.1:2375 ps

docker --tls --tlscacert=/etc/docker/certs/ca.pem --tlscert=/etc/docker/certs/client-cert.pem --tlskey=/etc/docker/certs/client-key.pem -H=tcp://2.2.2.2:2375 ps

docker --tls --tlscacert=/etc/docker/certs/ca.pem --tlscert=/etc/docker/certs/client-cert.pem --tlskey=/etc/docker/certs/client-key.pem -H=tcp://3.3.3.3:2375 ps

All the usual Docker commands (docker images, docker run, docker inspect...) can therefore be run on each individual node.

Adding new containers to the pool with Swarm

We can now add containers to our nodes using our master host.

Example → launching a MySQL image on one of the nodes in our pool:

docker --tls -H=tcp://$(dockip swarm):2375 run -e MYSQL_ROOT_PASSWORD=yourpassword --name db -v /home/mysql/:/var/lib/mysql/ -d mysql:latest

Or via the alias:

swarmdock run -e MYSQL_ROOT_PASSWORD=yourpassword --name db -v /home/mysql/:/var/lib/mysql/ -d mysql:latest

Let's now launch "docker --tls -H=tcp://$(dockip swarm):2375 ps" or "swarmdock ps" to make sure that our container is running correctly on one of our nodes:

cc4fbda67cf4 mysql:latest "/entrypoint.sh mysq 29 seconds ago Up Less than a second 3306/tcp swarmjo/db

Here, a bin-packing algorithm is used to allocate the containers. This means that the machines fill up one after another (once the first one is full, scheduling moves on to the next one).
We can use custom labels for more specific constraints. It's also possible to manage affinity rules, for example to tell the system to place a particular image on a specific host, by using a set of constraints. For example, if you want to add a MySQL container to the node named "node3":

swarmdock run -e MYSQL_ROOT_PASSWORD=yourpassword -e constraint:name==node3 --name bdd2 -v /home/mysql/:/var/lib/mysql/ -d mysql:latest

In this example, we specify the constraint "name==node3", which will then add the container to this machine (3.3.3.3) only.

Note that the /home/mysql volume is local to node3, so if we ever need to recreate the container, we must make sure it is created on the same node.

If we want to start up WordPress on the same machine:

swarmdock run --name wp1 --link bdd2:mysql -e constraint:name==node3 -v /home/wp/:/var/www/html/ -p 80:80 -d wordpress:latest

Examples of available constraints:

constraint:name!=node1 → deploys the container on any node except node1
constraint:name==/node[12]/ → deploys the container on node1 or node2
constraint:name!=/node[12]/ → deploys the container on any node except node1 and node2

For more information on constraints, refer to the official Docker documentation.

You now have an orchestration system for your Docker hosts, and you can easily and quickly add applications/containers to your cluster. Docker Inc. is currently working on integrating Docker Compose(1) into Swarm to improve large-scale container deployment. On top of the above services, Docker Inc. has also announced Docker Hub Enterprise (DHE), which will complement the public Docker Hub hosted and managed by Docker Inc. The Docker revolution is continuing to move forward. If you want to get involved and test it out for yourself, check out Sailabove (in alpha). We've also just made an Ubuntu 14.04 + Docker (pre-installed) image available on VPS Cloud & Classic (available on all our VPSs except VPS Classic 1, which doesn't have enough disk space). This will enable you to obtain container-ready servers directly and benefit from the inherent advantages of the OVH VPS.

(1) Docker Compose enables developers to assemble autonomous, interoperable Docker containers with a simple YAML configuration file containing the definition of each container. For more information, refer to the official Docker documentation.
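As a purely illustrative sketch (the Compose file format was still young at the time of writing, so check the current documentation), the MySQL + WordPress pair deployed above could be described like this in a docker-compose.yml:

# docker-compose.yml (illustrative sketch)
db:
  image: mysql:latest
  environment:
    MYSQL_ROOT_PASSWORD: yourpassword
  volumes:
    - /home/mysql/:/var/lib/mysql/

web:
  image: wordpress:latest
  links:
    - db:mysql
  ports:
    - "80:80"
  volumes:
    - /home/wp/:/var/www/html/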