The OpenAppStack project involves the installation, configuration and management of containerised applications and services, on a cluster of connected virtual private servers.
Docker Swarm and Kubernetes are two major tools that help to automate those tasks. For OpenAppStack we wanted to use one of these platforms; this article explains how we made our choice.
Starting from our list of requirements, we listed their respective advantages and disadvantages as we found them while experimenting with both contenders for a few weeks.
Setting up and updating passwords
Docker Swarm provides mechanisms for storing, distributing and injecting configuration (Docker Config) and passwords (Docker Secrets). The two are essentially the same, except that Secrets are better protected (mounted in memory only, among other measures); as far as we know there is no reason to prefer Configs over Secrets for any use.
Secrets are injected into the container as the contents of files, mounted by default at
/run/secrets/name-of-secret. Some Docker images (such as the standard MariaDB one) have been extended to read relevant passwords from such a file, instead of from an environment variable or a configuration file.
Updating passwords is tricky, because Docker Secrets are immutable. You can still change a password by storing it in a new secret and mounting that at the same file path as the old one. This requires some boilerplate commands, though those can be integrated into the code that sets up and maintains the app instance. The immutability is presented as a feature, because it makes it possible to roll back to an earlier state, for example when an update fails.
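Such a rotation could be sketched as follows (this assumes a running swarm; the service and secret names are made up for this example):

```shell
# Hypothetical sketch of rotating a password under Docker Swarm's
# immutable secrets; "etherpad_db" and the secret names are illustrative.
printf 'new-password' | docker secret create db_password_v2 -

# Swap the old secret for the new one, mounted at the same target path:
docker service update \
  --secret-rm db_password_v1 \
  --secret-add source=db_password_v2,target=db_password \
  etherpad_db

# Once no service uses the old secret any more, it can be removed:
docker secret rm db_password_v1
```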
Passwords can be added to Kubernetes by creating “Secrets”. A Secret can be mounted as a volume in a container, or used by the “kubelet” (the node’s container manager) when setting up a “pod” (one or more containers deployed together, e.g., an Etherpad instance consisting of an Etherpad container and a MariaDB container).
Kubernetes Secrets need to be created before the pods that depend on them. This is the same as in Docker Swarm, and updates can likewise be handled by “versioning” your secrets.
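Creating a Secret and mounting it in a pod could look like this (all names here are made up; the mount path mirrors Swarm’s /run/secrets convention):

```shell
# Illustrative sketch: store a password as a Kubernetes Secret.
kubectl create secret generic db-password --from-literal=password='s3cret'

# Mount it as a file inside a container:
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: etherpad
spec:
  containers:
    - name: etherpad
      image: etherpad/etherpad
      volumeMounts:
        - name: db-password
          mountPath: /run/secrets
          readOnly: true
  volumes:
    - name: db-password
      secret:
        secretName: db-password
EOF
```

The container then finds the password in the file /run/secrets/password.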
Automatically updating application containers
Docker Swarm updates application containers for you: when you redeploy a running swarm service, it will replace running containers when changes to the docker image or configuration require it.
It can also do rolling updates, so updates to replicated services are not applied all at once. This implicitly requires that the application supports it (i.e., can run multiple instances concurrently without data corruption), but that holds for any orchestration tool.
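A rolling-update policy is configured on the service itself; for example (service name, image and timings are illustrative):

```shell
# Sketch: a replicated service that is updated one replica at a time,
# with 30 seconds between replicas.
docker service create \
  --name etherpad \
  --replicas 3 \
  --update-parallelism 1 \
  --update-delay 30s \
  etherpad/etherpad

# A later image change is then rolled out replica by replica:
docker service update --image etherpad/etherpad:1.8 etherpad
```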
It is possible to update containers in Kubernetes to a specific version by typing
kubectl set image <name> <name>=<container URI>. This can also be done without downtime if the pod is configured correctly. If the update fails, it is easy to roll back with
kubectl rollout undo. This depends, of course, on how and where the update fails: the container software should not have corrupted your data in the meantime.
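A full update-and-rollback cycle might look like this (the deployment and image names are made up):

```shell
# Sketch: update the image of a deployment.
kubectl set image deployment/etherpad etherpad=etherpad/etherpad:1.8

# Watch the rollout; this blocks until it succeeds or fails:
kubectl rollout status deployment/etherpad

# If the new version misbehaves, return to the previous one:
kubectl rollout undo deployment/etherpad
```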
Docker Swarm will restart your container when it exits unexpectedly. You can configure how many times it should retry a failed container and how long it should wait between tries.
Kubernetes also detects various faults. Apart from restarting failed pods, it can halt an update when the updated pod does not start correctly. Well-defined services will usually be able to “garbage collect” old pods that did not start correctly. Whether a pod (or rather a deployment) started correctly can be checked with customisable commands (so-called liveness and readiness probes).
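In a pod specification, such checks could look like this (the port and paths are illustrative):

```shell
# Illustrative pod spec fragment: Kubernetes restarts the container when
# the liveness probe fails, and only routes traffic to it once the
# readiness probe passes.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: etherpad
spec:
  containers:
    - name: etherpad
      image: etherpad/etherpad
      livenessProbe:
        httpGet:
          path: /
          port: 9001
        initialDelaySeconds: 15
      readinessProbe:
        httpGet:
          path: /
          port: 9001
EOF
```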
Generating custom configuration files
The configuration of OpenAppStack apps will be done as much as possible by the OpenAppStack developers, using sane and secure settings to provide a good default installation. However, some configuration will inevitably have to be provided by the person setting up the app instance, such as the external domain name of the instance.
That means that some templating will have to be done by the process that sets up the application instance, merging the user-provided settings with the mostly-finished application configuration file as provided by the OpenAppStack maintainers.
Docker Swarm provides no facility for this, as far as we could find. We implemented a small proof-of-concept tool to fill this hole, using Jinja2 templating. This has the advantage that Jinja2 templates are quite common; they are used by Ansible, for example. Still, it is unfortunate that with Docker Swarm we would have to write a substantial amount of configuration management code ourselves.
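The templating step itself is simple. Our proof of concept used Jinja2; in this sketch plain sed stands in for it, and all file and variable names are made up:

```shell
# The setting provided by the person setting up the app instance:
USER_DOMAIN="pad.example.org"

# A fragment of an application configuration template, as it might be
# shipped by the OpenAppStack maintainers:
cat > etherpad.conf.template <<'EOF'
base_url = https://{{ domain }}/
EOF

# Merge the user-provided setting into the finished configuration file:
sed "s/{{ domain }}/${USER_DOMAIN}/" etherpad.conf.template > etherpad.conf
```

The resulting etherpad.conf contains the user's domain in place of the placeholder.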
Custom configuration files are not built into Kubernetes either. However, a tool called “Helm” has been developed to address this and other shortcomings. Helm automates big parts of the boilerplate Kubernetes configuration, using a templating system based on Go templates.
This templating system can, among other things, be used to create application configuration files that can be added to the cluster as a so-called ConfigMap.
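Inside a Helm chart, such a templated configuration file might look like this (the value names are made up; Helm fills the placeholders in from the chart’s values.yaml or from user-supplied overrides):

```shell
# Illustrative chart template; the {{ .Values.* }} placeholders are
# filled in by Helm's Go templating when the chart is installed.
mkdir -p templates
cat > templates/configmap.yaml <<'EOF'
apiVersion: v1
kind: ConfigMap
metadata:
  name: etherpad-config
data:
  settings.json: |
    {
      "title": "{{ .Values.etherpad.title }}",
      "port": {{ .Values.etherpad.port }}
    }
EOF
```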
Preliminary tests show a substantial difference in memory footprint.
A single-node Docker Swarm cluster used only around 100 MB of memory without any services, rising to 300 MB for a functioning Etherpad setup (including a MariaDB database).
The same setup using Kubernetes started at 700 MB, and problems arose on our Ubuntu VPS with 1 GB of RAM when installing and updating the Etherpad service.
Additionally, a Kubernetes cluster will typically have at least one master node that does nothing but manage the cluster (decide which applications run on which worker node, etc.). It is possible, though, to run a complete cluster on a single node, which we think should be enough for simple OpenAppStack installations. Docker Swarm runs on a single node out of the box, because master nodes are allowed to run application containers by default.
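On a kubeadm-style cluster, for instance, letting application pods run on the master node is a matter of removing a taint (this assumes a cluster bootstrapped with kubeadm):

```shell
# Sketch: allow regular workloads on the master of a single-node
# cluster by removing the default NoSchedule taint.
kubectl taint nodes --all node-role.kubernetes.io/master-
```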
If you have tips on how to reduce the memory usage of a single node Kubernetes cluster, please do not hesitate to contact us.
The learning curve for the Docker Swarm ecosystem has been gentle, starting from basic knowledge of Docker images and containers. It’s pretty easy to get started and set up a simple cluster. However, manageability and debugging-friendliness are hard to estimate at this stage.
Kubernetes has a comprehensible API, but before being able to configure Kubernetes for a web application one has to learn a lot of concepts. This is especially true when trying to set up Kubernetes on VPSs instead of on Amazon’s AWS or Google’s GCE. This makes the learning curve of Kubernetes relatively steep.
On the other hand, logging is nice. If your container is configured correctly, you can get your application logs by typing
kubectl logs <pod-name>.
Using Helm in combination with Kubernetes reduces the amount of work a developer needs to do to deploy different applications, because they can use default templates for certain types of applications.
Well known / big community
Docker Swarm has ~5k GitHub stars and ~1.5k StackOverflow questions: substantial, but less than Kubernetes. On the other hand, Swarm is part of the even bigger Docker ecosystem.
Kubernetes has a big community: several companies use it, there are several thousand StackOverflow questions about it, and its GitHub page is updated frequently and starred by more than 40,000 people.
Note, however, that using Helm in combination with Kubernetes reduces the number of active users. The tool is used by some companies and recommended by others, but (logically) its community is not as big as that of Docker or Kubernetes yet.
Both Docker Swarm and Kubernetes are open source projects.
Ease of porting existing applications
Docker Swarm uses so-called Docker Compose files, which specify how the different containers that make up a service are linked together, and to some extent their interactions with the cluster infrastructure (required storage volumes, published network ports, etc.). These Compose files are also used outside Docker Swarm, and are occasionally provided by application developers.
It is possible to convert existing Docker Compose files for use with Kubernetes, but the result does not necessarily work out of the box; our experience with this conversion was mixed. We also do not expect many applications to ship with ready-made Kubernetes configuration.
If Helm is used, there is a list of applications (“charts”) that are ready to use or integrate. For example, we were able to use the MariaDB Helm chart to create a Kubernetes deployment of Etherpad.
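With Helm 2, installing such a chart is a single command (the release name here is made up, and chart values can be overridden with --set):

```shell
# Sketch: install the MariaDB chart from the stable repository as the
# database for an Etherpad deployment.
helm install --name etherpad-db stable/mariadb
```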
It is always difficult to choose between two software packages, especially because there is rarely a single best choice.
We have developed a preference for Kubernetes. The main reason is the ease of use that Helm brings to the table. We expect that Helm can play an important part in configuring applications and tending to the OAS user’s needs. Furthermore, we trust the community that backs Kubernetes, and expect development of both Helm and Kubernetes to continue for the foreseeable future.