Nowadays Docker Swarm is mostly an entry-level project, used mainly for learning purposes.
But there was a time when it genuinely competed with Kubernetes, precisely because it is simpler: easy to set up and maintain, with a quick learning curve.
These aspects make Docker Swarm ideal for starting to think about a microservice architecture, design it, and then implement it.
In 2018 I had the opportunity to help a company with this technology shift (away from a PHP-run-by-crontab architecture). The goal was to integrate Gogs and Jenkins (already used for building and deploying an Electron+React app) with a new Swarm.
The scenario: Gogs and Jenkins already ran in their own Swarm, exposed by Traefik using service labels for discovery, and the plan was to add a second Docker Swarm for running production software, redesigned around SOA (mini- and microservices). Both Swarms ran on Proxmox as clusters of virtual machines.
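For context, this is roughly how a service gets published through Traefik in Swarm mode. It is only a sketch, assuming Traefik 1.x label syntax (current back in 2018); the network name, hostname and port are placeholders, not the original values:

```sh
# Hypothetical example: publish Gogs on the DevTools Swarm behind Traefik 1.x.
# Overlay network name, hostname and port are placeholders.
docker network create --driver overlay traefik-net

docker service create \
  --name gogs \
  --network traefik-net \
  --label traefik.enable=true \
  --label traefik.docker.network=traefik-net \
  --label traefik.frontend.rule=Host:gogs.example.com \
  --label traefik.port=3000 \
  gogs/gogs
```

Traefik, running with its Swarm provider enabled, watches these service labels and wires up the routing automatically: no static configuration per service.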

Architectural Choices
Having two distinct Swarms, DevTools and Production, was a choice guided by separation of concerns:
- Load isolation: the DevTools Swarm received occasional heavy load, and fast deployments were an expected feature; keeping its machines separate meant spikes there never impacted Production
- Security: Jenkins runs as a user with access to the Swarm API and executes build and test steps inside a dedicated container. Docker Swarm has no concept of RBAC! (Nowadays there may be a plugin for it.)
The workflow is straightforward:
- Gogs triggers a webhook when a PR is merged
- the webhook triggers the execution of a Jenkins job
- Jenkins fetches the Git repository
- Jenkins executes the pipeline defined for the job
- Jenkins uses an SSH secret to authenticate against the Production Swarm and launches a service deployment (docker stack deploy) there
For security reasons, the SSH private key is kept in a Docker secret, and its public half is stored in the .ssh/authorized_keys of the Production manager, roughly as sketched below.
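The final deploy step of the pipeline looked roughly like this. It is a hedged sketch, not the original script: the hostname, remote user, stack file and stack name are placeholders, and the private key is assumed to be exposed to the job as a file path in $DEPLOY_KEY:

```sh
# Hypothetical Jenkins shell step: deploy to the Production Swarm over SSH.
# $DEPLOY_KEY points to the private key made available to the job;
# its public half sits in ~/.ssh/authorized_keys on the Production manager.
scp -i "$DEPLOY_KEY" docker-stack.yml deploy@prod-manager:/tmp/docker-stack.yml
ssh -i "$DEPLOY_KEY" deploy@prod-manager \
  "docker stack deploy --with-registry-auth -c /tmp/docker-stack.yml myapp"
```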
VMs and Containers: The Cost of Layering
The first and easiest criticism of running Kubernetes or Docker Swarm on Proxmox VMs is: how many resources does all that layering waste?
VMs: the answer lies in Linux's excellent native support for virtualization through the KVM subsystem, which gives guests protected access to the hardware while avoiding the cost of full emulation on top of the host machine. Some references for performance details:
- An IBM paper on I/O performance: https://public.dhe.ibm.com/linux/pdfs/KVM_Virtualized_IO_Performance_Paper_v2.pdf
- A walkthrough of libvirt configuration, https://github.com/avarghesein/-NIX/blob/main/Lubuntu%2020.04/Virtualization/Qemu-KVM/Performance, which gives a good idea of the underlying architecture and its real impact on resource consumption.
Containers: the Linux cgroups subsystem defines a protected environment for running processes, supplied entirely by the OS at the kernel level. Two key facts matter here:
- System calls always require a context switch from user space into the kernel, whether for CPU-bound work or for I/O operations
- Every time an I/O operation is involved, that same kernel context is the one enforcing the cgroups-protected environment
So no extra layer is stacked on top and no CPU cycles are wasted: containment is a built-in kernel feature that is simply either used or not.
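You can see this directly on any Docker host; the image and container name below are arbitrary:

```sh
# A container is just an ordinary process that the kernel places into
# dedicated cgroups: no hypervisor, no guest OS in between.
docker run -d --name cg-demo --memory 256m nginx:alpine
PID=$(docker inspect -f '{{.State.Pid}}' cg-demo)
cat /proc/$PID/cgroup   # the cgroup hierarchy enforcing the limits above
```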
Conclusion
This setup was easy to design but involved a lot of manual work, because at the time it was an experiment. Still, setting up Docker Swarm is just a matter of “docker swarm init” followed by “docker swarm join”, not so different from kubeadm, but without all the extra features that are very useful when something goes wrong, and that would have taken far more than the two months available to implement while staying productive.
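For reference, bootstrapping a Swarm really is this short (the addresses and token below are placeholders):

```sh
# On the first manager node (the advertise address is a placeholder):
docker swarm init --advertise-addr 10.0.0.10

# 'init' prints a join command with a token; run it on every worker node:
docker swarm join --token SWMTKN-1-<paste-token-here> 10.0.0.10:2377
```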

Implementing a good workflow increases the number of commits and the number of release cycles per month. For a small company, targeting a release day early in the week is generally a good choice; an automated workflow with automated tests also takes weight off developers, who get immediate feedback from Jenkins about failing tests. As you can see from the graph above, the number of commits rises mid-week, when errors get fixed. During this engagement I also set up QA VMs to test the services in a protected, simulated environment.
Kubernetes adds further layers of control, and that is a big game changer for reliability (e.g. Deployment rollbacks and canary strategies keep you safe from unexpected errors in a mini- or microservice).
Interested? Check out my Consulting Services and contact me to discuss which kind of solution best fits your business!