Why I switched from Kubernetes & Co. to normal web hosting

In this article, I would like to describe my “journey” from a “containerized” web app back to a standard web app on a standard web hosting package, as well as the reasons that led me to do so.

It’s about my website mitbringen.net, which is now almost 14 years old, has undergone a number of technical changes over the years, and is still under active development. On a typical day, between 10 and 40 users visit the website at the same time.

On mitbringen.net you can create a bring-along list (who brings what) for any event for free, with just 2 clicks and without registering, and collaborate on it with other people. It helps with the organization of school parties, sports festivals, tournaments, garden parties, birthdays and much more, and has been growing in popularity for years, especially among schools and clubs.


First of all, I would like to make it clear that I am not an opponent of container-based applications or microservice architecture – on the contrary: I recently received my “Certified Kubernetes Application Developer (CKAD)” certificate from the Linux Foundation and the Cloud Native Computing Foundation (CNCF). I really enjoy working with Docker and especially Kubernetes and would like to use it productively for one of my projects. I also work for a cloud provider and not a web host.
Nevertheless, I have opted for normal web hosting – and even rebuilt my website, which was converted to container architecture about 5 years ago, so that it runs on “normal” web hosting.

Resource overhead

For some time I had been wondering whether it wasn’t a bit excessive that, to release a small change in the PHP code, my website’s GitHub pipeline built a 500MB Docker container, pushed it into the Docker registry, and had the server download it again over the Internet.
Back then, I always built and shipped a single container with the PHP runtime, the required extensions, and the website code – this could surely have been structured better, with the website code separated from the PHP base container. Still, with Docker containers you always “ship” a lot of additional ballast that does not necessarily change. Docker stores container contents in layers and tries to be economical here, transferring only the changed layers where possible. As a developer, however, you have little control over this.

Administrative work or high costs


However, it wasn’t the greater resource requirements that really tipped the scales, but the fact that my (then still Nomad-based) cluster regularly caused problems when I wasn’t near my computer – i.e. at night or at the weekend. Sometimes the hard disk was full, sometimes there were problems with the assigned IP address, and so on.
As a developer, I don’t really want to worry about running the cluster, just about my own software. So a managed Kubernetes cluster would have been the right thing for me. There are many offers – both from the large hyperscalers such as Google Cloud (GKE), AWS (EKS) and Microsoft Azure (AKS), and from smaller providers such as DigitalOcean or Linode. Their offers are reasonably affordable, but most of these providers are from the USA, and I would like to host my site in Germany. Managed Kubernetes is of course also available in Germany, e.g. from Ionos or gridscale. However, the prices are simply too high for a private hobby project that is not yet really profitable.
Even with just one node, you end up paying between €160 and €300 per month – and for that you get just 2 vCores and 2GB RAM per node. All components of the application then have to run on this – or you pay even more. In my case, that would have been a MySQL database and a websocket server in addition to the PHP container, so the RAM would have been pretty tight.

The rescue

Last Black Friday, it wasn’t just electrical appliances nobody actually needs that were available at rock-bottom prices – many web hosts also used the day for special offers.
Netcup, for example, offered a web hosting package for just €3.76 per month. (Such offers appear regularly at Netcup on various occasions – see https://www.netcup-sonderangebote.de/ – and prices are otherwise also very low, including at other web hosting providers such as Hetzner.) The Netcup package (Webhosting 4000) includes: 6GB guaranteed RAM, 250GB SSD storage, free traffic, MySQL, PHP (several versions to choose from) and an e-mail server. All configured, managed and monitored by professionals. On top of that, there were 6 .de domains for free. But: no Docker, no Kubernetes, no Nomad, no root access to anything. It was also clear to me that I would no longer be able to run my websocket server. Nevertheless, I took the offer (a 1-year minimum contract term was okay for me at that price) and just wanted to see if I could get my website up and running there.


As my website, as described above, was already built on a containerized architecture (or microservices, if you want to call it that) and used individual software components, I had to dismantle a few things.

The websocket server

It was clear to me from the start that the biggest problem would probably be the websocket server. It sends push notifications to the browser version of mitbringen.net (there is also an Android and an iOS app – their push notifications are sent via Google and Apple respectively).
The websocket server keeps a specific port open; an HTTP proxy forwards requests to a specific URL to this port (and also handles the SSL handshake). Opening an arbitrary port on shared hosting is of course not possible, because it would be open to all customers or could block other customers working on the same server. So I had to abandon my initial idea of converting the server into a cron job that – as long as the web host allows it – keeps the port open and is simply restarted at the next cron interval.
So this part had to be solved completely differently: a browser-side “pull” is now used instead. A JavaScript interval asks the server for new “push” messages every 10 seconds. This is no longer 100% real time, but still within reasonable limits.
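The pull mechanism can be sketched roughly like this – a minimal sketch only, where the endpoint URL, the response shape and the function names are my assumptions, not the real mitbringen.net API:

```javascript
// Sketch of the browser-side "pull" replacement for websocket pushes.
// Endpoint URL and response shape are hypothetical.
async function pollOnce(fetchFn, url) {
  const res = await fetchFn(url);
  return res.json(); // e.g. [{ id: 1, text: "New entry on your list" }]
}

function startPolling(fetchFn, url, onMessages, intervalMs = 10000) {
  const timer = setInterval(async () => {
    const messages = await pollOnce(fetchFn, url);
    if (messages.length > 0) onMessages(messages); // render "push" messages
  }, intervalMs);
  return () => clearInterval(timer); // call the returned function to stop
}
```

In the browser this would be started with the global `fetch`, e.g. `startPolling(fetch, '/api/poll', render)` – every 10 seconds, one request instead of a permanently open websocket.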


However, the new way of “pushing” (now “pulling”) led to the next problem: the limits. Each website user now claimed at least 2 database connections (or more if several tabs were open). As a result, I quickly hit the “max_user_connections” limit of 50 connections per MySQL user. When I asked the web host’s support (I had to wait 3 days for an answer – mass hosting) whether the limit could be raised for an extra charge or in a larger package, they told me that this was not possible. More than 50 connections per user are only available in the “Managed” packages (at least €50 per month). However, 300 connections per customer account would be possible.
So the solution was clear: the pull script now uses its own MySQL user. Nevertheless, Sentry still reported the “already has more than ‘max_user_connections’” problem from time to time. This happened at peak times, even when there weren’t 50 users online at the same time (but 40) – presumably due to the asynchronous structure of the page, with several components being requested in parallel. My trick: I created another MySQL user, and on each call the script now randomly selects one of two MySQL users to establish the database connection. So theoretically (if the random generator distributed requests ideally – which it doesn’t), 100 simultaneous database connections would be possible for the backend, plus 50 for the pull script. This principle could of course be extended further – until the 300 simultaneous connections per customer are exhausted. But to be clear: if the website ever sees that kind of user growth, it might be worth moving to a managed Kubernetes offer after all. But that would be a different story…
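The random user selection is only a few lines. A sketch (in JavaScript for brevity – the real site is PHP; the user names and credentials are made up):

```javascript
// Sketch: spread database connections across several MySQL users to stay
// under the 50-connections-per-user limit. Credentials are placeholders.
const DB_USERS = [
  { user: 'db_user_1', password: 'secret1' },
  { user: 'db_user_2', password: 'secret2' },
];

function pickDbUser(pool = DB_USERS) {
  // Each call randomly picks one user, so connections spread out over the
  // users -- only statistically evenly, as noted above.
  return pool[Math.floor(Math.random() * pool.length)];
}
```

Extending the pool with a third or fourth user raises the theoretical ceiling by 50 connections each time, up to the 300-connections-per-customer limit.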

Slow connection to external services


The next problem concerned the push notifications to the app versions, which run via Apple and Google. Although a connection to their push services via HTTPS can be established without any problems on standard web hosting, the connection setup is quite slow (I don’t know whether it’s the name resolution, the SSL handshake or the transmission). In any case, the user experience suffered badly when an action that triggered a push to a mobile device suddenly had quite long loading times.
I solved this with a cron job that – similar to the pull script – collects accumulated “push” notifications and then sends them to the Apple/Google services. The user’s action now only triggers a database entry, not a direct connection to the push service.
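The decoupling idea can be sketched with an in-memory queue (in reality the queue is a database table and the flush runs as a cron job; all names here are hypothetical):

```javascript
// Sketch: the user's action only enqueues a notification (fast), while a
// cron job later flushes the queue to the Apple/Google services (slow).
const queue = [];

function enqueuePush(notification) {
  queue.push(notification); // in reality: an INSERT into a database table
}

async function flushQueue(sendFn) {
  const batch = queue.splice(0, queue.length); // take everything at once
  for (const n of batch) {
    await sendFn(n); // the slow HTTPS connection to the push service
  }
  return batch.length; // number of notifications delivered
}
```

The user-facing request only pays for `enqueuePush`; the slow connection setup happens later, outside any user-visible loading time.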

Update please!

Another “problem” was PHP version 5.6, which Netcup no longer offered. But it was high time to update to a newer version anyway, so the code was migrated to PHP 8.2 – which was not too complicated.


The last problem concerned the deployment of the application. Previously, Docker containers were built using GitHub Actions, uploaded to the GitHub container registry, and Nomad on my server was then asked via its API to pull the current container. Things would no longer be quite so modern in future, as Netcup only offers SCP or FTP for uploading files. With SCP, it quickly became clear that it was far too slow for thousands of small files. So only FTP remained. Fortunately, I found a ready-made GitHub action that offers a differential FTP upload: the first deployment took a very long time, but from then on only the changed files are uploaded when the website changes. These are usually only a few kilobytes.
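A workflow step for such a differential FTP upload might look roughly like this – a config sketch only; the action reference, inputs and secret names are placeholders, since the article doesn’t name the specific action used:

```yaml
# Hypothetical GitHub Actions step; action name and inputs are placeholders.
- name: Deploy via differential FTP upload
  uses: some-vendor/ftp-deploy-action@v4   # placeholder, not a real action
  with:
    server: ftp.example.com
    username: ${{ secrets.FTP_USERNAME }}
    password: ${{ secrets.FTP_PASSWORD }}
    # such actions typically keep a state file on the server and compare it
    # against the local files, uploading only what has changed
```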

Container vs. web hosting features

Here is a comparison of the features of container-based hosting and normal web hosting. The points on container-based hosting apply to both Kubernetes and Nomad.

You can also find a good overview about monolithic applications vs. microservices in this article: https://dev.to/alex_barashkov/microservices-vs-monolith-architecture-4l1m

Containers: Focus on high availability and replaceability of individual services (microservices)
The microservice architecture is by definition inefficient in terms of resource consumption: all components communicate with each other over a network, and identical software parts are duplicated across the various containers. On the other hand, all parts can be exchanged easily, so even a huge team of developers can work on a single software stack and publish several new versions per day without the application going offline even briefly. Redundancies can also be created to ensure availability in the event of software errors in individual containers.
Web hosting: Focus on efficiency (as many instances as possible on one server)
With normal web hosting, there are central services (e.g. PHP, MySQL…) that are used by various applications. The parameters of the software can therefore be optimally adapted to the available performance of the server and fully utilize it. Only one directory is available for the web application, and the entire stack of the application lives together with mutual dependencies (= a software monolith). If one part of the software is updated, this directly affects all dependent software parts. A brief website outage when a new version is published cannot be ruled out.
Containers: Rolling update possible
Thanks to the microservice architecture, dependencies run via defined interfaces. This means that individual services can be exchanged without affecting dependent services. This makes a so-called “rolling update” possible, in which either only a certain share of users receives the new version first, to test whether it works without problems (canary deployment), or the old version continues to run until the new version has been fully rolled out and is ready – only then is the old version stopped and all traffic routed to the new version (blue/green deployment).
Web hosting: Update immediately affects the live version
As all services access the same file structure and no complete containers are delivered, the entire application is potentially affected as soon as the first file is uploaded. If there are problems during the upload, an inconsistent version can be the result. The website can theoretically break as soon as the first file is replaced by a new one. The site must therefore be put into maintenance mode beforehand.
Containers: Freedom in the choice of components
As developers can define their containers as they wish, they are completely free in their choice of software components and the versions used – apart from resource limits. However, developers must also test their particular combination themselves.
Web hosting: Tied to components installed by the web host
With web hosting, certain standard components are usually available, with a bit of luck even in different versions. Beyond that, however, there is no possibility of using your own software.
Containers: No limits
Apart from the “natural limits” of the nodes and the limits specified by the cluster, developers can freely allocate resources to individual software components. If, for example, the MySQL server should be able to use the majority of the RAM while the PHP server can manage with less, the developer can configure this.
Web hosting: Limits set by the web host
With (shared) web hosting, all customers should receive guaranteed resources from the server. Individual customers must therefore be prevented from claiming certain resources entirely for themselves. So there are fixed limits per thread or connection, calculated so that no customer takes up too many resources. For example, the number of connections to the MySQL server is limited per customer, as is the maximum memory a PHP application may reserve – even if the server currently has capacity to spare.
Containers: Dependent on various third-party providers
The usual Docker container uses and installs software components from various sources: starting with the base image, which is loaded from some Docker registry, through Linux software installed via a package manager, to any other packages loaded via language-specific package managers. All of this is potentially re-downloaded and installed with every container build. As a developer, you therefore have the choice of operating certain registries yourself (= additional costs) or providing the software yourself somehow – or you become dependent on the operators of public registries and software databases. And of course, the providers of certain software components can remove their software from the registries at any time (“dependency hell”).
Web hosting: Software is provided by the web host or delivered with the application
Installing additional third-party software is rather unusual with standard web hosting. The software is uploaded with the application – if not provided by the web host – and therefore comes from your own repository. Licensing may need to be clarified here, but the software and its source are under the developer’s control.
Containers: Custom configuration of the services
You can use ready-made containers for most services, which usually come with a sensible default configuration suitable for many use cases. However, you have to test the configuration yourself, and adjustments during operation may often be necessary.
Web hosting: Preconfigured by the web host
With web hosting, the web host has already preconfigured all services. Customization is usually not necessary (and also not possible). The configuration has typically been tried and tested over a long period and is maintained by professionals.
Microservices vs. Monolith



Not every project needs to do what everyone else is doing and run on Kubernetes. I have the feeling that, due to the current Kubernetes hype, many mini-websites are operated with far too much overhead just because it is “state of the art”.
For some projects, it makes sense – at least in the beginning – to run them on a normal web hosting package. Even if these offers have a rather dusty image, they are often cheap and definitely still have a right to exist.

Of course, you should have a plan B in mind – if the user base of the application grows quickly, it is difficult to scale up the resources via normal web hosting.

But for websites that don’t need special software components and first have to grow, it’s worth thinking differently and swimming against the tide. In any case, I now have a website that runs fast for less than €4/month and hasn’t had a single outage so far. And if there ever is one, I don’t have to worry about fixing it – web hosting professionals with a 24-hour emergency service take care of it.
Although I would rather manage my website with containers, a free choice of software, Kubernetes and Linux commands instead of clicking through the Plesk interface, the advantages currently outweigh the disadvantages.


