Here I would like to discuss why I consider a single-node K3s cluster to be the best option for smaller projects.
Before proceeding, let us define a “smaller project”. First of all, it is a server-side project. Next, it is a project with some back-end logic – meaning something larger than a static website. For static websites, there are easier and frequently free options – see my post here.
Additionally, a smaller project should not require high-end compute resources – in particular, no GPUs or hundreds of CPU cores.
A smaller project should also be fine with ~99.5% availability. If you are looking for better numbers, a different approach is required – which in some cases could be a multi-node K3s cluster.
Still, even with the above requirements, practically all personal projects and many early- and mid-stage startup projects qualify as “smaller projects”.
I Alternative Options
Here are the alternative options for deploying smaller projects that I will consider:
- Serverless (Such as AWS Lambda functions)
- PaaS (Platform as a Service – such as Heroku or Google App Engine)
- Cloud Container Runtimes (Such as ECS or Google Cloud Run)
- Docker Compose and Docker Swarm
- Managed Kubernetes Solutions (Such as EKS, GKE or AKS)
- Other Kubernetes Flavors
Some teams may also use bare VMs to deploy smaller projects, but I believe that in 2023 this approach has too many operational disadvantages relative to the other solutions – it is not something I can recommend, so I will not consider it below.
II Cost Considerations
Cost considerations are always extremely important for smaller projects.
Let us first calculate the cost of a single-node K3s server. Hetzner Cloud – one of the most cost-efficient options out there – offers an 8-CPU AMD x86 server with 16 GB RAM and a 240 GB disk for about $33 per month.
This should be enough for practically any smaller project out there.
Of our contenders, Docker Compose and Docker Swarm would have exactly the same cost footprint. Other Kubernetes flavors would also cost the same or more.
Managed Kubernetes solutions (such as EKS or AKS) are significantly more expensive, as they typically require at least three worker nodes, and the cloud providers offering managed Kubernetes are usually more expensive than Hetzner. The cost footprint here will therefore be in the hundreds of dollars per month.
There are three remaining contenders to discuss: Container Runtimes, PaaS and Serverless. Their costs are actually much harder to calculate – and that is part of the problem.
Let us start with PaaS pricing, as it is the easiest to discuss. Looking at Heroku pricing, an application that falls within our “smaller project” range should cost between $25 and $500 per month to run. Even at the bottom of that range, the price is not much different from the $33 per month of our K3s server – while at the top of the range, K3s has a huge advantage.
Additionally, a test or staging K3s cluster allows running multiple applications at the same time within its resource capacity – but on a PaaS, each of those extra applications would incur additional cost. We usually would not do this in production for security reasons, but it is perfectly reasonable for testing.
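To illustrate the multi-application point, here is a rough Python sketch. The $33 flat node price and the $25-per-app bottom-tier PaaS price are just the illustrative figures used in this post, not authoritative quotes:

```python
# Rough cost comparison for hosting several test/staging apps: one flat-rate
# K3s node (shared by all apps) vs a PaaS that bills per application.
# The $33 and $25 figures are the illustrative prices from this post.

K3S_NODE_MONTHLY = 33.0      # flat monthly cost of the single K3s node (USD)
PAAS_PER_APP_MONTHLY = 25.0  # assumed bottom-tier PaaS price per app (USD)

def monthly_cost(num_apps: int) -> tuple[float, float]:
    """Return (k3s_cost, paas_cost) in USD for running num_apps apps."""
    return K3S_NODE_MONTHLY, PAAS_PER_APP_MONTHLY * num_apps

for n in (1, 3, 5):
    k3s, paas = monthly_cost(n)
    print(f"{n} app(s): K3s ${k3s:.0f} vs PaaS ${paas:.0f}")
```

The K3s side stays flat until the node runs out of capacity, while the PaaS side grows linearly with each application.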
The situation with Container Runtimes and Serverless is similar to PaaS, but even harder to calculate, because prices are quoted per unit of vCPU time – which is almost impossible to predict without historical data.
As we saw with PaaS, at low usage tiers – and depending on the cloud provider – you sometimes pay nothing or almost nothing. But let us see what happens at full capacity. For example, look at the pricing for AWS ECS (Fargate): it costs $0.04 per vCPU-hour, plus some traffic costs that I will ignore for now. Assume we run 8 vCPUs on Fargate continuously for a month – that is 8 (vCPUs) × 720 (hours per month) × $0.04 (price per vCPU-hour), which comes to ~$230 per month, plus traffic overhead and the additional costs likely needed to host things like a managed database.
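The arithmetic above can be sketched as a small back-of-the-envelope calculator; the $0.04 per vCPU-hour figure is the approximate rate quoted in this post, and real bills would also include memory, traffic and storage charges:

```python
# Back-of-the-envelope Fargate-style cost for a continuously running workload,
# using the ~$0.04 per vCPU-hour figure quoted above (illustrative only; real
# bills also include memory, traffic and storage charges ignored here).

VCPU_HOUR_PRICE = 0.04   # USD per vCPU-hour (approximate, region-dependent)
HOURS_PER_MONTH = 720    # 30 days x 24 hours

def fargate_monthly_cost(vcpus: int) -> float:
    """Monthly compute cost in USD for `vcpus` vCPUs running 24/7."""
    return vcpus * HOURS_PER_MONTH * VCPU_HOUR_PRICE

print(f"8 vCPUs 24/7: ~${fargate_monthly_cost(8):.0f}/month")
```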
Again, we can see that these platforms are cheaper at smaller loads, but as usage grows they eclipse the cost of our K3s server. It is also important to mention the ever-present risk of bill shock: uncontrolled containers or lambda functions using more resources than expected (due to a bug, for example) can rack up bills with no cap.
Concluding the conversation on costs, I believe that ~$33 per month for a K3s node is a reasonable amount to pay for a scalable system where you do not have to worry about an uncontrolled and uncapped rise in costs.
III Other Cost Considerations
It is important to mention that there are a few other costs normally associated with running a smaller project. At the very least, we are talking about some sort of container image registry and S3 storage for backups.
Note that these costs are the same regardless of the chosen platform. Still, because of them, I would bump the budget expectation for running a single-node K3s instance to ~$50 per month.
IV Availability Considerations
A big question for any single-node deployment is availability. While there are various studies and analyses, it is common to estimate data-center server reliability as one day of outage per 1,000 days. I have also seen research showing 95% worst-case availability for first-year hardware (though that does not translate to 95% availability overall).
Conservatively, I would say that 99.5% availability – or about 1.83 days of downtime per year – is a reasonable estimate for a single-node system. That assumes a good recovery process is in place (including a relatively quick deployment system and off-site data backups). In practice, I usually see much better numbers than that.
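As a quick sanity check on these figures, here is a small sketch converting an availability percentage into expected downtime:

```python
# Convert an availability fraction into expected downtime per year,
# matching the ~99.5% -> ~1.83 days/year estimate used above.

def downtime_days_per_year(availability: float) -> float:
    """Expected days of downtime per year at the given availability (0..1)."""
    return (1.0 - availability) * 365.0

def downtime_hours_per_year(availability: float) -> float:
    return downtime_days_per_year(availability) * 24.0

for a in (0.995, 0.999, 0.9999):
    print(f"{a:.2%}: {downtime_days_per_year(a):.2f} days "
          f"({downtime_hours_per_year(a):.1f} hours) per year")
```

This also shows how quickly the downtime budget shrinks as you add nines – which is where multi-node setups start to earn their extra cost.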
For further reading on the subject, see a few links here, here and here that discuss the feasibility of single-node server architectures in various scenarios – arguing that, contrary to popular opinion, it is frequently not such a bad idea.
It is clear that a properly configured 3+ node Kubernetes solution (including a 3+ node K3s cluster), especially with nodes spread across different cloud availability zones, generally provides better availability guarantees. However, that comes with at least a 3x cost increase – as we saw above.
Therefore, while managed and other multi-node Kubernetes solutions are usually better in terms of availability, we can still meet the availability requirements of a smaller project with just a single-node K3s cluster.
As for PaaS, Serverless and Container Runtime solutions, their availability is defined by whatever the platform’s SLA specifies. It is also worth looking at the incident history of a specific platform: while the general expectation is that a managed service’s availability should be high, there have been major incidents during which availability fell below what we would normally expect even from a single-node K3s cluster.
V Cold Start Problem with Serverless
A known issue with Serverless, Container Runtime and PaaS types of hosting is the “cold start” problem. It occurs when there have been no requests to your application for a while and then a new one arrives: the platform must cold-start the respective containers or functions, which takes additional time and may lead to poor performance.
While there are numerous work-arounds, they are either imperfect or carry significant cost implications.
Clearly, a single-node K3s solution does not have this issue at all – another point for K3s versus the Serverless, PaaS and Container Runtime options.
VI Future Scalability
If you find that your project no longer fits a single-node K3s cluster, your options are:
- Increase the size of the node
- Move to a multi-node K3s cluster
- Move to a different, larger Kubernetes cluster – any solution out there, such as managed EKS, GKE or AKS clusters
While some changes are usually required to migrate between Kubernetes flavors, they are minimal, since the base Kubernetes platform is the same. Therefore, the operational cost of extending the application in the future – when it is no longer a “smaller project” – is quite small.
That is generally not the case with non-Kubernetes solutions. For example, projects that started as Serverless often find their cloud costs ballooning as they grow. At that point they look to migrate to a Kubernetes-based solution to better manage costs, but by then the migration takes significant effort – and time.
The situation is somewhat better with Docker Compose, Docker Swarm or Container Runtime solutions, since they at least already deal with containerized applications. Even in those cases, however, the effort is greater than migrating from K3s to a different Kubernetes flavor.
VII Platform Portability and Vendor Lock-In
It is quite clear that Serverless, Container Runtime and PaaS types of solutions come with inherent vendor lock-in. This can, of course, be overcome by migrating to a different platform – but as discussed above, that requires additional effort, which may be significant.
Note that the Docker ecosystem is also quite portable; however, with the recent licensing changes, Docker Desktop is no longer an option for many organizations. Rancher Desktop, by contrast, has a friendly Apache-2.0 license that is acceptable to practically any organization.
VIII Ease of Configuration
A big selling point for PaaS and Serverless solutions is the minimal amount of infrastructure-level configuration needed to run applications on these platforms.
There is also a popular opinion that anything related to Kubernetes has a very steep learning curve and is prohibitively hard for anyone who has not been doing DevOps for many years. I believe this opinion no longer holds true.
While building applications for Kubernetes indeed requires a certain time investment, many people in the industry who have worked with containers in other environments already know the basic building blocks. Also, as discussed above, this knowledge will likely be required at later stages of the project anyway. Therefore, I prefer to consider such learning a long-term investment that will be useful sooner or later.
Consider, for example, my recent tutorial on an end-to-end Kubernetes CD experience – with modern tooling, the overall complexity of deploying an application to Kubernetes has decreased substantially from what it was several years ago.
Based on my experience, for a smaller project and with proper guidance, it should not take a team longer than a month to acquire the skillset needed for Kubernetes deployment.
IX Summary and TL;DR
A single-node K3s cluster in the cloud has a capped cost of about $50 per month which is acceptable for most smaller projects.
For this cost it provides great portability, future scalability and an acceptable level of availability guarantees.
With all factors summarized, I believe that a K3s single-node solution is currently superior to any other option for hosting a smaller project; in particular:
- Serverless may have unpredictable, ballooning costs, suffers from the cold-start problem, comes with vendor lock-in and lacks future scalability.
- PaaS and cloud container runtimes usually have no meaningful cost advantage at lower usage tiers and a significant disadvantage at higher tiers, come with vendor lock-in and lack future scalability.
- Docker Compose and Docker Swarm are similar to K3s in many aspects but lack portability (due to licensing restrictions) and future scalability.
- Managed Kubernetes Solutions have much higher cost footprint.
- Other Kubernetes Flavors usually cannot compete with K3s in portability.
Having settled on K3s, I plan to write another blog post showing the basic setup of a single-node K3s cluster – mainly focusing on Let’s Encrypt integration and off-site backups – as practically everything else comes pre-configured out of the box.