Kubernetes Stands on Nodes
Kubernetes does a great job orchestrating application containers. But in order to run, containers need to be assigned to nodes. And at the end of the day it's the nodes (i.e. virtual or physical machines) - their sizes and prices - that define the performance, reliability, and cost of our clusters.
Yes, there’s networking too, of course. And storage matters. A lot. But still - crafting the perfect cluster starts with choosing the correct nodes for the job.
Just the Right Nodes
So how do we go about this? How do we choose the right nodes that will give us the ultimate bang for the buck? With hundreds of options offered by all the major cloud providers, it definitely isn't an easy choice. When I was first asked by a client to define the VM sizes for a production cluster we were building, I was so tempted to say: “I guess 6 EC2 t2.xlarge instances will do the job…” But at the end of the day I'm an engineer - which means I'm not paid to guess. I'm paid to provide data-driven answers. Moreover, we had SLAs to adhere to.
So we needed to make sure our apps would have all the resources they need without going beyond the budget.
The solution:
1) run performance tests with the peak expected load
2) calculate the required node sizes from the load test results (see the worked example after this list)
3) factor the instance prices into the calculations
4) provide a table with all the estimates - or else the project won't get launched.
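To make steps 2 and 3 concrete, here's a back-of-the-envelope calculation with purely hypothetical numbers. Say the load tests show each replica needs 0.5 vCPU and 1 GiB of memory at peak, and we expect 40 replicas - that's 20 vCPU and 40 GiB in total. An m5.2xlarge offers 8 vCPU and 32 GiB; reserving roughly 10% for the kubelet, system daemons, and pod overhead leaves about 7 vCPU and 28 GiB allocatable per node. CPU needs ceil(20 / 7) = 3 nodes, memory needs ceil(40 / 28) = 2, so CPU is the binding constraint: 3 nodes, plus whatever headroom the SLAs demand. Multiply that by the instance type's hourly price (around $0.38/hour on-demand for m5.2xlarge at the time of writing) and you have the first row of the table from step 4.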
The load tests were not easy, but at least they were fun to build. The whole calculation process, though - get the capacities of instance sizes, get their prices, account for pod overhead, account for each workload's scaling factor, define which workloads will run where… Ugh!
Thankfully, this is much easier today thanks to node size calculators like the one built by LearnK8s. The calculator is built on actual data ingested from the major cloud providers and lets you estimate node efficiency and total costs - given you know how many pods you need to run and how many resources each of them will need. At the start of a new project rough estimations are usually all we have - and the node calculator allows us to plan for the reality we expect to encounter.
Off to a Great Start
But, alas, the actual reality very rarely plays out as expected. No matter how well we prepare - we need to be able to adapt continuously. That’s exactly why we switched from waterfall to agile in software development and later embraced DevOps practices to support small batch delivery and fast iterations on the infrastructure side. In just the same way - the initial calculation of node type efficiency is a great starting point, but the actual optimization gains can only be achieved after we start running the system in production and encounter real-life scenarios.
One of the reasons for that is that instance type calculators have certain inherent limitations.
The Limitations of Node Calculators
- Node calculators are static.
They require us to define the amount of resources needed by the maximum (or average) number of replicas. These definitions are based on our estimations or - in the best case - on the results of performance tests. They don't account for variation, seasonality, or unexpected peaks.
- Node calculators assume homogeneous workloads.
They won't allow us to define more than one type of pod per node. This limitation is intentional. The default strategy of the Kubernetes scheduler is to place pods on any node that has spare resources, which means the actual combination of pod types on a node is totally unpredictable - unless we define very strict affinity and placement constraints (see the sketch after this list), in which case we're giving up on automated bin packing and consciously sacrificing efficiency.
- Node calculators don't account for cloud discounts.
Smart utilization of cloud provider discounts (e.g. using Reserved and Preemptible instances) is an important part of any cost optimization strategy - one that is also shaped by our reliability requirements. The generic nature of node calculators doesn't allow them to factor these in - and thus limits their ability to pinpoint the exact instance types we should be using.
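To illustrate the trade-off, here is a minimal sketch of the kind of strict placement constraints mentioned above: a hypothetical Deployment pinned to a dedicated node group via a nodeSelector and spread one replica per node via pod anti-affinity (all names and labels here are made up):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: payments              # hypothetical workload
spec:
  replicas: 3
  selector:
    matchLabels:
      app: payments
  template:
    metadata:
      labels:
        app: payments
    spec:
      # Pin the pods to a dedicated node group (label is hypothetical)
      nodeSelector:
        workload-class: payments
      affinity:
        podAntiAffinity:
          # Hard rule: never co-locate two replicas on the same node
          requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchLabels:
                  app: payments
              topologyKey: kubernetes.io/hostname
      containers:
        - name: payments
          image: payments:1.0  # placeholder image
```

Constraints like these make the node composition predictable enough for a calculator - but every hard rule we add takes freedom away from the scheduler's bin packing.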
Improving the Calculations with JIT Node Provisioning
One way to overcome the limitations I’ve outlined is to ditch the statically-sized autoscaling groups and switch to Just-In-Time node provisioning like that provided by the Karpenter project. Yes, we’d still want to start with a node calculator to estimate the expected costs and design the initial NodePool config. And after that we can rely on the dynamic node auto-provisioning to supply the best nodes for the job.
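For reference, here's a minimal sketch of what such a NodePool might look like (Karpenter v1 API on AWS; the pool name, instance categories, and CPU limit are illustrative placeholders, not recommendations):

```yaml
apiVersion: karpenter.sh/v1
kind: NodePool
metadata:
  name: general-purpose        # hypothetical pool name
spec:
  template:
    spec:
      nodeClassRef:
        group: karpenter.k8s.aws
        kind: EC2NodeClass
        name: default
      requirements:
        # Let Karpenter mix spot and on-demand capacity
        - key: karpenter.sh/capacity-type
          operator: In
          values: ["spot", "on-demand"]
        # Constrain the search space to general-purpose families
        - key: karpenter.k8s.aws/instance-category
          operator: In
          values: ["m", "c", "r"]
        - key: kubernetes.io/arch
          operator: In
          values: ["amd64"]
  # Cap the total capacity this pool may provision
  limits:
    cpu: "100"
  disruption:
    # Consolidate nodes when they become empty or underutilized
    consolidationPolicy: WhenEmptyOrUnderutilized
```

Given a pool like this, Karpenter picks the cheapest instance types that fit the currently pending pods - instead of us hand-picking sizes up front.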
That is, of course, only if we've defined the resource requests and limits for our containers correctly. And herein lies the hidden part of this puzzle!
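As a reminder, those requests and limits live in each container's spec - the scheduler (and Karpenter) bin-pack against the requests of pending pods. The values below are purely illustrative; they should come from measurements, not guesses:

```yaml
# Fragment of a container spec - the numbers are hypothetical
resources:
  requests:
    cpu: "500m"        # what scheduling and node provisioning are based on
    memory: "512Mi"
  limits:
    memory: "512Mi"    # hard ceiling: the container is OOM-killed above it
```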
Continuous Optimization with PerfectScale
Trouble is - all our initial estimations of the resource needs of the containers we put on the nodes are based on test data in the best case, and on simple guesswork in the majority of cases.
These estimations don't reflect actual production traffic, nor are they correlated with HPA configs. Node calculations are great for the initial estimation, but they're worthless for further optimization - which has to be done continuously. And that's where PerfectScale completes the puzzle! By continuously monitoring the actual resource utilization of each container scheduled onto your cluster and correlating it with the number of replicas and the scheduling constraints, we can provide data-driven recommendations for container resource allocation. Once our containers get assigned just the right amount of resources - we can ensure we get the optimal performance at the lowest possible cost. This also results in the node provisioner (Karpenter) doing a much better job of selecting the correct instance types for the job.
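The HPA correlation deserves a word of explanation: with a resource metric, the HPA computes utilization as a percentage of the container's request, so mis-sized requests skew every scaling decision. A standard autoscaling/v2 config for a hypothetical Deployment looks like this:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web                    # hypothetical workload
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web
  minReplicas: 2
  maxReplicas: 20
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          # Percentage of the CPU *request*: inflate the request and the
          # HPA scales out too late; shrink it and it scales out too early.
          averageUtilization: 70
```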
With PerfectScale InfraFit you also get full visibility into your actual node resource usage, so you can further improve node efficiency by limiting your NodePools to the nodes with the best utilization.
And of course with PerfectScale automation you get autonomous continuous optimization that makes sure your clusters stay optimized while you deliver new features to your customers.
Summing Things Up
The LearnK8s Instance Calculator is a great tool for selecting the initial node types to add to your Kubernetes cluster. It's also valuable for estimating the cost of running your workloads in production. But organizations that are serious about reducing their cloud costs without compromising reliability need to take the next step toward continuous optimization of their Kubernetes environments with the help of PerfectScale.