Google Cloud thinks it has the answer to optimizing your company’s use of Kubernetes, saving you money and improving efficiency.
The cloud computing service has published a report on how best to run Kubernetes clusters, in the hope of educating users on the platform's full range of capabilities and how to maximize efficiency without compromising the end-user experience or the reliability of related applications.
Some of the findings of the report include the importance of setting appropriate resource requests, the struggle to balance cost and efficiency with some clusters, and how elite performers take advantage of cloud discounts.
State of Kubernetes
In what they claim is a “large-scale analysis of Kubernetes clusters”, authors Anthony Bushong, Developer Relations Engineer at Google, and Ameenah Burhan, Solutions Architect at Google, identified four “golden signals” for optimizing costs whilst maintaining workload reliability.
Anonymized data was taken from Google Kubernetes Engine (GKE) clusters and sorted according to how well each cluster performed against those signals.
It turns out that setting resource requests for your workloads is the single most important step, and the report found that many users aren't doing it. The authors say this is a problem, as "Kubernetes reclaims resources when node-pressure occurs." Even workloads that require only a minimal level of reliability still need requests set.
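For illustration, this is roughly what setting requests looks like in a standard Deployment manifest (the workload name, image, and values below are hypothetical placeholders, not taken from the report):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-app          # hypothetical workload name
spec:
  replicas: 2
  selector:
    matchLabels:
      app: example-app
  template:
    metadata:
      labels:
        app: example-app
    spec:
      containers:
        - name: web
          image: nginx:1.25  # placeholder image
          resources:
            requests:
              cpu: "250m"    # scheduler reserves a quarter of a CPU core
              memory: 256Mi  # and 256 MiB of memory on the chosen node
            limits:
              cpu: "500m"
              memory: 512Mi
```

Because the requests here are lower than the limits, this Pod would receive the Burstable QoS class; a Pod whose requests equal its limits would be Guaranteed.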
If requests aren't set, then the BestEffort Quality of Service (QoS) class is assigned to your Pods instead. These are the most vulnerable to termination if resources are scarce on a given node, which can lead to inconsistent performance and reliability issues with your workloads. What's more, when such issues occur, they can be difficult to debug.
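The QoS assignment rules behind this can be sketched in a few lines of Python. This is an illustrative reimplementation of Kubernetes' documented classification logic, not an official API; each container is represented as a plain dict with optional `requests` and `limits` keys:

```python
def qos_class(containers: list[dict]) -> str:
    """Classify a Pod's QoS the way Kubernetes does, given each
    container's 'requests' and 'limits' dicts (cpu/memory keys)."""
    requests = [c.get("requests", {}) for c in containers]
    limits = [c.get("limits", {}) for c in containers]

    # BestEffort: no container sets any request or limit at all.
    if not any(requests) and not any(limits):
        return "BestEffort"

    # Guaranteed: every container sets cpu and memory limits, and its
    # requests (which default to the limits when omitted) equal them.
    guaranteed = all(
        lim.get("cpu") and lim.get("memory")
        and all(req.get(k, lim[k]) == lim[k] for k in ("cpu", "memory"))
        for req, lim in zip(requests, limits)
    )
    if guaranteed:
        return "Guaranteed"

    # Everything else is Burstable.
    return "Burstable"
```

A Pod with no resources at all lands in BestEffort, exactly the vulnerable class the report warns about, while setting even a single request moves it up to Burstable.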
Thankfully, the GKE Workloads at Risk dashboard can easily locate workloads without requests set, as can a script using the kube-requests-checker. Once requests have been set, you can then move on to workload rightsizing. As the authors explain:
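In the same spirit as the kube-requests-checker, the check itself is simple to sketch. The hypothetical helper below (our own illustration, not the actual script) flags every container with no requests set, given Pod specs already fetched from the Kubernetes API; the field names follow the standard Pod spec:

```python
def containers_missing_requests(pods: list[dict]) -> list[str]:
    """Return 'namespace/pod/container' for every container whose
    spec sets no resource requests (i.e. a BestEffort candidate)."""
    missing = []
    for pod in pods:
        namespace = pod["metadata"].get("namespace", "default")
        pod_name = pod["metadata"]["name"]
        for container in pod["spec"]["containers"]:
            # An absent or empty 'requests' dict means nothing is reserved.
            if not container.get("resources", {}).get("requests"):
                missing.append(f"{namespace}/{pod_name}/{container['name']}")
    return missing
```

Run against a cluster dump, the output is a worklist of workloads to fix before moving on to rightsizing.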
“This golden signal is at the heart of the cost optimization journey; if requests more closely reflect reality, then the decisions Kubernetes makes using requests will be more effective.”
They conclude: "No one team alone is responsible for Kubernetes cost optimization — rather, it's a joint effort that spans developers, platform admins, and even billing and budget owners."
“We also know that lessons from these findings are not one-time fixes. Rather, they are continuous practices that you should build into your team culture over time.”