KubeCon + CloudNativeCon sponsored this post, in anticipation of KubeCon + CloudNativeCon EU, in Amsterdam, Aug. 13-16.

Kubernetes adoption has figured at the top of the enterprise wishlist for the last couple of years. Most companies adopt Kubernetes seeking productivity improvements, portability, resource efficiency and scale. These benefits are well documented in multiple case studies on the Kubernetes website.

What I’m more interested in, however, is how the introduction of Kubernetes into the enterprise technology stack can impact the carbon emissions associated with its cloud or private infrastructure.

Data Centers and Carbon Emissions

Data centers are projected to consume about 3% of the global power output in 2020, resulting in 100 million tons of CO2 emissions. However, not all data centers are equally responsible.

The cloud, for example, is a definite improvement over private data centers when it comes to CO2 emissions. Take Jeff Barr’s article here, which indicates an 88% reduction in carbon emissions when companies run their workloads on Amazon Web Services (AWS) rather than in their own data centers.

The reasons cited in that article are higher server utilization, a lower power usage effectiveness (PUE) and a more energy-efficient power mix in cloud data centers. Opinions differ, though: the ease with which cloud resources can be provisioned can at times lead to increased consumption and a correspondingly larger carbon footprint.

Can Kubernetes Improve on the Cloud?

More importantly, however, can Kubernetes improve on the cloud in terms of carbon emissions? And if so, how? To figure that out, let’s dig deeper into the mechanics of carbon emissions in public cloud environments and on-premises data centers.

Carbon emissions in the context of both on-premises and cloud data centers are a function of three variables: power usage effectiveness (PUE), server utilization and the carbon emissions intensity of the electricity grid they are connected to. Together, these determine how much power a given set of workloads consumes and how much CO2 is emitted to generate that power. Let’s take a closer look at each of these variables.
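As a rough mental model (this is not a formula from any of the sources cited here, and the numbers below are purely illustrative), annual emissions can be sketched as IT power × PUE × grid emissions intensity:

```python
# A rough, illustrative model of annual data center emissions.
# None of these numbers come from the article; they are placeholders.

it_power_kw = 500          # average power drawn by the servers themselves (IT load)
pue = 1.5                  # total facility power divided by IT power
grid_intensity = 0.4       # kg of CO2 emitted per kWh drawn from the grid
hours_per_year = 24 * 365

facility_energy_kwh = it_power_kw * pue * hours_per_year
annual_emissions_tonnes = facility_energy_kwh * grid_intensity / 1000

print(f"{annual_emissions_tonnes:,.0f} tonnes of CO2 per year")  # ~2,628
# Higher server utilization lets the same workloads run on fewer servers,
# shrinking the IT load; a lower PUE or a cleaner grid shrinks the other factors.
```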

Server Utilization

Server utilization numbers for cloud providers are hard to come by and can vary a great deal from one customer to the next. For example, this report cites a best-practice utilization of 70% and a worst-case utilization of 7% for cloud data centers. Cloud environments, however, have a definite edge over on-premises data centers, where utilization typically ranges from 5% to 25%.

Server utilization varies even more across individual customer environments in the cloud. Take Nordstrom, for example, which ran thousands of VMs on AWS with an average CPU utilization of 4%.

Power Usage Effectiveness

Power usage effectiveness is the ratio of the total power a data center consumes to the power consumed by the actual server infrastructure. The closer the PUE is to 1.0, the less power the data center spends on non-server overhead such as cooling and power distribution.

According to the NRDC, public cloud providers have a much lower PUE (topping out at around 1.1) than privately run data centers. This essentially means that cloud data centers consume less power to operate the same-sized infrastructure as on-premises ones.
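To make the ratio concrete, here is a minimal sketch; the 1.1 figure is the cloud number cited above, while the on-premises figure is an assumption for illustration only.

```python
# PUE = total facility power / power consumed by the IT equipment.
# A PUE of 1.0 would mean every watt goes to the servers themselves.

def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    return total_facility_kw / it_equipment_kw

# Illustrative numbers: 1,000 kW of servers in facilities drawing 1,800 kW and 1,100 kW.
print(pue(1800, 1000))   # 1.8 -- a plausible figure for a less efficient on-premises site
print(pue(1100, 1000))   # 1.1 -- the cloud figure cited above
```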

Carbon Emissions Intensity

Carbon emissions intensity is the amount of carbon emitted in generating 1 kWh of power. Since most data centers are plugged into local power grids, emissions intensity is usually dictated by the fuel mix used in generating power. However, there are exceptions to this rule, such as when cloud providers and some data centers source power from renewable energy systems set up specifically for that purpose.

Now that we have a handle on what drives carbon emissions inside data centers, let’s examine how the introduction of Kubernetes can impact these variables and in turn the carbon emissions.

How Does Kubernetes Help?

The introduction of Kubernetes into cloud environments has implications for two of the three variables highlighted in the previous section: server utilization and emissions intensity. We will examine both in the following paragraphs.

Server utilization is highlighted by the Natural Resources Defense Council (NRDC) as the single most important factor in determining the efficiency, and in turn the carbon footprint, of server infrastructure. Higher utilization means less resource wastage, fewer machines, a reduced infrastructure footprint and less power required to run it. This, in turn, leads to a reduction in the carbon emissions associated with operating that infrastructure.

Kubernetes provides significant improvements in infrastructure utilization. In fact, that is one of the top reasons enterprises look to adopt Kubernetes.

Let’s quantify these improvements using this Kubernetes case study from Nordstrom. The company operated thousands of virtual machines on Amazon Web Services with an average CPU utilization of 4%. Post-Kubernetes, the average utilization of these servers jumped to 40%.

That tenfold jump in utilization means Nordstrom can run the same workloads on roughly 1/10th of the VMs it needed pre-Kubernetes and scale down its infrastructure accordingly. Assuming emissions scale with the number of VMs, this translates into roughly a 90% reduction in the carbon emissions associated with its use of AWS VMs.
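A quick back-of-the-envelope sketch shows how the jump from 4% to 40% utilization leads to the 1/10th and ~90% figures; the fleet size and the assumption that emissions scale with VM count are ours, not the case study’s.

```python
# Back-of-the-envelope version of the Nordstrom numbers quoted above.
# Assumptions (ours, not the case study's): total CPU demand stays constant,
# the fleet size of 1,000 VMs is illustrative, and emissions scale with VM count.

vms_before = 1000
cpu_demand = vms_before * 0.04    # useful work done by the fleet at 4% utilization
vms_after = cpu_demand / 0.40     # VMs needed at 40% utilization for the same demand

print(vms_after)                  # 100.0 -> roughly 1/10th of the original fleet
print(1 - vms_after / vms_before) # 0.9   -> ~90% fewer VMs, and roughly 90% lower emissions
```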

Another variable associated with carbon emissions that Kubernetes can potentially impact is emissions intensity. Since Kubernetes is inherently portable, it allows much greater control over decisions about workload placement.

As mentioned earlier, emissions intensity varies across cloud provider regions, since each region is usually plugged into the local power grid. IT decision-makers (ITDMs) can use open emissions APIs to evaluate regions based on their emissions intensity and make workload placement decisions accordingly.

The Low Carbon Kubernetes Scheduler is an interesting project in this context: it makes scaling decisions based on emissions intensity across regions. There are obvious benefits to this approach, e.g. scaling a workload into a cloud provider region designated as carbon-neutral, or choosing a region with a lower emissions intensity, e.g. California (221g of CO2 per kWh), over one with a higher intensity, e.g. Ohio (387g).
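To illustrate the idea (this is not the scheduler’s actual code), here is a minimal sketch of picking the region with the lowest grid intensity; the AWS region names and hardcoded values are assumptions for illustration only.

```python
# Minimal sketch of choosing a deployment region by grid carbon intensity.
# The intensity values are the illustrative figures quoted above (g CO2 per kWh);
# the AWS region names are our own mapping, and a real implementation would
# pull live values from an emissions API instead of hardcoding them.

region_intensity = {
    "us-west-1 (California)": 221,
    "us-east-2 (Ohio)": 387,
}

def greenest_region(intensities: dict) -> str:
    # Pick the region whose grid emits the least CO2 per kWh.
    return min(intensities, key=intensities.get)

print(greenest_region(region_intensity))   # us-west-1 (California)
```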

Conclusion

Data centers are projected to consume 3% of the global power output in 2020, resulting in 100 million tons of CO2 emissions. Cloud data centers operated by hyperscale IaaS providers score much lower on the emissions scale.

However, there is certainly room for a lot of improvement.

In this blog post, we reviewed the impact that Kubernetes can have on the two most important variables associated with carbon emissions in cloud and on-premises data centers: server utilization and emissions intensity. Kubernetes significantly improves server utilization, leading to a smaller infrastructure footprint and a corresponding reduction in carbon emissions. Portability and greater control over workload placement also allow ITDMs to pick and choose regions based on emissions intensity and carbon footprint.

source: https://thenewstack.io/how-kubernetes-can-help-reduce-the-clouds-carbon-footprint/