
Deep dive: Green IT & sustainable data centers — what's working, what's not, and what's next

A comprehensive state-of-play assessment for Green IT & sustainable data centers, evaluating current successes, persistent challenges, and the most promising near-term developments.

US data centers consumed an estimated 176 TWh of electricity in 2023, representing roughly 4.4% of total national electricity demand. The Department of Energy projects that figure will reach 298 TWh by 2028, driven primarily by the explosive growth of generative AI workloads that require dense GPU clusters operating at power densities four to ten times higher than traditional compute. This trajectory places data center operators at the intersection of two conflicting imperatives: scaling infrastructure to meet surging demand while simultaneously reducing the carbon intensity of every workload they run.

Why It Matters

The environmental footprint of the data center industry extends well beyond electricity consumption. US data centers withdrew approximately 7.1 billion gallons of water in 2024 for cooling purposes, according to the Lawrence Berkeley National Laboratory. A single large hyperscale facility can consume 3 to 5 million gallons per day during peak summer months, straining local water supplies in regions already facing drought conditions. Microsoft disclosed that its global water consumption increased 34% between 2021 and 2023, attributing much of the growth to AI infrastructure expansion.

Corporate sustainability commitments have amplified the urgency. Google has pledged to run on 24/7 carbon-free energy (CFE) at every data center by 2030. Microsoft targets carbon negativity by 2030 and has committed to replenishing more water than it consumes. Amazon Web Services aims to power operations with 100% renewable energy by 2025 and achieve net-zero carbon by 2040. These commitments create cascading requirements for every organization using cloud infrastructure, because the emissions associated with cloud computing flow through to customers' Scope 3 inventories.

Regulatory pressure is intensifying in parallel. The EU Energy Efficiency Directive requires data centers above 500 kW to report key sustainability indicators including PUE, water usage effectiveness, and renewable energy share beginning in 2024. California's Title 24 energy code now applies to data centers with specific requirements for cooling efficiency and waste heat recovery. The SEC's climate disclosure rules, effective for large accelerated filers in 2026, require companies to disclose material climate risks, and for technology companies with significant data center operations, energy consumption and water use represent material exposures.

Key Concepts

Power Usage Effectiveness (PUE) remains the industry's most widely reported efficiency metric, calculated as total facility energy divided by IT equipment energy. A PUE of 1.0 would indicate perfect efficiency with all power going to computing. The global average PUE stood at approximately 1.58 in 2024, according to the Uptime Institute's annual survey. Hyperscale operators routinely achieve 1.1 to 1.2, while older enterprise data centers often operate at 1.6 to 2.0. Google reported an average PUE of 1.10 across its fleet in 2024, with its newest facilities in Finland achieving 1.08.

Water Usage Effectiveness (WUE) measures liters of water consumed per kWh of IT energy, capturing the environmental cost of evaporative cooling. Industry averages range from 1.0 to 2.0 L/kWh, but facilities using air-side economization in cool climates can achieve WUE below 0.5 L/kWh. Data centers in arid regions face growing scrutiny over water consumption, with facilities in Arizona, Texas, and Northern Virginia competing with agricultural and municipal water users.
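
Both metrics reduce to simple ratios. A minimal sketch in Python, with illustrative (not measured) figures:

```python
def pue(total_facility_kwh: float, it_equipment_kwh: float) -> float:
    """Power Usage Effectiveness: total facility energy / IT equipment energy."""
    return total_facility_kwh / it_equipment_kwh

def wue(water_liters: float, it_equipment_kwh: float) -> float:
    """Water Usage Effectiveness: liters of water consumed per kWh of IT energy."""
    return water_liters / it_equipment_kwh

# A facility drawing 11 GWh in total, 10 GWh of it at the IT equipment,
# while evaporating 18 million liters of water over the same period:
print(round(pue(11_000_000, 10_000_000), 2))  # 1.1
print(round(wue(18_000_000, 10_000_000), 2))  # 1.8 L/kWh
```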

Carbon-Free Energy (CFE) Percentage tracks the share of electricity consumed that comes from carbon-free sources on an hourly, location-matched basis. This metric is more rigorous than annual renewable energy certificate (REC) matching because it accounts for when and where clean energy is actually generated. Google's 2024 Environmental Report showed CFE percentages ranging from 96% at its Oregon campus to 39% at its Singapore facility, illustrating how geographic grid mix dramatically influences sustainability outcomes.
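
The gap between annual and hourly matching can be sketched as follows; the helper and the toy hourly series are illustrative assumptions, not any operator's accounting methodology:

```python
# Hourly, location-matched CFE accounting: consumption beyond the carbon-free
# generation available in a given hour counts as unmatched, unlike annual
# REC matching, which nets everything out over the year.
def hourly_cfe_percentage(consumption_kwh, cfe_generation_kwh):
    """Share of consumption matched hour-by-hour with carbon-free generation."""
    matched = sum(min(c, g) for c, g in zip(consumption_kwh, cfe_generation_kwh))
    return 100 * matched / sum(consumption_kwh)

# Four illustrative hours: solar covers midday but nothing overnight.
load = [100, 100, 100, 100]
solar = [0, 180, 220, 0]
print(hourly_cfe_percentage(load, solar))  # 50.0 — annual matching would claim 100%
```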

Embodied Carbon encompasses the greenhouse gas emissions associated with manufacturing, transporting, and installing data center infrastructure, including servers, networking equipment, cooling systems, and the buildings themselves. The Uptime Institute estimates that embodied carbon represents 10 to 30% of a data center's lifetime emissions, a share that grows as operational emissions decrease through renewable energy procurement. This metric is gaining attention as operators realize that refreshing server hardware every three years generates substantial upstream emissions.
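
The mechanics behind that growing share can be shown with a back-of-envelope calculation; all figures below are assumed for illustration, not vendor or Uptime Institute data:

```python
# As operational emissions fall (cleaner grid, renewable PPAs), the fixed
# embodied carbon of the hardware claims a larger slice of lifetime emissions.
def embodied_share(embodied_tco2e: float, annual_operational_tco2e: float,
                   lifetime_years: float) -> float:
    """Embodied emissions as a percentage of total lifetime emissions."""
    total = embodied_tco2e + annual_operational_tco2e * lifetime_years
    return 100 * embodied_tco2e / total

# An assumed server: 1.5 tCO2e embodied, run for 5 years.
print(round(embodied_share(1.5, 1.0, 5), 1))  # 23.1% on a carbon-intensive grid
print(round(embodied_share(1.5, 0.2, 5), 1))  # 60.0% on a mostly carbon-free grid
```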

What's Working

Hyperscale Efficiency Gains

The three major US hyperscale operators, AWS, Google Cloud, and Microsoft Azure, have demonstrated that purpose-built data centers can achieve dramatically better efficiency than the industry average. Google's custom-designed cooling systems use machine learning to optimize chiller, cooling tower, and air handler operations, reducing cooling energy by 30 to 40% compared to conventional approaches. AWS has deployed custom Graviton processors that deliver 60% better energy efficiency per compute unit than comparable x86 chips, reducing the energy cost of each workload.

Microsoft's deployment of liquid cooling at scale represents another significant advance. The company began deploying two-phase immersion cooling in production environments in 2024, submerging server motherboards in engineered dielectric fluids that boil at 50 degrees Celsius to absorb heat directly from chips. This approach eliminates the need for fans and reduces cooling energy by up to 90% compared to air cooling, achieving PUE values approaching 1.03 in test configurations. For GPU-dense AI training clusters that generate 40 to 80 kW per rack, liquid cooling has shifted from an experimental curiosity to an operational necessity.

Renewable Energy Procurement at Scale

US data center operators signed over 15 GW of new renewable energy power purchase agreements (PPAs) in 2024, making the sector the largest corporate buyer of clean energy globally. Amazon alone contracted 8.8 GW of wind and solar capacity in 2024, bringing its total renewable energy portfolio to over 28 GW. These are not merely certificate purchases; the majority represent long-term offtake agreements that directly finance the construction of new renewable generation assets.

The shift toward 24/7 carbon-free energy matching represents the next frontier. Rather than matching annual electricity consumption with RECs purchased from anywhere on the grid, 24/7 CFE requires hour-by-hour matching of consumption with carbon-free generation in the same grid region. Google, Microsoft, and Iron Mountain have joined the 24/7 Carbon-Free Energy Compact, and Google's internal analysis shows that achieving 90% hourly CFE matching requires three to four times more clean energy procurement than 100% annual matching, because it demands resources that generate during nighttime and low-wind periods.

Waste Heat Recovery

Scandinavian data centers have pioneered waste heat recovery systems that capture thermal energy from servers and feed it into district heating networks. Equinix's Helsinki facility supplies waste heat to the local district heating system, providing warmth for approximately 20,000 homes while reducing the facility's net carbon footprint. In the US, adoption has been slower due to the geographic separation between data centers and district heating infrastructure, but emerging projects in the Pacific Northwest and Northeast are demonstrating viability. Meta's facility in Luleå, Sweden, has achieved a near-zero effective carbon footprint by combining 100% renewable electricity with waste heat recovery that displaces fossil fuel heating.

What's Not Working

AI's Growing Energy Appetite

The rapid scaling of large language models and generative AI workloads is overwhelming efficiency gains. Training GPT-4 consumed an estimated 50 GWh of electricity, and inference workloads for AI services are growing at rates that outpace even aggressive efficiency improvements. The International Energy Agency estimates that AI-related data center electricity demand could grow from approximately 50 TWh in 2023 to over 150 TWh by 2026 globally. US data centers specifically are projected to require 47 GW of new power capacity by 2030, roughly the peak electricity demand of a midsized country.

NVIDIA's H100 GPU, the dominant chip for AI training, consumes 700 watts per unit. Dense AI training clusters pack 32 to 64 GPUs per rack, creating power densities of 40 to 80 kW per rack compared to 5 to 15 kW for traditional compute. This density exceeds the cooling capacity of most existing data centers and requires purpose-built infrastructure with liquid cooling, reinforced power distribution, and upgraded electrical systems. The result is a wave of new construction that generates substantial embodied carbon before a single workload runs.
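
The rack-level arithmetic is straightforward; the 25% overhead allowance below is an assumption covering host CPUs, networking, and power-conversion losses, not a vendor figure:

```python
# Power density of an assumed GPU training rack, built from the figures above.
gpu_watts = 700        # NVIDIA H100 power draw per unit
gpus_per_rack = 64     # dense configuration at the top of the cited range
overhead = 1.25        # assumed allowance for CPUs, NICs, fans, PSU losses
rack_kw = gpu_watts * gpus_per_rack * overhead / 1000
print(rack_kw)  # 56.0 kW — within the 40 to 80 kW range cited, vs 5 to 15 kW
                # for a traditional compute rack
```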

Water Stress and Community Opposition

Data center water consumption has become a flashpoint for community opposition, particularly in water-stressed regions. In The Dalles, Oregon, Google's data center complex consumed approximately 28% of the city's water supply in 2022, prompting the city council to impose a moratorium on new data center water permits. Similar conflicts have emerged in Chandler, Arizona, and Prince William County, Virginia, where residents have challenged data center expansion over water and noise concerns.

The industry's response has been mixed. Some operators have invested in closed-loop cooling systems that dramatically reduce water consumption but at higher energy cost. Others have pursued air-cooled designs that eliminate water use entirely but sacrifice cooling efficiency in hot climates. Microsoft has piloted underwater data centers (Project Natick) that use ocean water for cooling, though the approach remains experimental. The fundamental tension is that the most water-efficient cooling technologies often increase energy consumption, creating a trade-off between water and carbon impacts.

Grid Capacity Constraints

The surge in data center power demand has exposed critical limitations in US electricity grid infrastructure. Dominion Energy, the utility serving Northern Virginia, the world's largest data center market, has warned that new data center connections face wait times of three to five years due to transmission constraints. Similar bottlenecks exist in suburban Atlanta, central Texas, and the greater Phoenix area. Some operators have turned to on-site natural gas generation as a stopgap, undermining renewable energy commitments. Others are exploring small modular nuclear reactors, with Amazon acquiring a data center campus adjacent to a nuclear plant in Pennsylvania and Microsoft signing a PPA to restart the Three Mile Island Unit 1 reactor.

What's Next

Chip-Level Efficiency Revolution

The most impactful near-term development is the shift to purpose-built silicon optimized for specific workloads. Google's Tensor Processing Units (TPUs), Amazon's Trainium and Inferentia chips, and Microsoft's Maia AI accelerator each deliver two to five times better performance per watt than general-purpose GPUs for their target workloads. As these custom chips scale, they promise to bend the energy demand curve by delivering more computation per kilowatt hour. Intel's upcoming Clearwater Forest processors, built on an 18A process node, target 50% better energy efficiency than current-generation server CPUs.

Nuclear and Advanced Geothermal Partnerships

Data center operators are increasingly looking beyond wind and solar to firm, carbon-free power sources. Microsoft signed a 20-year PPA with Constellation Energy for nuclear power from the Crane Clean Energy Center (formerly Three Mile Island). Google signed an agreement with Kairos Power for small modular reactor capacity expected to come online by 2030. Amazon has invested in X-energy, a next-generation nuclear developer, and acquired nuclear-adjacent data center sites. Fervo Energy's enhanced geothermal project in Nevada began delivering power to Google's data centers in 2024, demonstrating that advanced geothermal can provide 24/7 carbon-free electricity in regions without conventional geothermal resources.

Circular Hardware and Extended Lifecycles

The industry is beginning to address embodied carbon through circular economy approaches. Google has committed to maximizing the use of recycled materials in its server hardware and extending server lifetimes from three to four years to five to six years where workload requirements permit. Dell Technologies' Luna concept demonstrates modular server design that enables component-level repair and upgrade rather than full-unit replacement. The Open Compute Project's sustainability working group is developing standardized metrics for server circularity, including recycled content percentage, design-for-disassembly scores, and component reuse rates.
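
The embodied carbon case for longer lifetimes is simple amortization; the per-server figure below is an assumed placeholder for illustration, not Google's or Dell's data:

```python
# Manufacturing emissions are fixed at purchase, so each extra year of
# service spreads them over more useful work.
def annualized_embodied(embodied_tco2e: float, refresh_years: float) -> float:
    """Embodied carbon attributed to each year of a server's service life."""
    return embodied_tco2e / refresh_years

EMBODIED = 1.5  # assumed tCO2e to manufacture and ship one server
print(annualized_embodied(EMBODIED, 3))  # 0.5 tCO2e/yr on a 3-year refresh
print(annualized_embodied(EMBODIED, 6))  # 0.25 tCO2e/yr on a 6-year refresh
```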

Action Checklist

  • Measure and report PUE, WUE, and carbon-free energy percentage for all owned and leased data center capacity
  • Evaluate liquid cooling solutions for GPU-dense AI workloads exceeding 30 kW per rack
  • Transition from annual REC matching to hourly carbon-free energy matching where grid data is available
  • Include embodied carbon in infrastructure procurement decisions by requiring lifecycle assessments from hardware vendors
  • Assess water stress at each facility location using the WRI Aqueduct tool and set site-specific WUE targets
  • Extend server refresh cycles from three to four years to five to six years for workloads that do not require latest-generation hardware
  • Engage with local utilities on grid capacity planning to avoid natural gas backup dependencies
  • Require colocation providers to disclose PUE, WUE, CFE percentage, and Scope 1/2 emissions as contract terms

FAQ

Q: What is a realistic PUE target for a new data center in the US? A: New purpose-built facilities should target PUE of 1.2 or below. Hyperscale operators routinely achieve 1.1 to 1.15 through optimized airflow management, elevated cold aisle temperatures, and free cooling maximization. Facilities supporting dense AI workloads with liquid cooling can achieve PUE below 1.1. Retrofitting existing facilities to reach these levels is significantly more expensive and may only yield improvements to 1.3 to 1.4.

Q: How should engineers evaluate the trade-off between water consumption and energy efficiency in cooling system design? A: The decision depends on local conditions. In water-stressed regions (WRI Aqueduct score above 3.0), prioritize air-cooled or closed-loop liquid cooling systems that minimize water withdrawal, even if PUE increases by 0.05 to 0.1. In water-abundant regions with carbon-intensive grids, evaporative cooling that reduces total energy consumption may yield better net environmental outcomes. Conduct a combined water-carbon impact assessment using tools like the Green Grid's Carbon and Water Calculator.
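
A back-of-envelope version of that combined assessment might look like the following sketch; the function and every input are illustrative assumptions, not outputs of the Green Grid calculator:

```python
# Compare cooling designs on both carbon and water at a given site.
def site_impacts(it_load_mwh: float, pue: float, wue_l_per_kwh: float,
                 grid_kgco2_per_kwh: float):
    """Return (tCO2e, megaliters of water) for a cooling design at a site."""
    facility_kwh = it_load_mwh * 1000 * pue
    co2_t = facility_kwh * grid_kgco2_per_kwh / 1000
    water_ml = it_load_mwh * 1000 * wue_l_per_kwh / 1_000_000
    return round(co2_t, 1), round(water_ml, 2)

# 10 GWh of annual IT load on an assumed 0.4 kgCO2e/kWh grid:
print(site_impacts(10_000, 1.2, 1.8, 0.4))   # evaporative: (4800.0, 18.0)
print(site_impacts(10_000, 1.3, 0.05, 0.4))  # dry cooling:  (5200.0, 0.5)
```

The dry design saves 17.5 ML of water per year at a cost of roughly 400 tCO2e; whether that trade is worth it depends on the local water stress score and grid carbon intensity.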

Q: What is the real carbon footprint of cloud computing for enterprise customers? A: Cloud computing emissions vary dramatically by provider and region. AWS, Google Cloud, and Azure each publish carbon footprint tools for customers, but methodologies differ. Google reports location-based and market-based emissions with hourly CFE percentages. AWS provides estimated emissions by service and region. For accurate Scope 3 accounting, request provider-specific emissions factors rather than using industry averages, and prioritize workload placement in regions with the highest CFE percentages.

Q: Are small modular nuclear reactors a realistic power source for data centers? A: SMRs remain in the development and licensing phase, with commercial deployments unlikely before 2030 at the earliest. However, existing nuclear power provides a proven, carbon-free alternative. Microsoft's PPA with Constellation for existing nuclear capacity and Amazon's acquisition of nuclear-adjacent sites demonstrate that nuclear can serve data center loads today through grid-connected agreements. SMRs may eventually provide dedicated, on-site power for remote or capacity-constrained locations, but near-term nuclear strategies should focus on existing reactor fleet offtake agreements.

Sources

  • US Department of Energy. (2024). United States Data Center Energy Usage Report. Washington, DC: DOE.
  • Uptime Institute. (2025). Global Data Center Survey: PUE and Sustainability Metrics. New York: Uptime Institute.
  • International Energy Agency. (2025). Electricity 2025: Analysis and Forecast to 2027. Paris: IEA Publications.
  • Lawrence Berkeley National Laboratory. (2024). Data Center Water Consumption Trends and Projections. Berkeley, CA: LBNL.
  • Google. (2025). 2024 Environmental Report: Progress Toward 24/7 Carbon-Free Energy. Mountain View, CA: Alphabet Inc.
  • BloombergNEF. (2025). Corporate Clean Energy Buying Surged to Record in 2024. New York: Bloomberg LP.
  • Amazon Web Services. (2025). AWS Sustainability Report 2024. Seattle, WA: Amazon.
