AI & Emerging Tech · 12 min read

Myths vs. realities: Compute, chips & energy demand — what the evidence actually supports

Myths vs. realities, backed by recent evidence and practitioner experience. Focus on KPIs that matter, benchmark ranges, and what 'good' looks like in practice.

Global data centre electricity consumption reached 415 TWh in 2024 (1.5% of worldwide electricity) and is projected to more than double to 945 TWh by 2030 (3% of global electricity), with the United States and China accounting for nearly 80% of that growth. Individual AI training runs could require 1 GW of power by 2028 and 8 GW by 2030, equivalent to the output of multiple nuclear power plants (IEA, 2025; RAND Corporation, 2025).

Why It Matters

The energy footprint of compute infrastructure has become a first-order sustainability concern for procurement professionals, sustainability leads, and policymakers. What was once a niche issue—data centres consuming modest percentages of grid electricity—has transformed into a strategic challenge as generative AI, cryptocurrency mining, and cloud computing drive exponential demand growth. For UK-based organisations, understanding this landscape is critical for Scope 2 and Scope 3 emissions accounting, regulatory compliance, and strategic technology decisions.

The numbers are staggering. US data centres consumed 183 TWh in 2024—4.4% of national electricity—with projections suggesting 426 TWh by 2030, a 133% increase (Congressional Research Service, 2025). AI-specific servers in the US alone consumed 53–76 TWh in 2024, projected to reach 165–326 TWh by 2028 (Lawrence Berkeley National Lab, 2024). At the chip level, power draw per GPU has escalated from 400 watts in 2022 to 700 watts for state-of-the-art generative AI chips in 2023, with next-generation Nvidia Blackwell-class GPUs reaching 1,200 watts in 2024.

For procurement teams, these trends have direct implications. Hyperscaler capital expenditure reached approximately $200 billion in 2024, with Google alone committing $75 billion to AI infrastructure in 2025. SoftBank, OpenAI, Oracle, and MGX have announced $500 billion in US data centre investments over four years. This spending translates to contracts, vendor relationships, and Scope 3 emissions that procurement must evaluate, measure, and report. The UK's energy grid, already under stress, faces localised impacts as data centre clusters compete for electricity with residential and industrial consumers.

Yet the discourse around compute energy is plagued by myths—claims that efficiency gains will solve the problem, that renewable energy commitments eliminate carbon footprints, or that AI's energy consumption is negligible compared to other sectors. The evidence tells a more nuanced story.

Key Concepts

Power Usage Effectiveness (PUE): The ratio of total facility energy to IT equipment energy. A PUE of 1.0 would indicate all energy goes to computing; typical values range from 1.1 (hyperscale) to 2.0+ (legacy enterprise). Improvements in PUE have historically offset some demand growth, but this lever is approaching physical limits.
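
The ratio above is straightforward to compute once a facility reports both figures; a minimal sketch (the example energy values are illustrative, not drawn from any named facility):

```python
def pue(total_facility_kwh: float, it_equipment_kwh: float) -> float:
    """Power Usage Effectiveness: total facility energy / IT equipment energy."""
    if it_equipment_kwh <= 0:
        raise ValueError("IT equipment energy must be positive")
    return total_facility_kwh / it_equipment_kwh

# A facility drawing 1,200 MWh in total while its IT load uses 1,000 MWh:
print(round(pue(1_200_000, 1_000_000), 2))  # 1.2, hyperscale-grade
```

A value of 1.2 means 20% of the facility's energy goes to cooling, power conversion, and other overhead rather than computing.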

Compute Intensity vs. Efficiency: AI computing capacity (measured in FLOPS, floating-point operations per second) has grown 50–60% quarter over quarter since Q1 2023, while chip efficiency doubles only every 2.5–3 years (MIT Technology Review, 2025). Through 2030, demand for compute is projected to rise 4–5× per year, far outpacing efficiency gains. This arithmetic means total energy consumption increases despite per-operation efficiency improvements.
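
The arithmetic can be made concrete. Assuming mid-range values from the figures above (demand growing ~4.5× per year, efficiency doubling every ~2.75 years), net energy consumption still grows several-fold annually:

```python
# Illustrative compounding, using assumed mid-range values from the text:
# demand grows ~4.5x per year; chip efficiency doubles every ~2.75 years.
demand_growth = 4.5
efficiency_gain_per_year = 2 ** (1 / 2.75)   # ~1.29x less energy per op, per year

net_energy_growth = demand_growth / efficiency_gain_per_year
print(f"Net annual energy growth: {net_energy_growth:.1f}x")  # ~3.5x
```

Even with efficiency improving on schedule, total energy demand under these assumptions still multiplies roughly 3.5-fold each year.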

Scope 2 and Scope 3 Emissions: Scope 2 covers electricity purchased for operations; Scope 3 includes embodied carbon in equipment, data transmission, and end-user device emissions. For cloud customers, Scope 3 from cloud computing represents a growing and often poorly measured category. Data quality challenges—inconsistent methodologies across providers—complicate accurate reporting.

Power Purchase Agreements (PPAs): Long-term contracts for renewable electricity that hyperscalers use to claim carbon neutrality. However, temporal and locational matching matters: purchasing solar PPAs for 24/7 data centre operations does not eliminate nighttime fossil fuel consumption unless paired with storage or around-the-clock clean energy matching.
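
The gap between annual averaging and hourly matching can be illustrated with a toy load profile (all numbers hypothetical):

```python
def hourly_match_pct(load_kwh, renewable_kwh):
    """Share of consumption matched hour-by-hour by clean generation."""
    matched = sum(min(l, r) for l, r in zip(load_kwh, renewable_kwh))
    return 100 * matched / sum(load_kwh)

# Toy day: flat 100 kWh/h data centre load, solar generation 08:00-18:00 only.
load = [100] * 24
solar = [0] * 8 + [240] * 10 + [0] * 6   # 2,400 kWh total == total daily load

# Annual-style accounting calls this "100% renewable"...
print(sum(solar) >= sum(load))        # True
# ...but hourly matching exposes the overnight fossil gap.
print(f"{hourly_match_pct(load, solar):.1f}%")  # 41.7%
```

The same certificate volume that supports a "100% renewable" claim under annual averaging covers well under half of consumption when matched hour by hour.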

| KPI | Benchmark Range | What "Good" Looks Like |
| --- | --- | --- |
| Power Usage Effectiveness (PUE) | 1.1–2.0 | <1.2 for hyperscale, <1.5 for enterprise |
| Carbon intensity (gCO₂e/kWh) | 200–800 | <100 with 24/7 clean energy matching |
| GPU power draw per chip | 400–1,200 W | Optimise workload scheduling to reduce peak utilisation |
| Scope 3 cloud emissions accuracy | ±30–50% uncertainty | <20% uncertainty with provider-specific data |
| Renewable energy % (claimed) | 50–100% | 100% with hourly matching, not annual averaging |
| Water Usage Effectiveness (WUE) | 1.0–2.5 L/kWh | <1.5 L/kWh indicates efficient cooling |
| Training run energy consumption | 10–100 MWh | Benchmark against model capability gains |

What's Working

Hyperscale Efficiency Leadership

Hyperscale data centres operated by Google, Microsoft, Amazon, and Meta achieve PUE values of 1.1–1.2, far superior to legacy enterprise facilities at 1.5–2.0. This efficiency stems from purpose-built facilities, advanced cooling systems, and workload optimisation. Google's 2024 sustainability report claims 90%+ hourly carbon-free energy matching in several regions, representing the gold standard for renewable integration.

Example 1: Google's Carbon-Intelligent Computing — Google shifts deferrable computing workloads (batch processing, non-urgent training runs) to times and locations with the highest carbon-free energy availability. This temporal and spatial load-shifting achieves 10–15% additional emissions reductions beyond renewable procurement alone. For procurement professionals evaluating cloud providers, such capabilities represent meaningful differentiators in Scope 3 management.
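
The core idea behind carbon-intelligent scheduling is simple: given an hourly carbon-intensity forecast, place deferrable work in the cleanest hours. A minimal sketch (the intensity curve and scheduler are hypothetical, not Google's implementation):

```python
def schedule_deferrable(job_hours: int, carbon_intensity: list) -> list:
    """Pick the lowest-carbon hours of the day for a deferrable batch job."""
    ranked = sorted(range(len(carbon_intensity)), key=lambda h: carbon_intensity[h])
    return sorted(ranked[:job_hours])

# Hypothetical hourly grid intensity (gCO2e/kWh) with a midday solar dip.
intensity = [420, 410, 400, 390, 380, 370, 330, 280,
             210, 150, 110, 90, 85, 95, 130, 190,
             260, 330, 400, 430, 440, 445, 440, 430]

print(schedule_deferrable(4, intensity))  # [10, 11, 12, 13]
```

A four-hour batch job lands in the 10:00–14:00 solar window rather than running overnight on a fossil-heavy grid; real schedulers add spatial shifting across regions and deadline constraints on top of this.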

Chip Efficiency Improvements

Modern AI chips are 99% more efficient than 2008 models on a power-per-computation basis (MIT Technology Review, 2025). GPU efficiency doubles every 2.5–3 years, extending Moore's Law dynamics into the AI era. Nvidia's Hopper and Blackwell architectures achieve significant performance-per-watt improvements over previous generations, meaning that equivalent AI capabilities require less energy than they would have with older hardware.

Example 2: Microsoft's Custom AI Chips (Maia 100) — Microsoft developed custom AI accelerators optimised for Azure workloads, achieving superior energy efficiency compared to general-purpose GPUs for specific inference tasks. The strategy—building application-specific silicon rather than relying solely on commercial GPUs—represents a procurement consideration for organisations with sufficient scale to influence hardware development or negotiate access to optimised infrastructure.

Renewable Energy Commitments

Major cloud providers have committed to 100% renewable energy, with Amazon Web Services, Google Cloud, and Microsoft Azure leading corporate renewable procurement globally. These commitments drive substantial renewable capacity additions—hyperscalers collectively represent the largest corporate purchasers of clean energy worldwide.

Example 3: Amazon's Clean Energy Investments — Amazon has invested in over 500 renewable energy projects globally, totalling 28 GW of capacity. For procurement professionals, AWS's renewable commitments translate to lower Scope 3 emissions when cloud workloads are attributed appropriately. However, evaluation requires scrutiny of matching methodology—annual averaging versus hourly matching produces very different carbon outcomes.

What's Not Working

Efficiency Cannot Outpace Demand

The fundamental challenge: demand growth exceeds efficiency gains by a factor of 4–5× annually. Total server energy use tripled from 2014 to 2023. GPU-accelerated AI servers grew from less than 2 TWh in 2017 to over 40 TWh in 2023 (MIT Technology Review, 2025). The RAND Corporation projects that AI data centres alone could require 327 GW of power capacity by 2030—equivalent to the entire electricity generation capacity of several European countries combined.

This arithmetic invalidates the claim that efficiency improvements will contain energy growth. While efficiency reduces energy per operation, the explosion in operations (training larger models, deploying AI inference at scale, expanding cloud services) overwhelms efficiency gains. Procurement cannot rely on technology efficiency to solve the emissions challenge.

Localised Grid Strain

Global percentages obscure regional crises. Northern Virginia hosts approximately 4,000 MW of data centre capacity—the world's largest market—creating infrastructure strain that has driven 20% electricity rate increases in the PJM region for 2025. Ireland's data centres consume 21–22% of national electricity, potentially reaching 32% by 2026. Five or more US states see data centres consuming over 10% of state electricity, creating localised political and infrastructure conflicts.

For UK procurement, this trend suggests that data centre geography matters for both resilience and regulatory risk. Regions with data centre concentration face permitting constraints, grid interconnection delays, and community opposition that can affect service availability and cost.

Renewable Matching Gaps

Claims of "100% renewable" energy often rely on annual averaging—purchasing enough renewable certificates to match annual consumption—rather than hourly or real-time matching. A data centre operating 24/7 but backed by solar PPAs still draws fossil-fuel-generated electricity at night unless storage or alternative clean sources fill the gap.

Only a minority of facilities achieve true 24/7 carbon-free energy matching. Google's Carbon-Free Energy percentage varies from 90%+ in favourable regions to 50–60% in fossil-heavy grids. Microsoft's "100% renewable by 2025" commitment similarly encompasses a range of matching quality across facilities. Procurement professionals should request location-specific carbon intensity data rather than accepting corporate-wide averages.
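
To see why corporate-wide averages are insufficient, consider how a headline figure is built from regional data (all regions and numbers below are hypothetical, for illustration only):

```python
# Hypothetical per-region data a provider might supply on request.
regions = {
    "europe-north": {"load_gwh": 50, "cfe_pct": 92},
    "asia-east":    {"load_gwh": 80, "cfe_pct": 55},
    "us-central":   {"load_gwh": 70, "cfe_pct": 78},
}

total_load = sum(r["load_gwh"] for r in regions.values())
corporate_avg = sum(r["load_gwh"] * r["cfe_pct"] for r in regions.values()) / total_load
print(f"Corporate-wide CFE: {corporate_avg:.0f}%")  # 72%
# The single headline figure hides a 37-point spread between regions.
```

A workload hosted in the fossil-heavy region carries nearly twice the emissions implied by the corporate average, which is exactly why location-specific data matters for Scope 3 accounting.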

Water Consumption Overlooked

Data centres consumed 17 billion gallons of water in the US during 2023, with hyperscale and colocation facilities representing 84% of usage (Congressional Research Service, 2025). Water-cooled systems offer energy efficiency advantages but create resource competition in water-stressed regions. Liquid cooling—emerging as necessary for high-density AI racks—requires careful water management planning.

For UK facilities, water consumption may seem less acute than in arid regions, but environmental permitting and corporate water stewardship commitments increasingly require measurement and disclosure.

Key Players

Established Leaders

  • Nvidia — Dominant GPU supplier for AI training, Blackwell architecture (1,200W chips) defines energy benchmarks for next-generation systems
  • Google Cloud — Industry-leading carbon-free energy matching (90%+ in some regions), carbon-intelligent computing capabilities
  • Microsoft Azure — Custom AI chips (Maia 100), aggressive renewable commitments, carbon-negative by 2030 target
  • Amazon Web Services — Largest corporate renewable energy purchaser globally, 500+ clean energy projects, 28 GW capacity

Emerging Startups

  • Crusoe Energy — Uses stranded natural gas for data centre operations, reducing methane emissions while providing compute
  • Lancium — Flexible data centre operations that can absorb excess renewable energy and curtail during grid stress
  • Applied Digital — Building data centres optimised for high-performance computing with renewable energy integration
  • Cerebras — Wafer-scale AI chips designed for efficiency at model training scale, alternative to GPU-centric architectures

Key Investors & Funders

  • Brookfield Asset Management — Major data centre infrastructure investor with renewable energy integration focus
  • Digital Realty — Publicly traded data centre REIT with sustainability commitments across global portfolio
  • QTS Realty Trust — Data centre operator with 100% renewable energy commitment, now part of Blackstone
  • DigitalBridge — Infrastructure investment firm with data centre and fibre assets across multiple regions

Action Checklist

  • Request location-specific carbon intensity data (gCO₂e/kWh) from cloud providers rather than accepting corporate averages
  • Evaluate renewable energy matching methodology—hourly matching delivers substantially lower emissions than annual averaging
  • Include Scope 3 cloud computing emissions in carbon accounting with appropriate uncertainty ranges
  • Assess water usage effectiveness (WUE) for data centres in water-stressed regions
  • Benchmark AI workload energy consumption against capability gains—not all training runs are created equal
  • Consider geographic diversification to reduce concentration risk in grid-strained regions
  • Negotiate sustainability reporting requirements in cloud procurement contracts
  • Monitor regulatory developments (EU Energy Efficiency Directive data centre provisions, UK net zero commitments)

FAQ

Q: Will efficiency improvements solve the data centre energy problem? A: No. While chip efficiency doubles every 2.5–3 years, compute demand is growing 4–5× faster annually through 2030. Total server energy tripled from 2014 to 2023 despite massive efficiency gains. The arithmetic means absolute consumption continues rising even as per-operation efficiency improves. Procurement cannot rely on efficiency alone; demand management and renewable integration are necessary complements.

Q: How should procurement evaluate cloud provider sustainability claims? A: Request four data points: (1) Location-specific carbon intensity for the regions hosting your workloads, not corporate averages; (2) Renewable energy matching methodology—hourly matching is superior to annual averaging; (3) PUE for specific facilities; (4) Scope 3 methodology and data quality indicators. Providers offering only aggregated corporate statistics are likely obscuring variation across facilities.

Q: What is the actual carbon footprint of AI model training? A: Training a large language model like GPT-4 consumes an estimated 50–100 GWh, equivalent to 10,000–20,000 tonnes CO₂e depending on grid carbon intensity. Inference (using trained models) increasingly dominates total AI energy consumption as models deploy at scale. A single ChatGPT query consumes approximately 0.3–0.34 Wh (Epoch AI, 2025). At billions of daily queries, inference energy now exceeds training energy for deployed systems.

Q: How do UK grid carbon intensity and data centre locations affect Scope 2 emissions? A: UK grid carbon intensity averaged 180–220 gCO₂e/kWh in 2024, varying significantly by time of day and season. Data centres operating in regions with higher renewable penetration (Scotland) achieve lower carbon intensity than those in southeastern England. For organisations with flexibility in workload placement, geographic optimisation can reduce Scope 2 emissions by 20–40%.
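
The geographic effect described above is a direct multiplication of energy by grid intensity; a sketch using illustrative intensities within the ranges quoted (the specific figures are assumptions, not measured UK values):

```python
def scope2_tonnes(energy_mwh: float, grid_gco2_per_kwh: float) -> float:
    """Location-based Scope 2 emissions in tonnes CO2e."""
    return energy_mwh * 1000 * grid_gco2_per_kwh / 1e6

# The same hypothetical 10 GWh annual workload on two illustrative grid intensities.
southeast = scope2_tonnes(10_000, 220)   # 2,200 tCO2e
scotland = scope2_tonnes(10_000, 150)    # 1,500 tCO2e

saving = 100 * (southeast - scotland) / southeast
print(f"{saving:.0f}% Scope 2 reduction from workload placement")  # 32%
```

With these assumed intensities the reduction falls squarely in the 20–40% range cited above, achieved purely by where the workload runs.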

Q: What regulatory developments should procurement professionals monitor? A: The EU Energy Efficiency Directive includes specific data centre reporting requirements effective 2024, requiring disclosure of energy consumption, PUE, and renewable percentage. The UK government is developing similar frameworks aligned with net zero commitments. Corporate sustainability reporting standards (CSRD in EU, ISSB globally) increasingly require Scope 3 disclosure including cloud computing. Procurement contracts should anticipate these requirements with appropriate data access provisions.
