Cybersecurity & Digital Trust · 12 min read

EU AI Act vs US AI executive order vs China AI regulation: governance frameworks compared

A head-to-head comparison of the EU AI Act, US executive order on AI safety, and China's AI regulations covering risk classifications, compliance requirements, enforcement mechanisms, and implications for global AI deployment.

Why It Matters

By 2026, more than 60 percent of the global population will live under some form of AI-specific regulation, yet no two major jurisdictions have chosen the same governance model (OECD, 2025). The EU AI Act entered full enforcement in August 2025, with fines reaching €35 million or 7 percent of global turnover, whichever is higher. The United States issued Executive Order 14110 on AI safety in October 2023, followed by a patchwork of sector-specific guidance and a 2025 Congressional push for binding federal legislation. China, meanwhile, has enacted three overlapping regulations since 2022, covering algorithmic recommendations, deep synthesis (deepfakes), and generative AI, enforced by the Cyberspace Administration of China (CAC). For any organization deploying AI across borders, whether for sustainability analytics, climate risk modeling, or supply chain optimization, understanding these frameworks is not optional: compliance missteps can trigger market access bans, multimillion-euro penalties, and reputational damage. Stanford HAI's 2025 AI Index found that multinational enterprises now spend an average of $2.4 million annually on AI governance and compliance, a figure that has tripled since 2022 (Stanford HAI, 2025).

Key Concepts

Risk-based classification is the regulatory architecture of the EU AI Act, which sorts AI systems into four tiers: unacceptable risk (banned), high risk (strict obligations), limited risk (transparency duties), and minimal risk (no requirements). The U.S. and China do not use a comparable unified tier system; instead, both apply differentiated rules by application domain.
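The four-tier structure above maps naturally onto a lookup table. The sketch below is purely illustrative: the use-case labels and their tier assignments are simplified assumptions loosely inspired by the Act's Annex III examples, not a legal classification.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "banned"
    HIGH = "strict obligations"
    LIMITED = "transparency duties"
    MINIMAL = "no requirements"

# Simplified, non-exhaustive mapping of use-case categories to tiers.
# Real classification requires a case-by-case legal assessment.
USE_CASE_TIERS = {
    "social_scoring": RiskTier.UNACCEPTABLE,
    "credit_scoring": RiskTier.HIGH,
    "critical_infrastructure": RiskTier.HIGH,
    "chatbot": RiskTier.LIMITED,
    "spam_filter": RiskTier.MINIMAL,
}

def classify(use_case: str) -> RiskTier:
    """Return the risk tier for a known use case, defaulting to minimal."""
    return USE_CASE_TIERS.get(use_case, RiskTier.MINIMAL)

print(classify("credit_scoring").value)  # strict obligations
```

Defaulting unknown use cases to minimal risk is itself a modeling choice; in practice, unclassified systems should be escalated for review rather than assumed compliant.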

Conformity assessment refers to the process by which an AI system is evaluated against regulatory requirements before deployment. Under the EU AI Act, high-risk systems must undergo third-party conformity assessments or self-assessments depending on the use case. China requires security assessments and algorithm filing for generative AI services before public release.

Algorithmic transparency encompasses obligations to disclose how AI systems make decisions. The EU mandates human-readable explanations for high-risk systems. China's algorithm recommendation regulation (effective March 2022) requires companies to disclose algorithmic logic to users and file algorithms with the CAC. The U.S. approach relies primarily on sector-specific disclosure rules rather than a horizontal mandate.

Extraterritorial reach describes whether a regulation applies to organizations outside the jurisdiction. The EU AI Act applies to any provider placing an AI system on the EU market regardless of where the company is headquartered. China's regulations apply to services offered within the PRC. U.S. executive orders bind federal agencies and contractors but lack direct extraterritorial enforcement.

Foundation model obligations are a newer regulatory layer targeting large-scale general-purpose AI systems. The EU AI Act imposes transparency and documentation requirements on all general-purpose AI models (GPAI) and additional obligations on models with systemic risk. China requires generative AI providers to conduct security assessments and obtain approval before launch. The U.S. EO 14110 required developers of dual-use foundation models to report safety test results to the Department of Commerce.

Head-to-Head Comparison

| Dimension | EU AI Act | US AI Executive Order & guidance | China AI regulations |
| --- | --- | --- | --- |
| Legal status | Binding regulation; phased enforcement began February 2025, full application August 2025 | Executive order (non-statutory); sector agencies issuing binding guidance; federal legislation pending | Three binding regulations (algorithm recommendation, deep synthesis, generative AI) plus CAC enforcement notices |
| Risk classification | Four-tier risk pyramid (unacceptable, high, limited, minimal) | No unified tier; risk assessed by sector regulators (FDA, NHTSA, FTC, SEC) | Domain-specific; generative AI requires pre-launch security assessment; recommendation algorithms must be filed |
| Scope | All AI systems placed on the EU market | Federal agencies and contractors directly; private sector via sector regulators | AI services offered within mainland China |
| Enforcement body | EU AI Office + national competent authorities | No single AI regulator; NIST, FTC, DOC, sector agencies | Cyberspace Administration of China (CAC), Ministry of Science and Technology |
| Maximum penalties | €35M or 7% of global annual turnover (whichever is higher) | Varies by sector; FTC can impose penalties under existing consumer protection authority | Service suspension, fines, and criminal liability for serious violations |
| Transparency requirements | Mandatory for high-risk systems; GPAI models must publish training data summaries | Voluntary frameworks (NIST AI RMF); sector-specific disclosure rules | Algorithm filing with CAC; user-facing disclosure of AI-generated content |
| Foundation model rules | GPAI transparency obligations; systemic-risk models face red-team testing and incident reporting | Dual-use foundation model developers must report safety tests to DOC | Generative AI providers must pass security assessment before public release |
| Extraterritorial reach | Yes, applies to non-EU providers serving the EU market | Limited; binds federal procurement and export controls | Applies to services accessible by PRC users |
| Effective date | Phased: Feb 2025 (prohibitions), Aug 2025 (high-risk obligations), Aug 2027 (embedded AI) | EO signed Oct 2023; ongoing agency rulemaking through 2025-2026 | Algorithm regulation Mar 2022; deep synthesis Jan 2023; generative AI Aug 2023 |

Cost Analysis

EU AI Act compliance. The European Commission's impact assessment estimated that high-risk AI system providers would face one-time compliance costs of €6,000 to €7,000 for smaller systems and up to €320,000 for complex systems requiring third-party conformity assessments (European Commission, 2024). PwC's 2025 survey of 200 European enterprises found average annual compliance spending of €1.8 million per company for firms with more than five high-risk AI applications, covering documentation, testing, monitoring, and legal review.

U.S. compliance costs. Without a single horizontal statute, costs are fragmented. Companies complying with NIST AI Risk Management Framework guidance, sector-specific FDA premarket requirements for AI-enabled medical devices, and federal procurement AI standards reported average governance spending of $2.1 million annually (Stanford HAI, 2025). Financial services firms subject to SEC and CFPB algorithmic fairness expectations face the highest costs, averaging $3.5 million per year according to Deloitte (2025).

China compliance costs. Compliance costs are structurally lower in absolute terms but operationally intensive. Algorithm filing and security assessments cost approximately ¥200,000 to ¥500,000 ($28,000 to $70,000) per service, but the process requires extensive data localization infrastructure and content moderation systems. ByteDance disclosed spending over $400 million on content moderation and AI compliance in 2024, though this includes both regulatory and platform-policy costs (Reuters, 2025).

Cross-jurisdictional multiplier. For a company deploying AI services in all three jurisdictions, compliance costs do not simply add up; they multiply. Differing documentation formats, testing standards, and filing requirements create duplicative workflows. McKinsey (2025) estimated that multinational AI compliance costs are 2.3 to 2.8 times higher than single-jurisdiction compliance for companies operating in the EU, U.S., and China simultaneously.
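The multiplier effect is easy to quantify. The sketch below applies the McKinsey (2025) range of 2.3x to 2.8x cited above to the Stanford HAI average governance spend of $2.4 million; the choice of baseline is an illustrative assumption, not a benchmark for any particular company.

```python
def multi_jurisdiction_cost(single_jurisdiction_cost: float,
                            multiplier_low: float = 2.3,
                            multiplier_high: float = 2.8) -> tuple[float, float]:
    """Estimate the (low, high) annual compliance cost for a company
    operating in the EU, U.S., and China simultaneously, using the
    McKinsey (2025) cross-jurisdictional multiplier range."""
    return (single_jurisdiction_cost * multiplier_low,
            single_jurisdiction_cost * multiplier_high)

# Baseline: Stanford HAI (2025) average annual governance spend.
low, high = multi_jurisdiction_cost(2_400_000)
print(f"${low:,.0f} - ${high:,.0f}")  # $5,520,000 - $6,720,000
```

In other words, a firm spending $2.4 million on single-jurisdiction compliance should budget roughly $5.5 million to $6.7 million for simultaneous EU, U.S., and China coverage.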

Use Cases and Best Fit

EU-headquartered sustainability platforms. Companies like SAP and Siemens that embed AI in enterprise sustainability tools must comply with high-risk classification rules where AI drives decisions affecting employment, creditworthiness, or critical infrastructure. The EU AI Act's clear taxonomy provides regulatory certainty but demands rigorous documentation.

U.S. climate-tech startups. American AI startups building climate risk models or energy optimization tools face lighter federal requirements today but must prepare for state-level legislation (Colorado's SB 205 on algorithmic discrimination takes effect in 2026) and potential federal AI legislation. The NIST AI RMF provides a voluntary but increasingly expected compliance baseline.

Global AI service providers. Companies such as Google DeepMind and Microsoft Azure AI that serve customers worldwide must build compliance architectures capable of satisfying all three frameworks simultaneously. Google's 2025 AI Responsibility Report detailed a unified internal governance framework mapped to EU, U.S., and Chinese requirements, with regional compliance teams adapting outputs for local filing obligations (Google, 2025).

Chinese generative AI providers. Baidu (Ernie Bot), Alibaba (Tongyi Qianwen), and other domestic providers must complete CAC security assessments and maintain real-time content filtering. These requirements shape product design choices, including built-in safety classifiers and output watermarking, that may offer competitive advantages when expanding into regulated markets abroad.

Decision Framework

  1. Map your AI inventory. Catalog every AI system by function, data inputs, and deployment geography. Identify which systems qualify as high-risk under the EU AI Act, require algorithm filing in China, or fall under U.S. sector-specific oversight.

  2. Assess jurisdictional exposure. Determine where AI outputs reach end users. If an AI system serves EU residents, the EU AI Act applies regardless of company headquarters. If it processes PRC user data, Chinese regulations apply.

  3. Adopt the strictest baseline. Companies operating across all three jurisdictions often find it most efficient to build to EU AI Act standards as the most prescriptive framework, then layer on China-specific filing and content moderation requirements and U.S. sector-specific disclosures.

  4. Invest in documentation infrastructure. All three frameworks require extensive documentation: technical specifications, training data provenance, risk assessments, and monitoring logs. Centralizing this in a governance platform reduces duplication.

  5. Establish human oversight mechanisms. The EU AI Act requires meaningful human oversight for high-risk systems. China mandates human review of AI-generated content in certain categories. Building human-in-the-loop processes that satisfy both regimes avoids parallel workflows.

  6. Monitor regulatory evolution. The U.S. federal AI legislation landscape is shifting rapidly. The EU AI Act's delegated acts on high-risk system criteria are expected through 2027. China continues to issue supplementary notices. Assign a dedicated regulatory monitoring function or engage specialized counsel.

Key Players

Established Leaders

  • European Commission AI Office — Central coordination body for EU AI Act implementation, overseeing GPAI model compliance and cross-border enforcement.
  • NIST (U.S. National Institute of Standards and Technology) — Published the AI Risk Management Framework (AI RMF 1.0) and leads U.S. AI safety standards development.
  • Cyberspace Administration of China (CAC) — Primary enforcement body for China's algorithm recommendation, deep synthesis, and generative AI regulations.
  • OECD AI Policy Observatory — Tracks AI governance developments across 46 countries and maintains the OECD AI Principles adopted by over 50 nations.

Emerging Startups

  • Holistic AI — London-based platform providing automated AI risk assessments, bias auditing, and EU AI Act compliance mapping for enterprises.
  • Credo AI — U.S. startup offering AI governance software that maps AI systems to regulatory requirements across jurisdictions.
  • TrailMark — Compliance automation platform specializing in AI documentation, conformity assessment preparation, and regulatory change tracking.

Key Investors/Funders

  • Horizon Europe — EU research funding program allocating over €1 billion to trustworthy AI research and governance tooling through 2027.
  • National Science Foundation (U.S.) — Funds AI safety, fairness, and accountability research through the National AI Research Institutes program.
  • SoftBank Vision Fund — Invested in AI governance startups including Credo AI, signaling institutional appetite for compliance infrastructure.

FAQ

Which framework is the most restrictive for AI developers? The EU AI Act is the most prescriptive, with binding obligations across a defined risk taxonomy, mandatory conformity assessments for high-risk systems, and the highest financial penalties (up to 7 percent of global turnover). China's regulations are operationally demanding due to pre-launch security assessments and content moderation requirements, but penalties are less clearly quantified. The U.S. currently relies on voluntary frameworks and sector-specific rules, making it the least restrictive at the federal level, though state legislation is closing this gap.

Can a company comply with all three frameworks using a single governance process? In principle, yes, and this is the approach recommended by McKinsey (2025) and adopted by firms like Google and Microsoft. The practical strategy is to build to EU AI Act standards as the highest common denominator, then add China-specific algorithm filing, content moderation, and data localization layers, and U.S. sector-specific documentation. However, differences in risk classification definitions, transparency requirements, and enforcement timelines mean some jurisdiction-specific workflows remain unavoidable.

How does the EU AI Act affect non-EU companies? Any company that places an AI system on the EU market or whose AI system's output is used in the EU falls within scope. This means a U.S. SaaS company whose climate analytics tool is used by a European client must comply with applicable EU AI Act provisions. Non-compliance can result in market access restrictions and fines enforced through national competent authorities.

What happens if U.S. federal AI legislation passes? Multiple bills were introduced in the 2025-2026 Congressional session, including proposals for a federal AI licensing regime and algorithmic accountability requirements. If enacted, federal legislation would likely preempt some state laws and create a more unified U.S. framework. Until then, companies must navigate a patchwork of state laws (Colorado, California, Illinois) alongside federal agency guidance.

How do these frameworks address AI in sustainability and climate applications? None of the three frameworks create sustainability-specific AI rules, but climate and environmental AI tools can fall under broader categories. EU AI Act Annex III classifies AI systems used in critical infrastructure management (including energy grids) and environmental monitoring as potentially high-risk. China's regulations apply to any AI service with public-facing outputs regardless of domain. The U.S. DOE has issued separate guidance on AI for energy systems that complements the NIST AI RMF.

Sources

  • OECD. (2025). OECD AI Policy Observatory: Global AI Governance Tracker 2025. Organisation for Economic Co-operation and Development.
  • Stanford HAI. (2025). AI Index Report 2025: Measuring Trends in AI Governance and Compliance Costs. Stanford University Human-Centered AI Institute.
  • European Commission. (2024). EU AI Act Impact Assessment: Compliance Cost Estimates for High-Risk AI Systems. European Commission.
  • McKinsey & Company. (2025). Navigating the Global AI Regulatory Landscape: Cross-Jurisdictional Compliance Strategies. McKinsey Digital.
  • Deloitte. (2025). AI Governance in Financial Services: Compliance Costs and Organizational Readiness Survey. Deloitte Center for Regulatory Strategy.
  • Reuters. (2025). ByteDance AI Compliance and Content Moderation Spending Disclosures. Reuters.
  • Google. (2025). AI Responsibility Report 2025: Unified Governance Framework for Multi-Jurisdictional Compliance. Google DeepMind.
  • PwC. (2025). EU AI Act Readiness Survey: Enterprise Compliance Spending and Implementation Progress. PwC Europe.
