“The rules of the road for the most powerful technology humanity has built — still being written while the cars are already moving.” An AI governance framework is the constellation of laws, regulations, technical standards, and international norms that govern how artificial intelligence systems are developed, deployed, audited, and held accountable.
Executive Summary
AI governance has become one of the defining geopolitical contests of the 2020s. Three distinct regulatory philosophies are competing for global influence: the EU’s risk-based rights-protective model (EU AI Act); the U.S. market-led innovation-first model (increasingly assertive under the Trump administration, which revoked Biden’s AI Executive Order in January 2025); and China’s state-centric model that mandates algorithmic compliance with “core socialist values” and prioritizes state access to AI systems. Which model becomes the global standard — through the “Brussels Effect” of EU regulatory gravity, U.S. market dominance, or Chinese influence in the Global South — will shape the architecture of the AI economy for decades.
The Strategic Mechanism
- EU AI Act (entered into force August 2024; obligations phase in through 2027): The world’s first comprehensive AI law, classifying AI systems into four risk tiers: unacceptable (prohibited), high-risk (regulated), limited risk, and minimal risk. High-risk AI (used in healthcare, education, employment, critical infrastructure, and law enforcement) requires conformity assessments, transparency obligations, human oversight, and registration in an EU database.
- U.S. approach: The Biden Executive Order on AI (October 2023) established safety evaluation requirements for frontier AI models, directed NIST to develop standards, and imposed reporting requirements on developers of the most compute-intensive models. Trump revoked the EO in January 2025, replacing it with an order, “Removing Barriers to American Leadership in Artificial Intelligence,” that frames deregulation as the path to American AI dominance. NIST’s AI Risk Management Framework remains influential as a voluntary standard.
- China’s model: A series of regulations (Algorithm Recommendation Regulations 2022, Deep Synthesis Regulations 2022, Generative AI Regulations 2023) require AI providers to register systems, conduct security assessments, and ensure content aligns with “core socialist values” — creating a governance system that enables state oversight while restricting foreign AI deployment.
- International standards competition: The ISO/IEC 42001 AI Management System standard, IEEE AI ethics standards, and ITU AI working groups are all arenas where the U.S., EU, China, and others compete to embed their governance philosophies into technical standards adopted globally.
- Frontier AI safety: The UK AI Safety Summit (Bletchley Park, November 2023) and its Seoul and Paris successors established an international dialogue on frontier AI risks. The Bletchley Declaration, signed by 28 countries and the European Union, including the U.S. and China, represents the broadest international AI safety consensus achieved to date.
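The EU AI Act’s risk-tier logic described in the first bullet above can be sketched as a simple triage function. This is an illustrative toy, not the Act’s actual test: the real high-risk categories (Annex III) and prohibited practices (Article 5) are far more granular and context-dependent, and the domain and practice lists below are assumptions drawn only from the examples in this section.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited outright"
    HIGH = "conformity assessment, human oversight, EU database registration"
    LIMITED = "transparency obligations (e.g. disclose that content is AI-generated)"
    MINIMAL = "no new obligations"

# Illustrative lists taken from this article's examples; the Act itself
# defines these categories with much more precision.
PROHIBITED_PRACTICES = {
    "social scoring",
    "real-time public biometric surveillance",
    "exploiting psychological vulnerabilities",
}
HIGH_RISK_DOMAINS = {
    "healthcare", "education", "employment",
    "critical infrastructure", "law enforcement",
}

def classify(use_case: str, domain: str) -> RiskTier:
    """Toy triage of an AI use case into an EU AI Act-style risk tier."""
    if use_case in PROHIBITED_PRACTICES:
        return RiskTier.UNACCEPTABLE
    if domain in HIGH_RISK_DOMAINS:
        return RiskTier.HIGH
    if use_case == "chatbot":
        return RiskTier.LIMITED
    return RiskTier.MINIMAL

print(classify("resume screening", "employment").name)  # HIGH
print(classify("social scoring", "government").name)    # UNACCEPTABLE
```

The point of the tiered structure is that obligations scale with potential harm: the same underlying model can land in different tiers depending on where and how it is deployed.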
Market & Policy Impact
- The EU AI Act’s prohibited AI category — including real-time biometric surveillance in public spaces, social scoring systems, and AI that exploits psychological vulnerabilities — directly conflicts with Chinese AI deployment norms, creating market incompatibility between EU and Chinese AI ecosystems.
- High-risk AI Act compliance costs are estimated at €6,000–€7,500 per conformity assessment for smaller AI providers — creating barriers that favor large incumbent players and may constrain European AI startup competitiveness.
- The Trump administration’s “AI Dominance” framing has explicitly positioned U.S. AI governance as a national competitiveness instrument, prioritizing speed and scale over precautionary regulation and creating transatlantic governance divergence that complicates allied AI supply chain cooperation.
- Export controls on advanced AI chips (H100, H200, Blackwell architecture) have been the most operationally impactful AI governance instrument, directly limiting Chinese frontier AI model development by constraining compute access.
- “Regulatory arbitrage” is emerging as a live concern: AI developers facing EU AI Act compliance costs may route high-risk deployments through non-EU jurisdictions, limiting the Act’s practical reach.
Modern Case Study: DeepSeek and the Governance Stress Test (January 2025)
The January 2025 release of DeepSeek R1, a Chinese-developed large language model claiming performance competitive with leading U.S. frontier models at a fraction of the compute cost, stress-tested every dimension of AI governance at once. For export control policy, it raised the question of whether chip restrictions were actually constraining Chinese AI development or merely incentivizing efficiency innovation. For the U.S., it undercut the narrative that American AI dominance was secure. For the EU AI Act, it posed a practical puzzle: would Chinese models deployed by European businesses fall under the Act’s compliance requirements, and how would compliance be verified for a model developed under an opaque Chinese regulatory regime? DeepSeek demonstrated that AI governance frameworks designed around current technology assumptions can be rapidly disrupted by innovation that crosses governance category lines.