“AI governance is not primarily a technical challenge; it is a power question about who sets the rules for a technology that will reshape every sector of the global economy.” AI governance encompasses the laws, standards, norms, and institutional arrangements that guide the development, deployment, and oversight of artificial intelligence systems, operating across national, regional, and international levels simultaneously.
Executive Summary
AI governance has moved from an academic policy discussion to active legislative and regulatory implementation at a speed rarely seen in the history of technology governance. The EU AI Act became law in August 2024. The US issued its Executive Order on AI in October 2023 and the National Security Memorandum on AI in October 2024. China enacted its Interim Measures for Generative AI Services in August 2023 and its AI Safety Governance Framework in 2024. The resulting landscape is a tripartite governance architecture (EU rule-of-law, US market-led, China state-controlled) whose incompatibility creates compliance burdens for multinational firms and fragmentation risks for the global AI ecosystem. Governance frameworks are now a primary arena for US-China strategic competition.
The Strategic Mechanism
- Risk-based regulation: The EU AI Act approach categorizes AI systems by risk level (unacceptable, high, limited, minimal) and imposes proportionate obligations. This framework is being adopted as a template by Canada, Brazil, and Southeast Asian regulators.
- Voluntary commitments and standards: The US approach has historically relied on voluntary industry commitments (the 2023 White House Voluntary AI Commitments), NIST standards development, and sector-specific agency regulation rather than comprehensive legislation.
- State-directed governance: China’s approach uses AI governance to simultaneously advance AI capability and ensure Communist Party oversight of AI outputs, with the CAC serving as the primary regulatory body for generative AI content.
- International standards bodies: The ISO/IEC JTC1 SC42 AI standards committee, the OECD AI Policy Observatory, and the Global Partnership on AI (GPAI) represent multilateral governance attempts that operate in parallel with national frameworks.
- Compute governance: Using the amount of computation required to train an AI model as a measurable proxy for its capability, enabling governance interventions (notification requirements, safety evaluations) triggered at defined compute thresholds rather than requiring regulators to define “capability” in advance.
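The compute-governance mechanism above can be made concrete with a small sketch. The two thresholds are real figures (the EU AI Act presumes systemic risk for general-purpose models trained with more than 10^25 FLOP; the 2023 US Executive Order set a 10^26-operation reporting trigger), but the function and its return format are hypothetical illustrations, not any regulator’s actual tooling.

```python
# Illustrative sketch of threshold-based compute governance.
# Threshold values reflect the EU AI Act and the 2023 US Executive Order;
# the function itself is a hypothetical example.

EU_SYSTEMIC_RISK_FLOP = 1e25   # EU AI Act: systemic-risk presumption for GPAI models
US_EO_REPORTING_FLOP = 1e26    # 2023 US Executive Order: federal reporting trigger

def governance_obligations(training_flop: float) -> list[str]:
    """Return the obligations triggered by a training run's total compute."""
    obligations = []
    if training_flop >= EU_SYSTEMIC_RISK_FLOP:
        obligations.append("EU: presumed systemic-risk GPAI model")
    if training_flop >= US_EO_REPORTING_FLOP:
        obligations.append("US: report training run to federal government")
    return obligations

# A run at 3e25 FLOP crosses the EU threshold but not the US one.
print(governance_obligations(3e25))
```

The design point is that compute is observable before deployment, so obligations attach to a measurable input rather than to contested judgments about model behavior.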
Market & Policy Impact
- The EU AI Act imposes fines of up to 7% of global annual turnover for violations involving prohibited AI applications, creating significant compliance costs for US and Chinese firms operating in European markets.
- The US National Institute of Standards and Technology (NIST) AI Risk Management Framework (2023) has been adopted as a voluntary governance standard by hundreds of companies and referenced in regulatory guidance across sectors including healthcare (FDA) and finance (SEC).
- India’s approach, announced in a 2024 advisory, requires government approval before deploying AI tools that could be “unreliable,” representing a regulatory model distinct from both the EU and US frameworks and relevant to other emerging-market democracies.
- Singapore’s Model AI Governance Framework, first published in 2019 and updated in 2024, has become a reference document for Southeast Asian governments designing AI regulations, illustrating how small states can achieve governance influence disproportionate to their economic size.
- The UN AI Advisory Body’s 2024 report “Governing AI for Humanity” called for a new international AI governance body, but faced resistance from the US and China, both of whom prefer to shape governance through bilateral and standards-body channels rather than UN mechanisms.
Modern Case Study: The EU AI Act from Draft to Implementation, 2021-2025
The EU AI Act followed a trajectory that illustrates the speed at which AI governance has institutionalized. First proposed by the European Commission in April 2021, the draft did not include generative AI provisions. The ChatGPT breakthrough in late 2022 forced a legislative revision adding general-purpose AI (GPAI) model obligations (Articles 51-56), significantly expanding the Act’s scope and prompting intense lobbying from US tech firms. The Act entered into force in August 2024 with a phased implementation timeline: prohibited applications banned in February 2025, high-risk system obligations taking effect in 2026-2027. The Act’s extraterritorial reach (it applies to any AI system deployed in the EU, regardless of where it was developed) means that every major US and Chinese AI developer must comply with European rules. This “Brussels Effect” in AI governance has made the EU the de facto setter of global baseline AI compliance standards, even as the US and China pursue different regulatory philosophies.