EU AI Act

“The EU AI Act is doing to artificial intelligence what GDPR did to data: setting global baseline standards whether or not other regulators follow.” The EU AI Act is the world’s first comprehensive binding legal framework for artificial intelligence, establishing a tiered risk classification system for AI applications and imposing corresponding obligations on developers and deployers operating within or targeting the European market.

Executive Summary

The EU AI Act entered into force on August 1, 2024, following a three-year legislative process that had to incorporate generative AI after ChatGPT’s release fundamentally changed the technology landscape mid-negotiation. The Act establishes four risk tiers for AI applications: unacceptable risk (banned), high risk (regulated), limited risk (transparency obligations), and minimal risk (unregulated). It also adds a separate track for GPAI (General Purpose AI) models, covering foundation models. Its extraterritorial reach applies to any AI system deployed in the EU regardless of developer location, effectively making it the de facto global compliance floor. Fines of up to 7% of global annual revenue for prohibited applications give the Act substantial enforcement weight.

The Strategic Mechanism

  • Prohibited applications (effective February 2025): Real-time biometric surveillance in public spaces (with national security exceptions), social scoring by public authorities, manipulation of vulnerable individuals, and AI-based profiling for law enforcement based solely on personal characteristics.
  • High-risk applications (phased in from 2025 to 2027): AI systems in critical infrastructure, education, employment, essential services, law enforcement, border control, and administration of justice. These systems require conformity assessments, human oversight, technical documentation, and registration in an EU database.
  • GPAI Model obligations (effective August 2025): Foundation model providers must publish technical documentation, comply with EU copyright law, and implement adversarial testing. Models trained above 10^25 FLOPs must conduct systemic risk assessments and report serious incidents to the European AI Office.
  • European AI Office: New EU body established within the European Commission to supervise GPAI model obligations, coordinate national enforcement, and issue guidance on implementation.
  • Brussels Effect mechanism: By requiring compliance as a condition of EU market access, the Act exports its standards globally. US and Chinese developers cannot maintain separate “EU versions” of frontier models at acceptable cost, driving global compliance toward EU standards.
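The tiered mechanism above can be sketched as a simple classification routine. This is a toy illustration, not a legal tool: the category labels, set contents, and function names below are hypothetical simplifications, and the Act's Annexes define the actual scope of each tier in far more detail.

```python
from enum import Enum

class RiskTier(Enum):
    PROHIBITED = "prohibited"        # banned outright
    HIGH_RISK = "high-risk"          # conformity assessment, oversight, registration
    LIMITED_RISK = "limited-risk"    # transparency obligations
    MINIMAL_RISK = "minimal-risk"    # unregulated

# Simplified, illustrative category sets (the Act's Annexes are the real source).
PROHIBITED_USES = {"social-scoring", "realtime-public-biometrics",
                   "vulnerable-manipulation"}
HIGH_RISK_DOMAINS = {"critical-infrastructure", "education", "employment",
                     "essential-services", "law-enforcement",
                     "border-control", "administration-of-justice"}
TRANSPARENCY_USES = {"chatbot", "deepfake-generation"}

# GPAI systemic-risk threshold: cumulative training compute above 10^25 FLOPs.
SYSTEMIC_RISK_FLOPS = 1e25

def classify_system(use_case: str) -> RiskTier:
    """Map a simplified use-case label onto the Act's four risk tiers."""
    if use_case in PROHIBITED_USES:
        return RiskTier.PROHIBITED
    if use_case in HIGH_RISK_DOMAINS:
        return RiskTier.HIGH_RISK
    if use_case in TRANSPARENCY_USES:
        return RiskTier.LIMITED_RISK
    return RiskTier.MINIMAL_RISK

def gpai_has_systemic_risk(training_flops: float) -> bool:
    """GPAI models above the compute threshold carry extra obligations."""
    return training_flops > SYSTEMIC_RISK_FLOPS

print(classify_system("employment").value)   # high-risk
print(gpai_has_systemic_risk(5e25))          # True
```

The key design point the Act encodes is that the separate GPAI track is orthogonal to the use-case tiers: a foundation model's obligations follow from its training compute, while a deployed system's obligations follow from its application domain.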

Market & Policy Impact

  • European Commission analysis estimated the AI Act would impose compliance costs of EUR 6,000-EUR 7,400 per system for high-risk AI deployers: significant for SMEs, but manageable for major tech firms relative to the value of EU market access.
  • OpenAI, Google, and Meta each engaged in extensive lobbying during 2023 negotiations, successfully moderating some GPAI obligations but failing to prevent mandatory systemic risk assessments for the most capable models.
  • China’s state AI developers (Baidu, Alibaba, Huawei) face a structural compliance challenge: EU requirements for transparency, human oversight, and freedom from political content controls are architecturally incompatible with Chinese AI regulatory requirements for political content filtering.
  • The AI Liability Directive, a parallel EU proposal, would create civil liability mechanisms for AI harm, complementing the Act’s administrative enforcement with private litigation rights that do not exist under current law.
  • Brazil, Canada, India, and Australia have all cited the EU AI Act as a reference framework for national AI legislation, confirming the Brussels Effect dynamic.

Modern Case Study: GPAI Provisions and the Negotiation Fight, 2023-2024

The original 2021 EU AI Act proposal did not include foundation model provisions. ChatGPT’s November 2022 release forced the European Parliament to add GPAI Model obligations in a late-stage amendment process that created intense transatlantic and industry friction. Anthropic, OpenAI, and Google DeepMind argued that blanket GPAI requirements would stifle research and create impractical compliance burdens. France’s President Macron and Germany’s Chancellor Scholz initially backed a lighter-touch approach, concerned about European AI competitiveness. The final compromise, which distinguishes between general GPAI models and those above the 10^25 FLOPs “systemic risk” threshold, represented a significant political concession to industry. The negotiations exposed a fundamental tension in AI governance globally: how to regulate capability that outpaces legislative processes. The GPAI framework, imperfect as it is, represents the most advanced attempt by any jurisdiction to govern foundation model development through binding law.