Retrospective: EU AI Act Adopted – A Global First in Comprehensive AI Regulation

On March 13, 2024, the European Parliament adopted the EU AI Act, establishing the world's first comprehensive legal framework for artificial intelligence.

The Dawn of a New Regulatory Era: EU Parliament Adopts AI Act

On March 13, 2024, the European Parliament formally adopted the EU Artificial Intelligence Act. The landmark vote established the world’s first comprehensive legal framework for artificial intelligence, positioning the European Union as a frontrunner in global AI governance. The decision, capping a legislative journey that began with the European Commission’s proposal in April 2021, marked a pivotal moment for the development and deployment of AI systems worldwide.

According to the EU Parliament Press Release, the vote passed by a significant majority: 523 votes in favor of the regulation, 46 against, and 49 abstentions. This decisive outcome underscored a broad consensus among MEPs on the necessity of a unified and robust approach to AI oversight. Dragoş Tudorache (Renew, Romania), co-rapporteur for the AI Act, reflected on the achievement, stating that Europe had “championed global standards for a human-centric and trustworthy AI.” He emphasized the Act’s role in creating a “clear and predictable path for AI innovation, while protecting fundamental rights.” Co-rapporteur Brando Benifei (S&D, Italy) added that the Act would “be a benchmark for a fair and transparent AI worldwide.”

Core Pillars of the EU AI Act: A Risk-Based Framework

The adopted legislation introduced a pioneering risk-based approach to AI, categorizing systems into four levels: minimal, limited, high, and unacceptable risk. This tiered structure aimed to make regulatory burdens proportional to the potential harm posed by AI applications. The most stringent treatment was reserved for ‘unacceptable risk’ AI systems, which were subject to explicit prohibitions.

Key prohibitions under the Act included, among others, certain applications of social scoring by or on behalf of governments, as well as real-time remote biometric identification systems in publicly accessible spaces. While the latter generally faced a ban, the Act provided strictly defined exceptions for law enforcement purposes, subject to prior judicial authorization and strict necessity and proportionality requirements, according to the EU Parliament Press Release. AI systems deemed ‘high-risk,’ such as those used in critical infrastructure, medical devices, employment, or law enforcement, would face rigorous obligations regarding data quality, human oversight, transparency, cybersecurity, and conformity assessments before they could be placed on the market.

For ‘limited risk’ AI systems, such as chatbots, the Act stipulated transparency requirements, compelling developers to inform users that they were interacting with an AI. ‘Minimal risk’ AI applications faced the fewest obligations, encouraging innovation in areas deemed to pose little threat to fundamental rights or safety.
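
To make the tiered structure concrete, the following Python sketch models the four risk levels and pairs each with a simplified, non-exhaustive set of example obligations paraphrased from the descriptions above. The tier names follow the Act, but the `RiskTier` enum, the `OBLIGATIONS` mapping, and the obligation wording are illustrative assumptions, not a legal checklist.

```python
from enum import Enum

class RiskTier(Enum):
    """The four risk tiers defined by the EU AI Act (names per the Act)."""
    UNACCEPTABLE = "unacceptable"  # prohibited outright, with narrow exceptions
    HIGH = "high"                  # strict obligations before market placement
    LIMITED = "limited"            # transparency duties (e.g. chatbots)
    MINIMAL = "minimal"            # few or no additional obligations

# Simplified, non-exhaustive mapping of tiers to example obligations.
# Illustrative only -- not a statement of the Act's exact legal requirements.
OBLIGATIONS = {
    RiskTier.UNACCEPTABLE: [
        "prohibited (e.g. social scoring, most real-time remote biometric ID)",
    ],
    RiskTier.HIGH: [
        "data quality and governance",
        "human oversight",
        "transparency and documentation",
        "cybersecurity",
        "conformity assessment before market placement",
    ],
    RiskTier.LIMITED: ["inform users they are interacting with an AI system"],
    RiskTier.MINIMAL: ["no additional obligations beyond existing law"],
}

def obligations_for(tier: RiskTier) -> list[str]:
    """Return the illustrative obligation list for a given risk tier."""
    return OBLIGATIONS[tier]

if __name__ == "__main__":
    for tier in RiskTier:
        print(f"{tier.value}: {obligations_for(tier)}")
```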

Transparency for Foundation Models and Stiff Penalties

A significant addition during the legislative process was the inclusion of provisions for General-Purpose AI (GPAI) systems, particularly large foundation models. Providers of these powerful, versatile models became subject to dedicated obligations: they were mandated to assess and mitigate potential systemic risks, register their models in an EU database, and meet information and transparency requirements regarding the data used for training, as detailed in the EU Parliament Press Release and the EU AI Act Text.

To ensure compliance, the Act outlined substantial penalties for infringements. Non-compliance could result in fines of up to €35 million or 7% of a company’s global annual turnover, whichever was higher. This robust enforcement mechanism aimed to incentivize adherence to the new rules and safeguard citizens’ rights.
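
As a rough illustration of the headline cap, the short Python sketch below computes the “whichever is higher” ceiling of €35 million or 7% of global annual turnover for the most serious infringements. The function name and the example turnover figure are hypothetical, and lower caps apply to other categories of infringement under the Act.

```python
def max_fine_eur(global_annual_turnover_eur: float) -> float:
    """Headline ceiling for the most serious infringements:
    the higher of EUR 35 million or 7% of global annual turnover.
    Illustrative only; other infringement categories carry lower caps."""
    return max(35_000_000.0, 0.07 * global_annual_turnover_eur)

# Example: a hypothetical company with EUR 2 billion in global annual turnover
print(f"Maximum fine: EUR {max_fine_eur(2_000_000_000):,.0f}")  # EUR 140,000,000
```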

Implementation and Global Precedent

Following the European Parliament’s adoption, the Act still required formal endorsement by the Council of the EU. Once approved and published in the Official Journal of the EU, the legislation was expected to enter into force 20 days later, though its provisions would apply progressively. The EU Parliament Press Release indicated a phased implementation approach, with the full application of the Act expected approximately two years after its entry into force.
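
For readers tracking the timeline, the brief sketch below works out the two dates mentioned above from a hypothetical publication date: entry into force 20 days after publication in the Official Journal, and full application roughly two years after that. The publication date used here is an assumption for illustration, not the actual date, and individual provisions follow the Act’s phased schedule.

```python
from datetime import date, timedelta

def entry_into_force(publication_date: date) -> date:
    """Entry into force: 20 days after publication in the Official Journal."""
    return publication_date + timedelta(days=20)

def full_application(entry_date: date) -> date:
    """Approximate full application: two years after entry into force
    (individual provisions apply earlier or later under the phased schedule)."""
    return entry_date.replace(year=entry_date.year + 2)

# Hypothetical publication date, for illustration only
publication = date(2024, 7, 1)
eif = entry_into_force(publication)
print("Entry into force:", eif)                               # 2024-07-21
print("Full application (approx.):", full_application(eif))  # 2026-07-21
```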

In the immediate aftermath of the vote, the EU AI Act was widely recognized as setting a global precedent for AI governance. Experts and policymakers observed that its comprehensive nature and risk-based framework would likely influence regulatory discussions and legislative efforts in other jurisdictions around the world, including the United States, the UK, and countries in Asia. The adoption of the Act represented not just a regulatory achievement for the EU, but a foundational step in shaping the ethical and responsible development of artificial intelligence on an international scale.