The Historic Vote That Changed AI Governance
On March 13, 2024, the European Parliament made history by adopting the EU AI Act with an overwhelming majority of 523 votes in favor, 46 against, and 49 abstentions. According to the EU Parliament’s press release, this marked the creation of “the world’s first comprehensive legal framework for artificial intelligence,” establishing a regulatory model that positioned Europe as the global leader in AI governance.
The vote represented the culmination of nearly three years of legislative work, beginning with the European Commission’s original proposal in April 2021. By mid-March 2024, the Act had become reality, setting a precedent that would influence AI regulation discussions worldwide.
The Risk-Based Framework
At the heart of the EU AI Act was a tiered, risk-based approach that categorized AI systems into four distinct levels: minimal, limited, high, and unacceptable risk. This structure aimed to balance innovation with protection of fundamental rights.
The legislation drew particularly clear lines around unacceptable-risk AI applications. According to the adopted text, the Act banned social scoring by governments and real-time remote biometric identification in publicly accessible spaces, though exceptions were carved out for law enforcement in specific circumstances such as preventing terrorist attacks or locating missing children.
For high-risk AI systems—including those used in critical infrastructure, education, employment, and law enforcement—the Act imposed strict requirements. These systems faced mandatory risk assessments, transparency obligations, human oversight requirements, and detailed documentation standards before deployment.
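The four-tier structure can be pictured as a simple classifier. The tier names come from the Act itself; the example use-case mapping below is an illustrative assumption for the sketch, not the Act's actual annex-based legal tests:

```python
from enum import Enum

class RiskTier(Enum):
    MINIMAL = "minimal"
    LIMITED = "limited"
    HIGH = "high"
    UNACCEPTABLE = "unacceptable"

# Illustrative mapping only: the Act assigns categories through
# detailed annexes and legal criteria, not keyword lookup.
EXAMPLE_USE_CASES = {
    "spam filtering": RiskTier.MINIMAL,
    "customer service chatbot": RiskTier.LIMITED,
    "CV screening for hiring": RiskTier.HIGH,
    "government social scoring": RiskTier.UNACCEPTABLE,
}

def classify(use_case: str) -> RiskTier:
    """Return the risk tier for a known example use case."""
    return EXAMPLE_USE_CASES[use_case]

print(classify("CV screening for hiring").value)  # high
```

The point of the tiering is that obligations scale with the tier: minimal-risk systems face essentially none, while high-risk systems carry the assessment and oversight duties described above.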
Foundation Models Under the Microscope
The Act represented one of the first major regulatory frameworks to specifically address foundation models and general-purpose AI systems, a category that had gained prominence with the rise of large language models in 2022 and 2023. According to the EU Parliament’s documentation, providers of these systems faced distinct transparency requirements, including publication of summaries of the content used for training and compliance with EU copyright law.
General-purpose AI systems classified as presenting systemic risks—those trained with computing power exceeding 10^25 floating-point operations—faced additional obligations, including model evaluations, systemic risk assessments, and incident reporting requirements.
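The compute threshold reduces to a direct numeric check. A minimal sketch (the function name is mine; the 10^25 floating-point-operations figure is the one in the adopted text, and in practice the classification also involves regulatory designation, not just arithmetic):

```python
# Threshold for presumed systemic risk, from the adopted text.
SYSTEMIC_RISK_FLOP_THRESHOLD = 1e25

def presumed_systemic_risk(training_flops: float) -> bool:
    """True if cumulative training compute exceeds 10^25 FLOPs."""
    return training_flops > SYSTEMIC_RISK_FLOP_THRESHOLD

print(presumed_systemic_risk(2e25))  # True
print(presumed_systemic_risk(1e23))  # False
```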
Enforcement Teeth
The EU AI Act backed its requirements with substantial penalties. Organizations violating the Act’s provisions faced fines reaching up to €35 million or 7% of global annual turnover, whichever proved higher. This enforcement mechanism echoed the penalty structure of the General Data Protection Regulation (GDPR), which had demonstrated Europe’s willingness to levy significant fines against major technology companies.
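The fine ceiling follows the same "whichever is higher" logic as the GDPR's. A sketch of the arithmetic for the top penalty band (illustrative only; the Act defines several bands with lower caps for other categories of violation):

```python
def max_fine_eur(global_annual_turnover_eur: float) -> float:
    """Top-band ceiling: the higher of EUR 35M or 7% of turnover."""
    return max(35_000_000.0, 0.07 * global_annual_turnover_eur)

# For a company with EUR 2 billion turnover, 7% dominates the floor.
print(max_fine_eur(2_000_000_000))  # 140000000.0

# For a smaller firm, the fixed EUR 35M floor applies instead.
print(max_fine_eur(100_000_000))  # 35000000.0
```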
The legislation established a phased implementation timeline, with different provisions taking effect at different intervals. Bans on prohibited AI practices were set to apply earliest, within six months of the Act entering into force; most other obligations would follow within two years, with certain high-risk requirements on a still longer timeline.
Global Implications and Industry Response
In the week following the vote, the legislation dominated discussions in the AI policy community. The Act’s adoption came at a moment when AI governance debates were intensifying globally—the United States had issued its Executive Order on AI in October 2023, and countries including the United Kingdom, China, and Japan were developing their own approaches to AI regulation.
According to statements captured in EU Parliament materials during the March 13-20 period, policymakers emphasized the Act’s role in establishing “clear rules” for AI development while maintaining European competitiveness. The legislation’s extraterritorial reach meant that any AI system placed on the EU market—regardless of where its provider was based—would need to comply with the Act’s requirements.
For technology companies, the Act created new compliance obligations that would require significant operational adjustments. Organizations developing or deploying AI systems in Europe faced the prospect of adapting their practices to meet the Act’s transparency, documentation, and risk management requirements.
A Regulatory Template
By March 20, 2024, the EU AI Act stood as the most comprehensive attempt to regulate artificial intelligence through legislation. Its risk-based framework, specific treatment of foundation models, and substantial enforcement mechanisms provided a template that other jurisdictions would inevitably study as they developed their own approaches.
The Act represented a distinctly European approach to technology governance—one that prioritized fundamental rights and regulatory oversight while attempting to preserve space for innovation. Whether this model would prove effective remained to be seen in its implementation phase, but the March 13 vote had already secured Europe’s position as the first mover in comprehensive AI regulation, establishing principles and mechanisms that would shape the global conversation around AI governance for years to come.