European Union lawmakers gave final approval to the 27-nation bloc's artificial intelligence law, setting the stage for the world-leading rules to take effect later this year.
Five years after it was first proposed, lawmakers in the European Parliament voted in favor of the Artificial Intelligence Act. The legislation is expected to serve as a guiding framework for other governments worldwide as they grapple with regulating the rapidly advancing technology.
Before the vote, Dragos Tudorache, a Romanian lawmaker who played a key role in the European Parliament's negotiations on the draft law, said the AI Act had steered the future of AI in a human-centric direction. He emphasized that humans remain in control of the technology, which in turn helps drive discoveries, economic growth, societal progress, and the unlocking of human potential.
Major technology firms typically advocate for AI regulation while simultaneously lobbying to shape rules in their favor. Last year, OpenAI CEO Sam Altman stirred controversy by implying that the ChatGPT maker might withdraw from Europe if it couldn’t adhere to the AI Act—although he later clarified that there were no actual plans to exit.
More about the AI Act
Similar to many EU regulations, the AI Act operates as consumer safety legislation, taking a "risk-based approach" to products and services that use artificial intelligence.
Under that approach, low-risk AI systems face only voluntary requirements, while high-risk applications, such as medical devices, must meet stringent obligations, including using high-quality data and providing clear information to users.
Prohibited uses include social scoring systems, certain predictive policing methods, and emotion recognition systems in educational and professional settings. Additionally, police use of AI-powered remote biometric identification systems is restricted to serious crimes like kidnapping or terrorism.
Regulating Generative AI
The AI Act expands its scope to include generative AI models like OpenAI’s ChatGPT, requiring developers to disclose training data sources and comply with EU copyright law.
Deepfake content must be labeled as artificially manipulated, and the largest, most powerful models, such as OpenAI's GPT-4 and Google's Gemini, face heightened scrutiny because of the systemic risks and biases they could pose. Companies providing these systems must assess and mitigate risks, report serious incidents, implement cybersecurity measures, and disclose their energy consumption. The EU aims to prevent accidents, cyberattacks, and harmful biases stemming from powerful AI systems, emphasizing transparency, accountability, and responsible deployment.
How do Europe's regulations impact the global landscape?
Brussels initiated AI regulations in 2019, setting a global trend for increased scrutiny of emerging industries.
In the U.S., President Biden signed an executive order on AI, with plans for legislation and global agreements. Additionally, at least seven U.S. states are developing their own AI laws.
Chinese President Xi Jinping proposed the Global AI Governance Initiative, and interim measures have been introduced for managing generative AI within China.
Other countries, including Brazil and Japan, along with organizations such as the United Nations and the G7, are also drawing up AI regulations and ethical frameworks.
Insights into the AI Act: Timeline, Regulations, and Future Directions
The AI Act is anticipated to become law by May or June, pending final formalities and approval from EU member countries.
Implementation will occur in stages, with prohibited AI systems banned six months after the law’s enactment, and regulations for general-purpose AI, like chatbots, taking effect a year later. Enforcement of the complete regulatory framework, including measures for high-risk systems, is slated for mid-2026.
Each EU country will set up its own AI watchdog to handle enforcement, while Brussels will create an AI Office to supervise general-purpose AI systems. Violations may incur fines of up to 35 million euros or 7% of a company's global revenue.
Italian lawmaker Brando Benifei, who helped draft the law, suggests that more AI-related legislation could follow after this summer's European Parliament elections, potentially addressing areas such as AI in the workplace.