The European Parliament gave the green light Wednesday to the bloc's flagship artificial intelligence law, widely dubbed the world's first of its kind, paving the way for its final adoption by EU member states.
Lawmakers of the 27-nation bloc voted overwhelmingly in favor of the Artificial Intelligence Act, five years after regulations were first proposed. The AI Act is expected to act as a global signpost for other governments grappling with how to regulate the fast-developing technology.
Following negotiations with member states that concluded in December 2023, MEPs adopted the AI Act with 523 votes in favor, 46 against and 49 abstentions.
The European Parliament said in a statement that the regulation "aims to protect fundamental rights, democracy, the rule of law and environmental sustainability from high-risk AI while boosting innovation and establishing Europe as a leader in the field."
It also establishes obligations for AI based on possible risks, including "limits on the use of biometric identification systems by law enforcement," as well as "bans on social scoring and AI used to manipulate or exploit user vulnerabilities."
General-purpose AI (GPAI) systems are required to meet certain transparency criteria, and "artificial or manipulated images, audio or video content ('deepfakes') need to be clearly labeled as such," the statement noted.
"I welcome the overwhelming support from the European Parliament for the EU AI Act, the world's first comprehensive, binding framework for trustworthy AI. Europe is now a global standard-setter in trustworthy AI," EU industry chief Thierry Breton said.
EU countries are set to give their formal nod to the deal in May, with the legislation expected to enter into force shortly afterwards and apply fully in 2026, although some of the provisions will kick in earlier.
Brussels may have set the benchmark for the rest of the world, said Patrick Van Eecke, a partner at law firm Cooley.
"The EU now has the world's first hard-coded AI law. Other countries and regions are likely to use the AI Act as a blueprint, just as they did with the GDPR," he said, referring to the EU privacy regulation.
However, he said the downside for companies is considerable red tape.
The European Parliament and EU countries clinched a preliminary deal in December after nearly 40 hours of negotiations on issues such as governments' use of biometric surveillance and how to regulate the foundation models that power generative AI systems such as ChatGPT.
Depending on the type of violation, companies risk fines ranging from 7.5 million euros ($8.2 million) or 1.5% of turnover to 35 million euros or 7% of global turnover.
The AI Act is expected to officially become law by May or June after a few final formalities, including a blessing from EU member countries. Provisions will start taking effect in stages, with countries required to ban prohibited AI systems six months after the rules enter the lawbooks.
Rules for general-purpose AI systems like chatbots will start applying a year after the law takes effect. By mid-2026, the complete set of regulations, including requirements for high-risk systems, will be in force.
When it comes to enforcement, each EU country will set up its own AI watchdog, where citizens can file a complaint if they think they've been the victim of a rule violation. Meanwhile, Brussels will create an AI Office tasked with enforcing and supervising the law for general-purpose AI systems.
This isn't Brussels' last word on AI rules, said Italian lawmaker Brando Benifei, co-leader of Parliament's work on the law. He said that more AI-related legislation could be passed after the summer elections, including in areas like AI in the workplace, which the new law partly covers.