EU takes a major step towards regulating AI
MADRID – The European Union took an important step Wednesday toward passing what would be one of the first major laws to regulate artificial intelligence, a potential model for policymakers around the world as they grapple with how to put guardrails on the rapidly developing technology.
The European Parliament, a main legislative branch of the EU, passed a draft law known as the AI Act, which would put new restrictions on what are seen as the technology’s riskiest uses.
It would severely curtail uses of facial recognition software, while requiring makers of AI systems like the ChatGPT chatbot to disclose more about the data used to create their programmes.
The vote is one step in a longer process. A final version of the law is not expected to be passed until later this year.
The European Union is further along than the United States and other large Western governments in regulating AI.
The 27-nation bloc has debated the topic for more than two years, and the issue took on new urgency after last year’s release of ChatGPT, which intensified concerns about the technology’s potential effects on employment and society.
Policymakers everywhere from Washington to Beijing are now racing to control an evolving technology that is alarming even some of its earliest creators.
In the United States, the White House has released policy ideas that include rules for testing AI systems before they are publicly available and protecting privacy rights. In China, draft rules unveiled in April would require makers of chatbots to adhere to the country’s strict censorship rules. Beijing is also taking more control over the ways makers of AI systems use data.
How effective any regulation of AI can be is unclear. In a sign of the technology’s new capabilities emerging seemingly faster than lawmakers are able to address them, earlier versions of the EU law did not give much attention to so-called generative AI systems like ChatGPT, which can produce text, images and video in response to prompts.
In the latest version of Europe’s Bill passed Wednesday, generative AI would face new transparency requirements.
That includes publishing summaries of copyrighted material used for training the system, a proposal supported by the publishing industry but opposed by tech developers as technically infeasible. Makers of generative AI systems would also have to put safeguards in place to prevent them from generating illegal content.
Ms Francine Bennett, acting director of the Ada Lovelace Institute, an organisation in London that has pushed for new AI laws, said the EU proposal was an “important landmark.”
“Fast-moving and rapidly repurposable technology is of course hard to regulate, when not even the companies building the technology are completely clear on how things will play out,” Ms Bennett said. “But it would definitely be worse for us all to continue operating with no adequate regulation at all.”
The EU’s Bill takes a “risk-based” approach to regulating AI, focusing on applications with the greatest potential for human harm.
This would include where AI systems are used to operate critical infrastructure like water or energy, in the legal system, and when determining access to public services and government benefits.
Makers of the technology will have to conduct risk assessments before putting the tech into everyday use, akin to the drug approval process.
A tech industry group, the Computer & Communications Industry Association, said the EU should avoid overly broad regulations that inhibit innovation.
“The EU is set to become a leader in regulating artificial intelligence, but whether it will lead on AI innovation still remains to be seen,” said Mr Boniface de Champris, the group’s Europe policy manager. “Europe’s new AI rules need to effectively address clearly-defined risks, while leaving enough flexibility for developers to deliver useful AI applications to the benefit of all Europeans.”
One major area of debate is the use of facial recognition. The European Parliament voted to ban uses of live facial recognition, but questions remain about whether exemptions should be allowed for national security and other law enforcement purposes.
Another provision would ban companies from scraping biometric data from social media to build out databases, a practice that drew scrutiny after it was used by the facial-recognition company Clearview AI.
Tech leaders have been trying to influence the debate.
Mr Sam Altman, the CEO of OpenAI, the maker of ChatGPT, has in recent months visited with at least 100 American lawmakers and other global policymakers in South America, Europe, Africa and Asia, including Ms Ursula von der Leyen, president of the European Commission.
Mr Altman has called for regulation of AI, but has also said the EU’s proposal may be prohibitively difficult to comply with.
After the vote Wednesday, a final version of the law will be negotiated between representatives of the three branches of the EU – the European Parliament, European Commission and the Council of the European Union. Officials said they hope to reach a final agreement by the end of the year. NYTIMES