European Union policymakers on Friday agreed a provisional deal on landmark rules governing the use of artificial intelligence, including governments’ use of AI in biometric surveillance and how to regulate AI systems such as ChatGPT.
EU member states and lawmakers clinched a deal on how to draft “historic” rules regulating artificial intelligence models such as ChatGPT, after 36 hours of negotiations.
Meeting in Brussels, negotiators nailed down curbs on how AI can be used in Europe, which they said would hurt neither innovation in the sector nor the prospects for future European AI champions.
“Historic! With the political deal on the AI Act sealed today, the EU becomes the first continent to set clear rules for the use of AI,” declared the EU’s internal market commissioner, Thierry Breton.
The EU will be able to monitor and sanction those who violate the law through a new body called the EU AI Office, which will be attached to the European Commission.
The office will have the power to impose fines of up to seven percent of a company’s global turnover or 35 million euros, whichever is higher.
The European Commission, the EU’s executive arm, first proposed the law in 2021 to regulate AI systems based on risk assessments of the software models.
The higher the risk to individuals’ rights or health, for example, the stricter the obligations placed on the system.
The law will still need to be formally approved by member states and the parliament, but Friday’s political agreement was seen as the last serious hurdle.
“The AI Act is a global first. A unique legal framework for the development of AI you can trust,” EU chief Ursula von der Leyen said in a social media post, welcoming the deal.
“And for the safety and fundamental rights of people and businesses. A commitment we took in our political guidelines – and we delivered. I welcome today’s political agreement.”
The agreement includes a two-tier approach, with transparency requirements for all general-purpose AI models and tougher requirements for the more powerful models.
Another sticking point had been over remote biometric surveillance — basically, facial identification through camera data in public places. Governments wanted exceptions for law enforcement and national security purposes.
While the agreement has a ban on real-time facial recognition, there will be a limited number of exemptions.
The “AI Act” has been rushed through the European Union’s legislative process this year after the chatbot ChatGPT, a mass-market gateway to generative AI, exploded onto the scene in late 2022.
Although ChatGPT’s ability to create articulate essays and poems was a dizzying display of AI’s rapid advances, critics worry about how the technology can be misused.
Generative AI software, which also includes Google’s chatbot Bard, can quickly produce text, images and audio from simple commands in everyday language.
Other examples of generative AI include Dall-E, Midjourney and Stable Diffusion, which can create images in nearly any style on demand.
Negotiators initially failed to reach an agreement after marathon talks that began on Wednesday ran for 22 hours and ended with only a deal to resume talks the next day.
Here are some reactions to the news from key people and experts:
Daniel Friedlaender, head of CCIA Europe (a non-profit trade association for the computer and communications industry): “Last night’s political deal marks the beginning of important and necessary technical work on crucial details of the AI Act, which are still missing. Regrettably, speed seems to have prevailed over quality, with potentially disastrous consequences for the European economy. The negative impact could be felt far beyond the AI sector alone.”
Dutch MEP Kim van Sparrentak, who worked closely on the draft AI rules: “Europe chooses its own path and does not follow the Chinese surveillance state.
After a huge battle with the EU countries, we have restricted the use of these types of systems. In a free and democratic society you should be able to walk on the street without the government constantly following you on the street, at festivals or in football stadiums.”

Daniel Leufer, senior policy analyst at Access Now, a non-profit group that defends the digital rights of people and communities at risk: “Whatever the victories may have been in these final negotiations, the fact remains that huge flaws will remain in this final text: loopholes for law enforcement, lack of protection in the migration context, opt-outs for developers and big gaps in the bans on the most dangerous AI systems.”

Daniel Castro, vice president of the Information Technology and Innovation Foundation (ITIF): “Given how rapidly AI is developing, EU lawmakers should have hit pause on any legislation until they better understand what exactly it is they are regulating. There is likely an equal, if not greater, risk of unintended consequences from poorly conceived legislation than there is from poorly conceived technology. And unfortunately, fixing technology is usually much easier than fixing bad laws. The EU should focus on winning the innovation race, not the regulation race. AI promises to open a new wave of digital progress in all sectors of the economy. But it is not operating without constraints. Existing laws and regulations apply, and it is still too soon to know exactly what new rules may be necessary. EU policymakers should re-read the tale of the tortoise and the hare. Acting quickly may give the illusion of progress, but it does not guarantee success.”

Enza Iannopollo, an analyst at Forrester, a research and advisory group: “Despite the criticism, this is good news for businesses and society. For businesses, it starts providing companies with a solid framework for the assessment and mitigation of risks that, if unchecked, could hurt customers and curtail businesses’ ability to benefit from their investments in the technology. And for society, it helps protect people from potential, detrimental outcomes.”
Agencies