Authorities around the globe are rushing to rein in artificial intelligence, including in the European Union, where ground-breaking legislation is expected to clear a crucial hurdle on Wednesday.
European Parliament legislators are scheduled to deliberate on the proposal, which includes contentious amendments regarding facial recognition, as it moves closer to approval.
Rapid advances in chatbots such as ChatGPT demonstrate the potential benefits of the emergent technology, as well as the new dangers it poses, adding urgency to Brussels’ multi-year effort to establish AI safeguards.
How do the rules function?
The measure, first proposed in 2021, will govern any product or service that uses an AI system. The act will classify AI systems into four risk levels, ranging from minimal to unacceptable.
Riskier applications, such as those for employment or technology aimed at minors, will be subject to stricter requirements, including greater data accuracy and transparency.
Infractions will result in sanctions of up to 30 million euros ($33 million) or 6% of a company’s annual global revenue, which in the case of Google and Microsoft could equate to billions of dollars.
The 27 EU member states will be responsible for enforcing the regulations.
What are the dangers?
One of the primary objectives of the EU is to prevent AI from posing health and safety risks and to safeguard fundamental rights and values.
Thus, certain AI applications, such as “social scoring” systems that evaluate individuals based on their behavior, are strictly prohibited.
Also prohibited is AI that exploits vulnerable people, including children, or employs subliminal manipulation that can cause harm, such as an interactive talking device that promotes risky behavior.
Predictive policing tools, which crunch data to forecast who will commit offenses, are also banned.
Legislators strengthened the original proposal from the European Commission, the EU’s executive branch, by expanding the ban on remote facial recognition and biometric identification in public. The technology scans passers-by and uses artificial intelligence to match their faces or other physical features to a database.
However, it confronts a last-minute challenge after a center-right party added an amendment allowing exceptions for law enforcement purposes such as locating missing children, identifying criminal suspects, and preventing terrorist threats.
“We do not want mass surveillance, social scoring, or predictive policing in the European Union, period. That is what China does, not us,” said Dragos Tudorache, a Romanian member of the European Parliament who is co-leading the body’s work on the AI Act.
AI systems used in fields such as employment and education, which could affect the course of a person’s life, are subject to stringent requirements, such as being transparent with users and assessing and mitigating algorithmic bias.
The majority of AI systems, such as video games and spam filters, fall into the category of low or no risk, according to the commission.
What About ChatGPT?
The original measure scarcely mentioned chatbots, requiring only that they be labeled so users know they are interacting with a machine. After ChatGPT’s meteoric rise in popularity, negotiators later added provisions to cover general-purpose AI, subjecting it to some of the same requirements as high-risk systems.
A crucial addition is the requirement to thoroughly document any copyrighted material used to train AI systems to generate text, images, video, and audio that resemble human work.
This would let content creators know if their blog posts, digital books, scientific articles, or music were used to train the algorithms that power systems such as ChatGPT. They could then determine whether their work has been copied and seek compensation.
Why Are EU Regulations So Important?
The European Union is not a major participant in the development of cutting-edge AI. That role belongs to the United States and China. But Brussels frequently serves as a trend-setter with regulations that tend to become global de facto standards.
According to experts, the mere scale of the EU’s single market, with 450 million consumers, makes it simpler for businesses to conform than to develop distinct products for different regions.
But this is not simply about cracking down. Brussels is also trying to develop the market for AI by building user confidence through common rules.
“The fact that this regulation can be enforced and companies will be held liable is significant,” said Kris Shrishak, a technologist and senior fellow at the Irish Council for Civil Liberties. Countries such as the United States, Singapore, and Britain have offered only “guidance and recommendations,” he noted.
“Other nations may wish to adapt and imitate” EU regulations, he said.
Others are playing catch-up. The United Kingdom, which left the European Union in 2020, is competing for AI leadership. This autumn, Prime Minister Rishi Sunak plans to host a global summit on AI safety.
“I want to make the United Kingdom not only the intellectual home of global AI safety regulation, but also its geographical home,” Sunak said at a tech conference this week.
The British summit will bring together “academia, business, and governments from around the world” to work on a “multilateral framework,” he said.
What Comes Next?
It may be years before the rules fully take effect. The vote will be followed by three-way negotiations involving member countries, the European Parliament, and the European Commission, which may result in further wording changes as they work toward an agreement.
The final approval is anticipated by the end of this year, followed by a typically two-year grace period for companies and organizations to adapt.
Europe and the United States are drafting a voluntary code of conduct that officials promised at the end of May would be formulated within a matter of weeks and could be expanded to other “like-minded countries.”