How governments are regulating AI tools (2023)

Rapid advances in artificial intelligence (AI), such as Microsoft-backed OpenAI’s ChatGPT, are making it difficult for governments to agree on laws governing the use of the technology.

Here are the most recent measures taken by national and international governing authorities to regulate AI tools:

AUSTRALIA * Seeking feedback on rules

A spokesperson for the minister of industry and science stated in April that the government is consulting Australia’s primary science advisory body and weighing its next steps.

BRITAIN * Planning regulations

The British competition regulator announced on May 4 that it would examine the impact of artificial intelligence on consumers, businesses, and the economy to determine whether new regulations are required. Instead of establishing a new body, the United Kingdom announced in March that it would divide responsibility for regulating artificial intelligence among its regulators for human rights, health and safety, and competition.

CHINA * Planning regulations

China’s cyberspace regulator unveiled proposed measures to govern generative AI services in April, requiring companies to submit security assessments to authorities prior to launching offerings to the general public. China’s economy and information technology bureau stated in February that Beijing will assist prominent businesses in developing AI models that can compete with ChatGPT.

EUROPEAN UNION * Planning regulations

On May 11, EU legislators agreed on stricter draft regulations to rein in generative AI and proposed a ban on facial surveillance. The European Parliament will deliberate on the draft AI Act next month.

In April, EU legislators struck an agreement on a draft that could pave the way for the world’s first comprehensive laws regulating the technology. Protection of intellectual property is central to the bloc’s efforts to rein in AI.

The European Data Protection Board, which unites Europe’s national privacy watchdogs, announced in April that it had established a task force on ChatGPT, a potentially significant first step toward a common policy on establishing privacy rules for artificial intelligence.

The European Consumer Organisation (BEUC) shares the concern regarding ChatGPT and other artificial intelligence (AI) chatbots, and has urged EU consumer protection agencies to investigate the technology and the potential harm to individuals.

FRANCE * Investigating potential violations

The French privacy authority CNIL announced in April that it was investigating a number of complaints about ChatGPT, after the chatbot was temporarily banned in Italy for allegedly violating privacy regulations. In March, the French National Assembly approved the use of AI video surveillance during the 2024 Olympic Games in Paris, despite warnings from civil rights groups.

G7 * Seeking input on regulatory measures

The Group of Seven advanced nations should adopt “risk-based” AI regulation, G7 digital ministers stated following their April 29-30 meeting in Japan.

IRELAND * Seeking feedback on rules

In April, Ireland’s data protection chief stated that generative AI must be regulated, but that governing bodies must figure out how to do so properly before racing into prohibitions that “really won’t stand up.”

ITALY * Ban lifted

OpenAI announced on April 28 that ChatGPT was once again accessible to Italian users. In March, Italy had imposed a temporary ban on ChatGPT after its data protection authority raised concerns about potential privacy violations and the failure to verify that users were at least 13 years old, as required.

SPAIN * Investigating potential violations

The Spanish data protection agency announced in April that it was initiating an investigation into potential data violations by ChatGPT. It has also requested that the EU’s privacy authority evaluate ChatGPT’s privacy concerns, the agency told Reuters in April.

U.S. * Seeking feedback on rules

The head of the U.S. Federal Trade Commission stated on May 3 that the agency was committed to enforcing existing laws to prevent some of the threats posed by artificial intelligence, such as entrenching the power of dominant firms and “turbocharging” fraud.

Senator Michael Bennet introduced a bill on April 27 that would establish a task force to examine U.S. policies on artificial intelligence and determine the most effective means of minimizing threats to privacy, civil liberties, and due process. In April, the Biden administration requested public feedback on prospective accountability measures for AI systems.

President Joe Biden had previously informed science and technology advisors that artificial intelligence could aid in the fight against disease and climate change, but that it was also crucial to address potential threats to society, national security, and the economy.
