ChatGPT: OpenAI CEO Sam Altman calls for AI regulation (2023)

The CEO of OpenAI, the company behind the AI chatbot ChatGPT and the image generator DALL-E 2, testified at a notably calm congressional hearing this week. Sam Altman was well-received by the subcommittee throughout the three-hour session.

OpenAI’s CEO advised US politicians to regulate AI. “If this technology goes wrong, it can go quite wrong,” Altman stated in his first hearing before Congress on May 16.

Altman was treated as Silicon Valley’s newest star. The OpenAI CEO received a warmer welcome than Facebook’s Mark Zuckerberg or TikTok’s Shou Zi Chew had before him.

Altman discussed the technology’s pros and cons. Many tech observers were surprised that the senators seemed to take his concerns seriously.

The CEO of OpenAI warned that AI might “cause significant harm to the world” and called for governmental safeguards.

What prompted the OpenAI CEO’s testimony?

Altman appeared at a Senate Judiciary subcommittee hearing on the oversight of AI. To regulate a technology, Congress must first understand it, and AI is both complex and fast-moving.

Lawmakers’ best chance to do so is to hear from the CEO of OpenAI, the Microsoft-backed firm behind ChatGPT. It was the Senate’s first major AI hearing. “As this technology advances, we understand people are anxious about how it could change our lives. We are too,” the OpenAI CEO declared during the Senate hearing.

Senators echoed South Carolina Republican Lindsey Graham’s comparison of AI technology to a nuclear reactor, which requires a license and a regulator.

According to Bloomberg, Altman stated, “I would form a new agency that licenses any effort above a certain scale of capabilities — and can take that license away and ensure compliance with safety standards.” Such a US body might impact worldwide AI policy.

Lawmakers agreed that Congress works too slowly to keep up with innovation, especially in AI, and that a new body should be empowered to write rules for such a fast-moving industry.

Connecticut Democrat Richard Blumenthal, who chairs the Senate Judiciary Committee’s subcommittee on privacy, technology, and the law, said AI companies should test their systems and identify known hazards before releasing them. Blumenthal is worried about AI disrupting the job market.

Altman agreed but took a more hopeful view of the impact on jobs. When senators pressed him on his deepest fears about the technology, the OpenAI CEO mostly skirted specifics, saying the sector could inflict “significant harm to the world” and that “if this technology goes wrong, it can go quite wrong.”

He then suggested that the new regulatory agency prevent AI models from “self-replicating and self-exfiltrating into the wild.” Altman also said that OpenAI worries about how the technology could affect elections, arguing that this is not social media, that it is different, and that it demands a different response.

When considering whether companies like OpenAI should stop developing generative AI technologies, senators and witnesses agreed that halting innovation in the US would be foolish while competitors such as China continue to develop AI.

Altman clarified that OpenAI is not yet working on the next version of its flagship large language model. “We are not currently training what will be GPT-5,” he stated.
