OpenAI CEO Testifies In First-Ever AI Senate Hearing
The global impact of artificial intelligence continues to grow, forcing countries to discuss how to regulate it. That is why the US Senate recently called OpenAI CEO Sam Altman and other tech leaders to testify at an unprecedented AI hearing. He and the senators agreed that the government needs AI laws.
This discussion will help you understand how quickly AI has shifted our daily lives. More importantly, it will explain how it will change further through government intervention. That way, you can prepare for the new laws affecting your use of artificial intelligence and everyday life.
This article will discuss the main points covered in the AI Senate hearing. Also, I will include examples of the technologies mentioned for further clarification. Read further to understand the expanding influence of artificial intelligence.
The 6 key takeaways from the AI Senate hearing
- Senator Richard Blumenthal opened with a deepfake.
- AI job replacement issue remains unresolved.
- Everyone believes AI needs regulations.
- AI may interfere with the upcoming US elections.
- Senators raised concerns about AI copyright.
- National security is too broad to discuss.
1. Senator Richard Blumenthal opened with a deepfake.
Senator Blumenthal started the AI Senate hearing thematically by playing an AI-generated recording of his voice discussing ChatGPT. Such synthetic recordings are known as deepfakes.
The recording was created from audio of his speeches, while the script came from asking ChatGPT how the senator would open the hearing. However, his unique introduction was more than a mere publicity stunt.
It sets the tone for the rest of the meeting by emphasizing generative AI’s capabilities. More importantly, Blumenthal drew attention to AI’s risks by making it speak in his voice.
Nowadays, many tools can speak using someone else’s voice. For example, Microsoft is developing VALL-E, an AI program that uses a three-second clip to say anything in someone’s voice.
2. AI job replacement issue remains unresolved.
One of the biggest concerns about artificial intelligence is that it may replace millions of jobs. ChatGPT showed it could automate repetitive office tasks, which comprise a huge portion of nationwide employment.
The senators asked OpenAI CEO Sam Altman to address that prediction. “I believe that there will be far greater jobs on the other side of this and that the jobs of today will get better,” he said.
Unfortunately, the hearing did not settle who should be responsible for protecting human employment. “I think it will require a partnership between the industry and government, but mostly action by the government to figure out how we want to mitigate that,” said Altman.
IBM chief privacy and trust officer Christina Montgomery responded. She said the country must prepare its workforce for AI-related skills through education and training.
That would be a monumental task because modern generative AI can produce more than text. For example, text-to-image tools like Midjourney and DALL-E threaten artists by producing images in seconds.
3. Everyone believes AI needs regulations.
Senator Lindsey Graham asks the witnesses at the hearing on artificial intelligence regulation if there should be an agency to license and oversee AI tools.
All say yes, but IBM Chief Privacy & Trust Officer Christina Montgomery has stipulations: pic.twitter.com/UD7R8N7s23
— Yahoo Finance (@YahooFinance) May 16, 2023
Senator Dick Durbin noted the unusually cooperative conversation between the public and private sectors. “I can’t recall when we’ve had people representing large corporations, or private sector entities come before us and plead with us to regulate them,” he said.
Altman and Montgomery signaled their willingness to work under government oversight. The OpenAI CEO also argued that Section 230 requires amendments.
Section 230 is part of the 1996 Communications Decency Act. It states, “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”
Altman said Section 230 does not apply to generative AI, so a new framework is needed to hold businesses liable for offering this technology.
NYU emeritus professor Gary Marcus suggested the government might create a new federal agency for artificial intelligence. The hearing also pondered licensing requirements for generative AI.
4. AI may interfere with the upcoming US elections.
Sam Altman, the CEO of the company behind ChatGPT, expressed concern that artificial intelligence could "go quite wrong" at a Senate committee hearing on Tuesday focusing on how to regulate the rapidly developing field of AI. https://t.co/iDMAW6BUb7
— CBS News (@CBSNews) May 17, 2023
The AI Senate hearing noted the rampant social media misinformation during the 2016 and 2020 elections. The participants discussed how Congress failed to hold these platforms accountable for content moderation.
Sam Altman acknowledged AI’s potential to spread biased and inaccurate information. “Given that we’re gonna face an election next year, and these models are getting better, I think this is a significant area of concern,” said the OpenAI CEO.
He said he supports labeling generative AI content to notify users. However, Marcus said the key issue is transparency: everyone should know how the algorithm works.
“One of the things that I’m most concerned about with GPT-4 is that we don’t know what it’s trained on. I guess Sam knows, but the rest of us do not. And what it is trained on has consequences for essentially the biases of the system.”
5. Senators raised concerns about AI copyright.
The committee members added levity to the discussion despite its serious subject matter. Senators Jon Ossoff and Cory Booker jokingly called each other brilliant and handsome.
Senator Peter Welch made a self-deprecating remark about his interest in the AI Senate hearing. “Senators are noted for their short attention spans, but I’ve sat through this entire hearing and enjoyed every minute of it,” he stated.
The senators asked Altman about OpenAI’s automated music tool and AI copyright. Senator Marsha Blackburn said it might create a song mimicking her favorite artist, Garth Brooks.
Senator Mazie Hirono had a similar concern regarding deepfakes resembling her favorite BTS songs. Despite the silly examples, they raised valid points about intellectual property.
6. National security is too broad to discuss.
'We have built machines that are like bulls in a china shop: powerful, reckless, and difficult to control.'
NYU Professor Gary Marcus sounded the alarm during a Senate hearing Tuesday on the emerging threats of AI. pic.twitter.com/d2b2kdJBWK
— NowThis (@nowthisnews) May 16, 2023
The AI Senate hearing lasted three hours. It covered numerous topics, like employment, intellectual property, misinformation, privacy, safety, and discrimination.
However, that time was insufficient to address how AI could affect national security or the global economy. Blumenthal only mentioned these threats in his concluding remarks.
“The sources of threats to this nation in this space are very real and urgent,” the senator said. Yet, he emphasized, “We’re not going to deal with them today, but we do need to deal with them.”
The Connecticut senator’s remarks are valid as more countries develop AI programs. For example, China’s Alibaba announced its ChatGPT-like bot and AI services.
OpenAI CEO Sam Altman and other AI experts publicly discussed AI’s impact with US senators. Both parties agreed that the country needs to regulate this technology.
The public sector showed the American people that it understands the effects of artificial intelligence. Meanwhile, the private sector showed its commitment to complying with regulations.
That AI Senate hearing shows that artificial intelligence is becoming more prevalent daily. Prepare by learning the latest tips and trends from Inquirer Tech.