White House says tech giants have ‘moral’ duty on AI
The White House on Thursday told the CEOs of US AI giants that they have a “moral” responsibility to protect society from the potential dangers of artificial intelligence (AI).
Vice President Kamala Harris had summoned the heads of Google, Microsoft, OpenAI, and Anthropic to strategize about the impact of AI, amid fears that companies are rushing headlong into technology that could pose serious harm to society.
Harris told the CEOs, who included Sundar Pichai of Google and Satya Nadella of Microsoft, that they have a “moral” duty to safeguard society from AI’s potential dangers.
Companies “must comply with existing laws to protect the American people” as well as “ensure the safety and security of their products,” Harris said in a statement after the talks.
US President Joe Biden also insisted on that point when he briefly dropped by the meeting, telling the assembled CEOs, “What you’re doing has enormous potential and enormous danger.
“I know you understand that. And I hope you can educate us as to what you think is most needed to protect society as well as to the advancement,” he said, according to a video posted later by the White House.
Biden has urged Congress to pass laws setting stricter limits on the tech sector, but these efforts have little chance of making headway given political divisions.
The lack of rules has given Silicon Valley freedom to put out new products rapidly, and stoked fears that AI technologies will wreak havoc on society before the government can catch up.
“It’s good to try to get ahead of this. It’s definitely going to be a challenge but it’s one I think we can handle,” OpenAI CEO Sam Altman told reporters before the meeting.
His company, supercharged by billions of dollars from Microsoft, took the lead in making AI available to everyday consumers with the release of ChatGPT, which caused a global sensation five months ago.
Microsoft quickly built the chatbot’s ability to crank out natural-seeming written responses from short prompts into its Bing search engine and other products.
The Windows maker on Thursday expanded public access to these generative AI programs, despite the criticism and the White House meeting.
Risks from AI include its potential use for fraud, with voice clones, deepfake videos and convincingly written messages.
It also threatens white-collar jobs, especially, for now, lower-skilled back-office work.
A range of experts in March urged a pause in the development of powerful AI systems to allow time to make sure they are safe, though a halt was widely seen as unlikely.
The White House used Thursday’s meeting to announce new actions to “promote responsible American innovation in artificial intelligence.”
This included directing $140 million to expand AI research and setting up an assessment system that would work in cooperation with big tech to “fix issues.”
“Don’t get your hopes up that this will lead to anything particularly meaningful, but it’s a good start,” said David Harris, a lecturer at the Haas School of Business at the University of California, Berkeley.
Race to the bottom
Google, Meta, and Microsoft have spent years working on AI systems to help with translations, internet searches, security, and targeted advertising.
But late last year, San Francisco-based OpenAI thrust generative AI into the public consciousness when it launched ChatGPT, forcing its rivals to answer.
Google has invited users in the United States and Britain to test its AI chatbot, known as Bard, with Facebook owner Meta pointing to new uses in its ad tech.
And billionaire Elon Musk in March founded an AI company called X.AI, based in the US state of Nevada, according to business documents.
A top US regulator put AI in the crosshairs ahead of the White House meeting, signaling that the US government would not fall behind when it came to setting up rules and guardrails.
“Can we continue to be the home of world-leading technology without accepting race-to-the-bottom business models and monopolistic control?” Federal Trade Commission chief Lina Khan wrote in a guest essay in the New York Times.
“Yes – if we make the right policy choices.”