
Amazon, Google, Meta, Microsoft, other tech firms agree to White House-set AI safeguards

06:25 AM July 22, 2023


President Joe Biden speaks about artificial intelligence in the Roosevelt Room of the White House, Friday, July 21, 2023, in Washington, as from left, Adam Selipsky, CEO of Amazon Web Services; Greg Brockman, President of OpenAI; Nick Clegg, President of Meta; and Mustafa Suleyman, CEO of Inflection AI, listen. (AP Photo/Manuel Balce Ceneta)

WASHINGTON — President Joe Biden said Friday that new commitments by Amazon, Google, Meta, Microsoft, and other companies that are leading the development of artificial intelligence (AI) technology to meet a set of AI safeguards brokered by his White House are an important step toward managing the “enormous” promise and risks posed by the technology.

Biden announced that his administration has secured voluntary commitments from seven US companies meant to ensure that their AI products are safe before they release them. Some of the commitments call for third-party oversight of the workings of the next generation of AI systems, though they don’t detail who will audit the technology or hold the companies accountable.


“We must be clear-eyed and vigilant about the threats emerging technologies can pose,” Biden said, adding that the companies have a “fundamental obligation” to ensure their products are safe.


“Social media has shown us the harm that powerful technology can do without the right safeguards in place,” Biden added. “These commitments are a promising step, but we have a lot more work to do together.”


A surge of commercial investment in generative AI tools that can write convincingly human-like text and churn out new images and other media has brought public fascination as well as concern about their ability to trick people and spread disinformation, among other dangers.

The four tech giants, along with ChatGPT-maker OpenAI and startups Anthropic and Inflection, have committed to security testing “carried out in part by independent experts” to guard against major risks, such as to biosecurity and cybersecurity, the White House said in a statement.

That testing will also examine the potential for societal harms, such as bias and discrimination, as well as more theoretical dangers posed by advanced AI systems that could gain control of physical systems or “self-replicate” by making copies of themselves.

The companies have also committed to methods for reporting vulnerabilities in their systems and to using digital watermarking to help distinguish real images and audio from AI-generated ones known as deepfakes.



Executives from the seven companies met behind closed doors with Biden and other officials Friday as they pledged to follow the standards.

“He was very firm and clear” that he wanted the companies to continue to be innovative, but at the same time “felt that this needed a lot of attention,” Inflection CEO Mustafa Suleyman said in an interview after the White House gathering.

“It’s a big deal to bring all the labs together, all the companies,” said Suleyman, whose Palo Alto, California-based startup is the youngest and smallest of the firms. “This is super competitive and we wouldn’t come together under other circumstances.”

The companies will also publicly report flaws and risks in their technology, including effects on fairness and bias, according to the pledge.


The voluntary commitments are meant to be an immediate way of addressing risks ahead of a longer-term push to get Congress to pass laws regulating the technology.

Some advocates for AI regulation said Biden’s move is a start but more needs to be done to hold the companies and their products accountable.

“A closed-door deliberation with corporate actors resulting in voluntary safeguards isn’t enough,” said Amba Kak, executive director of the AI Now Institute. “We need a much more wide-ranging public deliberation, and that’s going to bring up issues that companies almost certainly won’t voluntarily commit to because it would lead to substantively different results, ones that may more directly impact their business models.”

While voluntary, agreeing to submit to “red team” tests that poke at their AI systems is not an easy promise, said Suleyman.


“The commitment we’ve made to have red-teamers basically try to break our models, identify weaknesses and then share those methods with the other large language model developers is a pretty significant commitment,” Suleyman said.

Senate Majority Leader Chuck Schumer (Democrat-New York) has said he will introduce legislation to regulate AI and is working closely with the Biden administration “and our bipartisan colleagues” to build upon the pledges made Friday.

A number of technology executives have called for regulation, and several attended an earlier White House summit in May.

Microsoft President Brad Smith said in a blog post Friday that his company is making some commitments that go beyond the White House pledge, including support for regulation that would create a “licensing regime for highly capable models.”


Some experts and upstart competitors worry that the type of regulation being floated could be a boon for deep-pocketed first-movers led by OpenAI, Google and Microsoft as smaller players are elbowed out by the high cost of making their AI systems adhere to regulatory strictures.

The White House pledge notes that it mostly applies only to models that “are overall more powerful than the current industry frontier,” set by recent models such as OpenAI’s GPT-4 and image generator DALL-E 2 and similar releases from Anthropic, Google and Amazon.

A number of countries have been looking at ways to regulate AI, including European Union lawmakers negotiating sweeping AI rules for the 27-nation bloc that could restrict applications deemed to have the highest risks.

UN Secretary-General Antonio Guterres recently said the United Nations is “the ideal place” to adopt global standards and appointed a board that will report back on options for global AI governance by the end of the year.


Guterres also said he welcomed calls from some countries for the creation of a new UN body to support global efforts to govern AI, inspired by such models as the International Atomic Energy Agency or the Intergovernmental Panel on Climate Change.

The White House said Friday that it has consulted on the voluntary commitments with a number of countries.

The pledge is heavily focused on safety risks but doesn’t address other worries about the latest AI technology, including the effect on jobs and market competition, the environmental resources required to build the models, and copyright concerns about the writings, art and other human handiwork being used to teach AI systems how to produce human-like content.

Last week, OpenAI and The Associated Press announced a deal for the AI company to license AP’s archive of news stories. The amount it will pay for that content was not disclosed.



