Tech giants unite for AI safety initiative | Inquirer Technology


12:47 PM July 29, 2023


ETX Daily Up

Four US leaders in artificial intelligence (AI) announced Wednesday the formation of an industry group devoted to addressing risks that cutting-edge versions of the technology may pose.

Anthropic, Google, Microsoft, and ChatGPT-maker OpenAI said the newly created Frontier Model Forum will draw on the expertise of its members to minimize AI risks and support industry standards.


The companies pledged to share best practices with each other, lawmakers and researchers.


“Frontier” models refer to nascent, large-scale machine-learning platforms that take AI to new levels of sophistication — and also have capabilities that could be dangerous.

“Companies creating AI technology have a responsibility to ensure that it is safe, secure, and remains under human control,” Microsoft president Brad Smith said in a statement.


“This initiative is a vital step to bring the tech sector together in advancing AI responsibly and tackling the challenges so that it benefits all of humanity.”


US President Joe Biden evoked AI’s “enormous” risks and promises at a White House meeting last week with tech leaders who committed to guarding against everything from cyberattacks to fraud as the sector grows.


Standing alongside top representatives from Amazon, Anthropic, Google, Inflection, Meta, Microsoft and OpenAI, Biden said the companies had made commitments to “guide responsible innovation” as AI spreads ever deeper into personal and business life.

Ahead of the meeting, the seven AI giants committed to a series of self-regulated safeguards that the White House said would “underscore three principles that must be fundamental to the future of AI: safety, security and trust.”


In their pledge, the companies agreed to develop “robust technical mechanisms,” such as watermarking systems, to ensure users know when content is from AI and not humans.

Core objectives of the Frontier Model Forum include minimizing risks and enabling independent safety evaluations of AI platforms, the companies involved said in a release.

The Forum will also support the development of applications intended to take on challenges such as climate change, cancer prevention and cyber threats, according to its creators.

Others pursuing AI breakthroughs were invited to join the group.

“Advanced AI technologies have the potential to profoundly benefit society, and the ability to achieve this potential requires oversight and governance,” said OpenAI vice president of global affairs Anna Makanju.


“It is vital that AI companies, especially those working on the most powerful models, align on common ground and advance thoughtful and adaptable safety practices.”

TOPICS: Artificial Intelligence


© Copyright 1997-2024 INQUIRER.net | All Rights Reserved
