Ex-OpenAI leader: Safety has 'taken a backseat to shiny products'


08:20 AM May 18, 2024
FILE PHOTO: The OpenAI logo is seen displayed on a cell phone with an image on a computer monitor generated by ChatGPT’s Dall-E text-to-image model, Friday, December 8, 2023, in Boston. A former OpenAI leader who resigned from the company earlier this week said on Friday, May 17, 2024, that product safety has “taken a backseat to shiny products” at the influential artificial intelligence company. (AP Photo/Michael Dwyer, file)

A former OpenAI leader who resigned from the company earlier this week said Friday that safety has “taken a backseat to shiny products” at the influential artificial intelligence (AI) company.

Jan Leike, who ran OpenAI’s “Superalignment” team alongside a company co-founder who also resigned this week, wrote in a series of posts on the social media platform X that he joined the San Francisco-based company because he thought it would be the best place to do AI research.

“However, I have been disagreeing with OpenAI leadership about the company’s core priorities for quite some time, until we finally reached a breaking point,” wrote Leike, whose last day was Thursday.


READ: GPT-4o is OpenAI’s latest flagship model


An AI researcher by training, Leike said he believes there should be more focus on preparing for the next generation of AI models, including on things like safety and analyzing the societal impacts of such technologies. He said building “smarter-than-human machines is an inherently dangerous endeavor” and that the company “is shouldering an enormous responsibility on behalf of all of humanity.”

“OpenAI must become a safety-first AGI company,” wrote Leike, using the abbreviated version of artificial general intelligence, a futuristic vision of machines that are as broadly smart as humans or at least can do many things as well as people can.


OpenAI CEO Sam Altman wrote in a reply to Leike’s posts that he was “super appreciative” of Leike’s contributions to the company and was “very sad to see him leave.”


Leike is “right we have a lot more to do; we are committed to doing it,” Altman said, pledging to write a longer post on the subject in the coming days.


The company also confirmed Friday that it had disbanded Leike’s Superalignment team, which was launched last year to focus on AI risks, and is integrating the team’s members across its research efforts.

READ: OpenAI unveils AI risk assessment guidelines


Leike’s resignation came after OpenAI co-founder and chief scientist Ilya Sutskever said Tuesday that he was leaving the company after nearly a decade. Sutskever was one of four board members last fall who voted to push out Altman — only to quickly reinstate him. It was Sutskever who told Altman last November that he was being fired, but he later said he regretted doing so.

Sutskever said he is working on a new project that is meaningful to him, without offering additional details. He will be replaced by Jakub Pachocki as chief scientist. Altman called Pachocki “also easily one of the greatest minds of our generation” and said he is “very confident he will lead us to make rapid and safe progress towards our mission of ensuring that AGI benefits everyone.”


On Monday, OpenAI showed off the latest update to its artificial intelligence model, which can mimic human cadences in its verbal responses and can even try to detect people’s moods.

TAGS: AI, OpenAI


© Copyright 1997-2024 INQUIRER.net | All Rights Reserved
