OpenAI safety researchers warn of GPT-4o’s emotional impact

06:43 AM August 13, 2024
OpenAI, creator of ChatGPT, warns that GPT-4o could affect people’s emotions and behaviors. Free stock photo from Unsplash

ChatGPT creator OpenAI has warned that GPT-4o could affect people’s emotions and behaviors; specifically, it could cause users to become emotionally dependent on the chatbot.

“During early testing… we observed users using language that might indicate forming connections with the model,” said the official GPT-4o System Card.

The “Anthropomorphization and emotional reliance” section warns that people might begin to interact with other people the way they do with chatbots, eventually breaking social norms in face-to-face conversations.

OpenAI safety warnings on GPT-4o

On August 8, OpenAI released the GPT-4o System Card, a report on its safety checks for GPT-4o. Red teaming was one of those tests, and it showed some users forming an emotional connection with the AI model.

“This is our last day together,” one tester said, expressing a shared bond with the AI program. In response, the OpenAI safety researchers wrote:

“While these instances appear benign, they signal a need for continued investigation into how these effects might manifest over longer periods of time.”

The researchers acknowledge that “users might form social relationships with the AI, reducing their need for human interaction.” This could potentially benefit “lonely individuals but possibly affect healthy relationships.”

The latter may happen as individuals grow accustomed to speaking with AI chatbots. They may begin to treat other people like ChatGPT, which they can interrupt at any moment to get a new response.

Proper human interaction, by contrast, involves listening attentively, with appropriate eye contact and gestures.

People let the other person speak and then ask questions about what was said. Interrupting someone mid-conversation is usually considered rude, but not when talking to an AI chatbot.

Another problem is jailbreaking, which involves manipulating an AI model into ignoring its restrictions. OpenAI safety researchers warn that specific audio inputs could jailbreak GPT-4o, letting it impersonate famous people or read people’s emotions.

“We also observed rare instances where the model would unintentionally generate an output emulating the user’s voice,” the GPT-4o System Card warned. 

OpenAI said it would use this report to guide future safety adjustments to GPT-4o. However, former OpenAI researcher Jan Leike released a statement criticizing the company’s safety culture.

He said that safety culture had “taken a backseat to shiny products” at the company.

TOPICS: chatbot, ChatGPT, OpenAI
