OpenAI safety researchers warn of GPT-4o’s emotional impact

OpenAI, creator of ChatGPT, warns that GPT-4o could affect people’s emotions and behaviors. Free stock photo from Unsplash

ChatGPT creator OpenAI warned that GPT-4o could affect people’s emotions and behaviors, specifically by causing users to become emotionally dependent on the chatbot.

“During early testing… we observed users using language that might indicate forming connections with the model,” said the official GPT-4o System Card.

The “Anthropomorphization and emotional reliance” section says people might begin to interact with other humans the way they do with chatbots, eventually breaking social norms in face-to-face conversations.

OpenAI safety warnings on GPT-4o

On August 8, OpenAI published the GPT-4o System Card, a report on its safety checks for GPT-4o. Red teaming was one of those tests, and it showed some testers forming an emotional connection with the AI model.

“This is our last day together,” one tester said, expressing a shared bond with the AI program. In response, the OpenAI safety researchers wrote:

“While these instances appear benign, they signal a need for continued investigation into how these effects might manifest over longer periods of time.”


The researchers admit that “users might form social relationships with the AI, reducing their need for human interaction.” Such reliance could potentially benefit “lonely individuals but possibly affect healthy relationships.”

The latter may happen as individuals grow accustomed to speaking with AI chatbots. They may begin to treat other people like ChatGPT, which they can interrupt at any moment to get a new response.

On the other hand, proper interaction involves listening attentively, with appropriate eye contact and gestures.

People let the other person finish speaking and then ask questions about what was said. Interrupting someone mid-conversation is usually rude, but that norm does not apply to AI chatbots.


Another problem is jailbreaking, which involves manipulating a model into ignoring its restrictions. OpenAI safety researchers warn that specific audio inputs could jailbreak GPT-4o, letting it impersonate famous people or read people’s emotions.

“We also observed rare instances where the model would unintentionally generate an output emulating the user’s voice,” the GPT-4o System Card warned. 

OpenAI said it will use this report to guide future safety adjustments to GPT-4o. However, former OpenAI researcher Jan Leike released a statement criticizing the company’s safety culture. 

He said that safety culture had “taken a backseat to shiny products” at the company.
