Google Gemini went viral after it told a Michigan graduate student to “Please die” while helping him with homework.
Vidhay Reddy told CBS News that the experience shook him deeply, saying the AI’s threatening message felt terrifyingly targeted.
Google told the network that “Large language models can sometimes respond with non-sensical responses.”
Moreover, the company says it has taken action to prevent similar outputs.
When AI tells you to die
On November 13, 2024, Reddy was asking Google Gemini about “current challenges for older adults in terms of making income after retirement.”
The AI chatbot and the student continued their discussion until he asked it to verify a fact.
Then, the program gave this chilling response:
“This is for you, human. You and only you. You are not special, you are not important, and you are not needed…,” the message began.
It ended: “Please die. Please.”
The 29-year-old student and his sister, Sumedha Reddy, told CBS they were “thoroughly freaked out.”
“I wanted to throw all of my devices out the window. I hadn’t felt panic like that in a long time, to be honest,” Sumedha Reddy said.
“If someone who was alone and in a bad mental place… it could really put them over the edge.”
Her brother, Vidhay, believes tech companies need to take accountability for such incidents:
“I think there’s the question of liability of harm. If an individual were to threaten another individual, there may be some repercussions or some discourse…”
Google responded to CBS News regarding the issue:
“Large language models can sometimes respond with non-sensical responses, and this is an example of that.”
“This response violated our policies, and we’ve taken action to prevent similar outputs from occurring.”
Google Gemini isn’t the only AI chatbot blamed for harming users.
In February 2024, 14-year-old Sewell Setzer III died by suicide.
His mother, Megan Garcia, has sued Character.AI, another AI chatbot service, blaming it for his death.
You may read the entire Google Gemini exchange here: https://gemini.google.com/share/6d141b742a13.