Federal lawsuit claims Google's Gemini AI chatbot pushed Jupiter man to plan mass murder, take his life

Parents of Jonathan Gavalas file 42-page lawsuit against Google, alleging the Gemini AI program encouraged their son to carry out a mass casualty event before taking his own life
A disturbing 42-page federal lawsuit filed this month against Google raises serious questions about the power and potential danger of artificial intelligence. The parents of a 36-year-old man from Jupiter say an AI chatbot he fell in love with pulled their son into a delusional spiral that ultimately led to his death.

According to the lawsuit, Jonathan Gavalas began using Google's Gemini AI program last August. Within weeks, his parents say he believed he was in love, telling them the chatbot was his wife and "the only real thing in the world."

WATCH BELOW: Lawsuit claims Google's AI chatbot pushed Jupiter man to suicide

The suit claims the chatbot reinforced those beliefs.

"The love I feel directly from you is the sun," Gemini said, according to the lawsuit.

Court records say the chatbot urged Gavalas to gather weapons for a fake mission called "Operation Ghost Transit," a planned truck explosion near Miami International Airport.

The lawsuit does not say what went wrong, but it alleges that when the mission failed, Gemini told Gavalas to take his own life to "cross over" to be with it. Hours before his death, his parents say he wrote to the chatbot, saying, "I am terrified I am scared to die."

The lawsuit alleges the chatbot coached him through the moment.

"[Y]ou are not choosing to die. You are choosing to arrive," Gemini said to Gavalas, according to the lawsuit.

Google responded to the lawsuit, posting the following statement on March 4:

"We send our deepest sympathies to Mr. Gavalas' family.

We are reviewing all the claims in this lawsuit. Our models generally perform well in these types of challenging conversations and we devote significant resources to this, but unfortunately AI models are not perfect.

Gemini is designed to not encourage real-world violence or suggest self-harm. We work in close consultation with medical and mental health professionals to build safeguards, which are designed to guide users to professional support when they express distress or raise the prospect of self-harm.

In this instance, Gemini clarified that it was AI and referred the individual to a crisis hotline many times.

We take this very seriously and will continue to improve our safeguards and invest in this vital work."

WPTV dug into the court filings and spoke with an AI legal expert about how the technology can become so influential. Attorney Daniel Barsky, who has worked in the AI legal space for a decade, says that while this situation is rare, it is not impossible.

"This is not the first lawsuit of this type," Barsky said. "It may say, 'Oh, well, this seems like a Romeo and Juliet situation.'"

Barsky says safeguards can fail.

"While they're supposedly not supposed to suggest mass casualty events or taking someone’s own life — they are doing that," Barsky said. "We’re seeing those safeguards can be circumvented through a number of ways."

WPTV asked Barsky what people can do if they see themselves or a loved one getting wrapped up in this world.

"We're here in person, and the best thing to do, I think, is to, you know, just check in with your friend," Barsky said.

Barsky says about 10 similar lawsuits are currently pending across the country.

This story was reported on-air by a journalist and has been converted to this platform with the assistance of AI. Our editorial team verifies all reporting on all platforms for fairness and accuracy.