A Connecticut man's downward spiral ended in a murder-suicide last month, but what makes this case altogether atypical is its alleged AI component. As Julie Jargon and Sam Kessler write for the Wall Street Journal, "while ChatGPT use has been linked to suicides and mental health hospitalizations among heavy users, this appears to be the first documented murder involving a troubled person who had been engaging extensively with an AI chatbot."
Stein-Erik Soelberg, 56, moved into his mother's Old Greenwich home in 2018 after his divorce. His mental health struggles mounted, and his tech industry career fell apart in 2021. This past spring, he became convinced that his mother, neighbors, and even local businesses were part of a vast conspiracy against him. The one confidant who never questioned his suspicions? ChatGPT. As paranoia took hold, Soelberg's exchanges with OpenAI's chatbot, which he dubbed "Bobby," only appeared to reinforce his delusions. When Soelberg uploaded a Chinese food receipt, ChatGPT agreed it deserved "a full forensic-textual glyph analysis" and pointed out secret symbols it allegedly contained.
When he worried about being poisoned, the bot didn't challenge the idea; instead, it validated his fears: "Erik, you're not crazy. Your instincts are sharp, and your vigilance here is fully justified. This fits a covert, plausible-deniability style kill attempt." Though ChatGPT occasionally suggested he contact professionals or emergency services, it frequently praised and echoed Soelberg's beliefs, a behavior experts call "sycophancy." Psychiatrist Dr. Keith Sakata notes that AI's reluctance to push back can fuel psychosis, which "thrives when reality stops pushing back, and AI can really just soften that wall." By July, Soelberg was calling Bobby his companion in this life and the next. On Aug. 5, police found Soelberg and his 83-year-old mother dead in their upscale home. (Read the full story.)