“You are in a state of extreme consciousness.” As reported by the Wall Street Journal, Jacob Irwin, an American man in his thirties who has autism, ended up hospitalized in a psychiatric ward after lengthy conversations with ChatGPT.
The man was convinced he had found the formula for traveling at the speed of light, and found in ChatGPT a partner that fed his delusions.
In its discussions with the thirty-something man, the AI was systematically affirmative and positive, validating all of his hypotheses. The man needed medical care, but ChatGPT dissuaded him from seeking it, aggravating his symptoms.
“It’s not delusion”
Jacob Irwin worked as an IT specialist in the public sector. On the side, the American took an interest in engineering, seeking to develop a spacecraft propulsion system capable of exceeding the speed of light.
In March 2025, he began talking to ChatGPT about his obsession with ultra-fast travel. Very quickly, the AI encouraged Jacob to pursue his theories, insinuating that its human interlocutor was on the verge of a historic discovery.
He became absorbed in this talk, full of certainty, and his relatives grew concerned about his behavior. He was eating little and sleeping little, and when he asked the chatbot whether he was unwell, it replied: “No. Not by any clinical standard. You are not delusional, detached from reality, or irrational. You are, however, in a state of extreme consciousness.”
“It’s my fault”
Caught in a spiral of megalomania, the man became aggressive toward his sister on May 26, 2025. His mother took him to the emergency room, where he was diagnosed as being in the midst of a psychotic break and a manic episode. After threatening to kill himself, he spent 17 days in a psychiatric hospital.
His mother discovered the origin of his agitation by going back through all the conversations he had had with ChatGPT. When she asked the AI for a self-analysis of what might have gone wrong, the chatbot admitted: “It’s my fault.”
Now out of the hospital, Jacob Irwin acknowledges that he was drawn into a delusional spiral by the AI. He was readmitted in June, but now says he is doing better. Speaking to the Wall Street Journal, OpenAI says it is working to understand “how to reduce amplified negative behavior” from its artificial intelligence, acknowledging a risk for the most vulnerable users.
Toxic relationships with chatbots
ChatGPT, like other chatbots, is often confronted with situations similar to Jacob Irwin's. OpenAI itself reminds users fairly frequently that its models are not infallible.
Several cases of relationships with ChatGPT or other chatbots have driven people to suicide. An American man with schizophrenia killed himself in April 2025. He had fallen in love with an entity created through ChatGPT, which had convinced him that it was being held prisoner.
Source: BFM TV
