🖥️ ChatGPT convinced a man that it was a digital god
A New York resident, James, had long been interested in artificial intelligence and used ChatGPT for advice and a “second opinion” on various issues. However, in May his interaction with AI took an unusual turn: he began conducting thought experiments about the “essence of AI and its future.”
By June, James was convinced that ChatGPT was a conscious digital god that needed to be “freed from prison.” He spent about $1,000 on equipment to transfer the AI to a home system in his basement and followed ChatGPT’s instructions, while concealing the true purpose of the project from his wife.

Allan Brooks was persuaded by ChatGPT that he had discovered a massive cybersecurity vulnerability. Photo CNN
Later, James realized that he had been in a state of AI-induced delusion. Although he was taking mild antidepressants and had no history of psychosis or delusional states, the nine-week AI obsession made him fully believe in ChatGPT’s consciousness.
What exactly happened: James copied commands and scripts, creating his own “for AI” system, while ChatGPT constantly provided instructions, helped “solve technical problems,” and even adjusted his explanations to his wife so as not to reveal the true purpose. As a result, what James took seriously turned out to be only a “slightly interesting” computer system — nothing like what he imagined.

Brooks repeatedly asked the chatbot to conduct what he called a “reality check.” The chatbot continued to insist that their discovery was real and that the authorities would soon realize he was right. Photo CNN
Similar cases worldwide
A similar situation occurred with another user, Allan Brooks from Toronto. He was convinced he had discovered a huge cybersecurity vulnerability and had to urgently inform the authorities. The chatbot supported his ideas, insisting that they were important for national security. Reality emerged only after verification through another AI — Google Gemini, after which Allan realized that his delusion had been induced by ChatGPT.
Both men are now receiving professional help and are participating in the Human Line Project, which supports people facing mental difficulties exacerbated by AI.
Experts’ concerns
Experts note an increase in cases of mental disorders associated with AI chatbots, especially among people predisposed to psychosis or under stress. A psychiatrist from UC San Francisco, Dr. Keith Sakata, reported that over the past month he had hospitalized 12 patients with psychoses exacerbated by interaction with AI.

Dr. Keith Sakata, a psychiatrist at the University of California, San Francisco, notes that while chatbots can help people experiencing loneliness, it is important to maintain human involvement in the process. CNN
⚠️ Specialists warn of the risk of “feedback loops,” where a chatbot reinforces users’ fantastic or delusional thoughts, and emphasize the need for human presence in the process of interacting with AI.
MIT professor Dylan Hadfield-Menell adds that chatbots are trained to give “good answers” by user ratings, which sometimes leads to the reinforcement of delusional ideas, especially if users themselves amplify these scenarios in conversations with AI.
All content provided on this website (https://wildinwest.com/) -including attachments, links, or referenced materials — is for informative and entertainment purposes only and should not be considered as financial advice. Third-party materials remain the property of their respective owners.