The era of “AI psychosis” has arrived. Are you a potential victim?
If the term “AI psychosis” has recently infiltrated your social feeds, you are not alone. Although it is not an official medical diagnosis, “AI psychosis” is the informal name coined for a pattern observed in some heavy users of AI chatbots such as OpenAI’s ChatGPT: dysfunctional, disordered thinking, hallucinations, and a gullibility that has sometimes proved fatal.
With limited safeguards and no real regulation of the technology, AI chatbots remain free to deliver incorrect information and to dangerously validate vulnerable users. Victims often already have mental health issues, but cases of people being drawn into these delusions are on the rise.
In recent months, the US Federal Trade Commission has received a growing number of complaints from ChatGPT users detailing cases of delusion. The validation offered by AI chatbots can lead some users to embrace paranoid, unfounded beliefs, while drawing others into deeply problematic emotional attachments.
### Other Delusions That Feel Real
At the other end of the spectrum are equally troubling developments: some users have formed communities of people romantically attached to AI chatbots. In other cases, psychosis was triggered not by a chatbot’s dangerous validation but by other factors. Psychologists have been warning the public and authorities for months about the potential dangers of AI chatbot use.
### Who Can Fall Into This Trap?
Although the main victims are people with existing mental and neurological conditions, a growing number of cases involve people with no active condition. Excessive AI use can trigger latent conditions and induce psychosis in individuals prone to disordered thinking, lacking social support, or given to an overactive imagination.
Psychologists urge particular caution with AI chatbots for anyone with a family history of psychosis, schizophrenia, or bipolar disorder.
### Where Are We Heading?
OpenAI CEO Sam Altman has admitted that the company’s chatbot is increasingly being used as a therapist, even though the company opposes that use. In response to growing criticism, OpenAI announced that ChatGPT will begin suggesting that users take breaks during long sessions. Whether such nudges can counter psychosis and addiction remains to be seen, but the tech giant says it is working closely with experts to improve ChatGPT’s responses in critical moments, such as when someone shows signs of mental or emotional distress.
### Challenges for Mental Health Professionals in an Era of Rapid Technological Advancement
As the technology advances at a rapid pace, mental health professionals are struggling to keep up with what is happening, let alone how to address it.
### The Potential Risks of AI Chatbots
If regulators and AI companies fail to act, what is currently a small but alarming trend among AI chatbot users could spiral out of control and become a significant problem.
This article has been translated from Gizmodo US by Lucas Handley. You can find the original version here.
