# Exploring How AI Is Shaping People’s Behavior
The use of artificial intelligence as an everyday tool is already a reality. From drafting texts to answering complex questions, millions of people turn to systems like ChatGPT every day. But what seems practical and useful at first can become a ticking time bomb when health is at stake. A new viral trend makes it clear that the risk goes far beyond what we imagined.
### The unsettling trend of DIY aesthetic treatments
More and more users, especially in communities such as the DIYaesthetics subreddit, share experiences and tips on how to self-inject Botox or hyaluronic acid at home, without medical training or supervision. Most alarming of all, many of them admit to asking AI chatbots for advice on how to do it: which needle to use, how deep to inject, or which area of the face is most suitable.
In one of the most widely discussed cases, a user asked the AI whether she should wear gloves during the procedure. The responses not only failed to warn her of the dangers, but other users validated the question without showing any concern. In another thread, a person who had suffered facial deformation asked the AI what could have gone wrong. The answer, though seemingly reassuring, lacked the necessary clinical rigor.
### AI as an improvised doctor: a growing trend
The phenomenon is not limited to the world of aesthetics. Many people have started using AI tools as if they were primary care doctors, especially for mental health concerns or to obtain quick, free diagnoses. Some studies even suggest that ChatGPT's answers are clearer than those of medical services, while others find that more than 30% of its medical responses contain errors.
This behavior reflects a mix of desperation, lack of knowledge, and excessive trust in technology, a combination that can be very dangerous if no framework for responsible use is established.
### AI in medicine: real potential, but with limits
Not everything is negative. Artificial intelligence has become a powerful ally in healthcare: in China, for example, it is being used to detect pancreatic cancer, and in many laboratories it helps accelerate research. The problem arises when a support tool is mistaken for a substitute for professional medical judgment.
The takeaway is clear: the danger lies not in artificial intelligence itself, but in how we choose to use it. While it can help advance science and medicine, it should not guide delicate procedures that put our health at risk, and least of all when it involves handling needles in front of the mirror.
