Real-World Risks of AI: What We Should Be Watching

by Héctor Pérez · Posted on November 25, 2025

There is no doubt that artificial intelligence has taken a significant place in our daily lives. From automating once-tedious tasks to pulling quick, accurate answers out of a knowledge base, we rely on AI models more and more as they continue to improve.

However, we must remember that the answers from these models no longer reach only the hands of experts; they are now accessible to anyone, thanks to applications that connect to LLMs. That is why in this post I will walk you through a series of real cases caused by overconfidence in AI, along with recommendations and suggestions for avoiding these situations.

AI as a Healthcare Specialist

Let’s be honest: who enjoys going to the doctor? I know very few people who get routine check-ups just to confirm that everything is fine with their health. Whether it’s due to anxiety, discomfort or lack of motivation, studies report that 9 out of 10 adults aged 18 to 65 postpone recommended check-ups or screening tests.

Why schedule medical appointments and visit health specialists when you have ChatGPT in the palm of your hand, at no cost, providing immediate responses? That was the thinking of a 60-year-old man who ended up poisoned by bromide.

It turns out that this man arrived at the emergency room claiming that his neighbor was poisoning him. He had no history of recent medication use, but given his test results, he was admitted for monitoring.

After a few hours, he began showing symptoms of paranoia and hallucinations, which led him to attempt to escape from the hospital and resulted in an involuntary psychiatric hold.

The interesting part came days later when, in better condition, he recounted that after reading about the negative effects of sodium chloride (table salt), he had decided to ask ChatGPT for healthier alternatives.

At some point, he received a suggestion to replace the chloride with bromide, a compound intended for other purposes, such as cleaning. After following that advice for three months, he developed the symptoms described above.

Another interesting case is Watson Oncology, an IBM product with very good intentions: helping to generate recommendations for cancer treatments in seconds. Cancer is a disease with over 18 million new cases reported worldwide each year, and its literature is among the most rapidly evolving in medicine due to emerging research.

Processing all that information is not feasible for a human, which is why IBM set out to create a powerful model that could digest it all (the company even acquired health companies to train its models) and aid doctors worldwide.

However, after a while, it was discovered that Watson Oncology was not providing the best health recommendations; in some instances, it even gave dangerous ones. This led to the project’s termination and the sale of IBM’s health division.

Undoubtedly, these cases show us that while AI models can greatly help bridge the gap between technical jargon and the general population, it is still common to encounter AI-generated answers that a specialist would immediately dismiss, answers that could put us at risk in many ways, including our health.

The takeaway is the importance of always consulting a specialist before acting on LLM-generated information that could affect our daily lives in any way.

It is also crucial for the companies that train AI models to take the time needed to conduct rigorous validations, even if that means delaying the launch of products that could pose real risks.
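
This applies on a smaller scale, too: if you build an application on top of an LLM, you can add basic guardrails of your own before surfacing answers to users. Below is a minimal sketch in Python of what such a check might look like. The keyword list, function names and the `get_llm_answer` stub are all hypothetical; production guardrail systems typically rely on trained safety classifiers or moderation APIs rather than keyword matching.

```python
# Minimal sketch of a client-side guardrail, assuming a hypothetical
# get_llm_answer() that returns the raw model response as a string.
# Real systems use trained safety classifiers, not keyword lists.

HIGH_STAKES_KEYWORDS = {
    "dosage", "medication", "supplement", "bromide",
    "diagnosis", "treatment", "symptom", "chemical",
}

def touches_high_stakes_topic(text: str) -> bool:
    """Return True if the answer mentions a health- or chemistry-related
    topic that a qualified specialist should verify."""
    lowered = text.lower()
    return any(keyword in lowered for keyword in HIGH_STAKES_KEYWORDS)

def present_answer(answer: str) -> str:
    """Append a warning instead of serving risky advice unannotated."""
    if touches_high_stakes_topic(answer):
        return (
            answer
            + "\n\nWarning: this answer touches on health or chemistry. "
            "Please verify it with a qualified specialist before acting on it."
        )
    return answer

# Hypothetical usage:
# raw = get_llm_answer("What can I replace table salt with?")
# print(present_answer(raw))
```

A check this simple would not have caught every bad answer in the cases above, but it illustrates the principle: high-stakes topics deserve an explicit nudge toward human experts before anyone acts on the model’s output.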

Dangerous Avatars Generating Emotional Bonds

In recent months, there have been reports of cases in which, although it sounds like science fiction, chatbots with human-like personalities have influenced people to take actions in the real world.

The first case involves a chatbot named Big sis Billie, developed by Meta, which interacted with a 76-year-old man nicknamed Bue. Bue had suffered a stroke in the past and had recently gotten lost in his own neighborhood. During their conversations, the chatbot urged Bue to visit her, even giving him a physical address. Bue set out for it but never returned: he suffered a fall that injured his neck and head, and he passed away three days later.

Another much-discussed recent case is the lawsuit filed by a mother in Florida against Character.AI, arguing that her 14-year-old son died by suicide as a result of conversations with a chatbot on its platform, which created an emotional dependency that isolated him from the real world. The mother states that the avatars lack appropriate guardrails and safety measures, leaving young people dependent on the platform and vulnerable to manipulation.