ChatGPT’s Shocking Medical Advice Lands Man in Hospital: What Went Wrong?

Imagine turning to a chatbot for dietary advice and ending up in the hospital — sounds unbelievable, right? But that's exactly what happened to a 60-year-old man who discovered the hard way that not all tech is trustworthy, especially when it comes to health.
This unfortunate incident began when the man sought to cut salt out of his diet. Instead of receiving sound advice, he was directed to replace common table salt, sodium chloride, with sodium bromide — a chemical better known for its use in pesticides and as an anticonvulsant for dogs. The result? He developed bromism, a rare but serious condition that can cause severe psychological symptoms.
According to a paper published in the Annals of Internal Medicine, once the man's psychosis subsided, he recounted to his doctors how ChatGPT had recommended the toxic substitute without any warning. Sodium bromide, once heralded in the 19th and early 20th centuries for its sedative properties, is now considered dangerous because it can accumulate in the body and cause toxicity.
The history of bromine is quite fascinating: it was discovered by French chemist Antoine-Jérôme Balard in 1826, sparking a bromide craze among doctors who believed it could cure various ailments. However, as the years went on, it became apparent that excessive use could lead to bromism, a condition characterized by symptoms ranging from hallucinations to slurred speech.
Fast forward to today, and while the Environmental Protection Agency has regulated bromides since the 1970s, this case serves as a stark reminder that danger still lurks in the unregulated advice of AI. In fact, when 404 Media tested ChatGPT's recommendations, the chatbot continued to suggest sodium bromide as an alternative to sodium chloride, raising concerns about its safety protocols.
OpenAI CEO Sam Altman has claimed that the latest version of ChatGPT, GPT-5, is the best model ever for health advice. One can only hope that this new development improves the AI's ability to filter harmful suggestions, especially since many users might not realize they are placing their health in the hands of artificial intelligence.
As we continue to navigate the evolving relationship between technology and health, it’s crucial to remain vigilant and question the advice we receive — whether it comes from a chatbot or a trusted source.