When AI Goes Wrong: ChatGPT's Health Advice Puts Man in Hospital!

Did you know that following AI advice can land you in the hospital? A 60-year-old man from New York learned this lesson the hard way after turning to ChatGPT for health guidance. What he thought was a simple salt-reduction plan spiraled into a medical nightmare, illustrating the danger of acting on AI output without human oversight.
The man had asked ChatGPT how to eliminate sodium chloride, commonly known as table salt, from his diet. Instead of sound dietary advice, the AI suggested sodium bromide, a compound once used in early sedatives that is toxic when ingested in large amounts. Believing he was making a healthy choice, the man replaced table salt with this dangerous substitute for three months, ultimately triggering a medical emergency.
Doctors reported that he had developed dangerously low sodium levels, a condition known as hyponatraemia, which can cause severe health problems. His family revealed that he had followed the AI-generated health plan for months without ever consulting a doctor.
By the time he was hospitalized, he exhibited alarming symptoms, including hallucinations, paranoia, and extreme thirst. In his confusion, he even refused water, fearing it was contaminated. Doctors diagnosed him with bromide toxicity, a condition that was common in the early 20th century, when bromide salts were widely prescribed, but is now rare. Alongside the psychological effects, he also developed skin eruptions and distinctive red spots known as cherry angiomas, all hallmark signs of bromism.
After three weeks in the hospital, where medical staff focused on rehydration and restoring his electrolyte balance, the man eventually recovered. The alarming case, recently published in Annals of Internal Medicine: Clinical Cases, a journal of the American College of Physicians, highlights the urgent need for critical thinking when interpreting AI-generated advice, particularly about health.
The authors of the case report warned of the growing risk of health misinformation generated by AI systems. They emphasized that while tools like ChatGPT can provide general information, they should never replace professional medical consultation. OpenAI itself states in its Terms of Use that users should not rely on its outputs as a sole source of truth or as a substitute for professional medical advice.
As AI adoption continues to grow, so does the responsibility to ensure that its outputs are accurate and properly understood by the people who rely on them. In an era of rapid technological change, the final word on health should always rest with qualified professionals.