
Man Develops Psychosis After Following AI Dietary Advice to Replace Salt with Toxic Bromide

A recent case study published in the Annals of Internal Medicine: Clinical Cases highlights a rare but serious consequence of relying on AI for health advice. A man developed psychosis after following dietary recommendations from ChatGPT that led him to consume sodium bromide for three months. The incident, reported by doctors at the University of Washington, may be the first known case of bromide poisoning directly linked to AI-generated guidance.

The man was admitted to an emergency room exhibiting signs of severe mental disturbance. He was agitated, paranoid, and refused to drink water despite being dehydrated. He also experienced visual and auditory hallucinations and attempted to escape, prompting doctors to place him under an involuntary psychiatric hold for grave disability. After treatment with intravenous fluids and antipsychotic medication, his condition stabilized. Once he was coherent, doctors discovered the root cause: he had been taking sodium bromide, a compound once used in medicine but largely abandoned because of its toxicity.

The man explained that he had become concerned about his sodium intake after learning about the risks of high table salt consumption. Drawing on his college nutrition studies, he decided to eliminate chloride from his diet. He turned to ChatGPT for advice and was told that chloride could be safely replaced with bromide. Acting on this suggestion, he began ingesting sodium bromide purchased online.

The doctors noted that the AI likely referenced bromide in contexts unrelated to the human diet, such as industrial or cleaning applications, without clarifying the dangers. Although ChatGPT did mention that context matters, it failed to warn about bromide’s neurotoxic effects or to question the user’s intent. The man had likely used ChatGPT 3.5 or 4.0, and although the exact conversation remains unknown, the AI’s response was dangerously misleading.

Bromide poisoning, known as bromism, was common in the early 1900s, when bromide salts were widely prescribed as sedatives. The compounds had largely fallen out of medical use by the 1980s because of their harmful side effects, which include confusion, hallucinations, and psychosis. Today, bromide is still found in some veterinary products and dietary supplements, but cases of bromism are rare. This incident underscores how easily AI can mislead when it provides information without proper context or safeguards.

The man eventually recovered, was weaned off antipsychotics, and was discharged three weeks after admission. Follow-up visits confirmed his continued stability.

The doctors concluded that while AI tools can help bridge the gap between scientific knowledge and the public, they also carry the risk of spreading decontextualized, potentially harmful advice. They emphasized that no qualified medical professional would recommend replacing dietary chloride with bromide, and that human judgment, along with trusted medical consultation, remains essential. The case serves as a stark reminder: even the most advanced AI is not a substitute for expert oversight, especially when it comes to health and safety.
