ChatGPT Still Offers Legal and Health Info Despite False Ban Claims
OpenAI has clarified that ChatGPT has not been updated to ban it from providing legal or health advice, despite widespread social media claims to the contrary. Karan Singhal, OpenAI’s head of health AI, confirmed on X that the reports are false and that the chatbot’s behavior remains unchanged.

The misinformation originated with a now-deleted post from the betting platform Kalshi, which claimed, “JUST IN: ChatGPT will no longer provide health or legal advice.” Singhal responded directly, saying the claim is not true and emphasizing that ChatGPT was never intended to replace professional advice. “It will continue to be a great resource to help people understand legal and health information,” he wrote.

Singhal explained that the restrictions around legal and medical advice are not a new policy. The updated terms, released on October 29, include a list of prohibited uses, one of which is “provision of tailored advice that requires a license, such as legal or medical advice, without appropriate involvement by a licensed professional.” This language is consistent with OpenAI’s previous guidelines.

Earlier versions of the policy had separate rules for different services, including a universal policy, a ChatGPT-specific one, and an API policy. The new update consolidates these into a single, unified set of rules across all OpenAI products. The core principles, however, remain the same: users should not rely on ChatGPT for personalized legal, medical, or financial advice without consulting a qualified professional and acknowledging the use of AI and its limitations.

OpenAI’s changelog notes that the update aims to create a consistent policy framework across its platforms, but it does not introduce new restrictions. The company continues to position ChatGPT as a tool for information and education, not a substitute for expert guidance.
