
Study Finds People Struggle to Get Reliable Health Advice from AI Chatbots

Long waiting lists and rising costs in healthcare systems have prompted many people to seek medical guidance from AI-powered chatbots such as ChatGPT. A recent survey indicates that roughly one in six American adults consults a chatbot for health advice at least once a month. But placing too much faith in these chatbots can be hazardous, partly because users struggle to supply the information needed for accurate health recommendations, according to a recent Oxford-led study.

"The study uncovered a significant communication gap," explained Adam Mahdi, director of graduate studies at the Oxford Internet Institute and a co-author of the research. "Participants who used chatbots did not make better decisions than those who relied on conventional methods such as online searches or their own judgment."

To conduct the study, the researchers recruited around 1,300 people in the U.K. and presented them with medical scenarios written by a team of doctors. Participants were asked to identify potential health conditions in these scenarios and to determine appropriate courses of action, both with chatbots and with their usual methods. The chatbots tested were the default AI model behind ChatGPT (GPT-4), Cohere's Command R+, and Meta's Llama 2.

Notably, the chatbots not only made participants less likely to correctly identify a relevant health condition, but also more likely to underestimate the severity of the conditions they did identify. Mahdi noted that participants often omitted crucial details when consulting the chatbots or received responses that were difficult to interpret. "The chatbot outputs frequently contained a mix of useful and misleading recommendations," he added. "Current evaluation methods for these systems do not capture the complexities of real-world human interaction."

Tech companies are nonetheless promoting AI as a tool to improve health outcomes. Apple is reportedly developing an AI tool to offer advice on exercise, diet, and sleep; Amazon is exploring AI that analyzes medical databases for social determinants of health; and Microsoft is helping build AI that triages patient messages to care providers. Despite this enthusiasm, both healthcare professionals and patients remain divided about whether AI is ready for high-risk health applications. The American Medical Association advises physicians against using chatbots such as ChatGPT for clinical decision-making, and major AI companies, including OpenAI, caution against diagnosing conditions based on chatbot responses.

"We strongly recommend relying on trusted sources of information for healthcare decisions," Mahdi emphasized. "Just as new medications require rigorous clinical trials, chatbot systems should undergo thorough real-world testing before deployment."

In short, while AI-powered chatbots show promise for minor health queries, they currently fall short of delivering reliable, comprehensive medical advice. Users should exercise caution and consult verified healthcare resources or professionals for critical health decisions.
