HyperAI

Study Reveals Patients View AI-Using Physicians More Negatively, Citing Concerns Over Trust and Competence

7 days ago

A new study published in JAMA Network Open by psychologists from Würzburg University reveals that patients have reservations about physicians who use artificial intelligence (AI) in their practice. The research, led by Moritz Reis and Professor Wilfried Kunde from the Department of Psychology III at Julius Maximilian University Würzburg (JMU), in collaboration with Florian Reis from the Institute for Medical Informatics at Charité Berlin, highlights significant negative perceptions of doctors who disclose their use of AI.

The study involved over 1,200 participants who were shown fictitious advertisements for family doctors. These ads varied only in whether they mentioned the physician's use of AI for administrative, diagnostic, or therapeutic purposes. Participants consistently rated doctors who disclosed AI usage more negatively across all dimensions: competence, trustworthiness, and empathy. Even when the AI was used solely for administrative tasks, the ratings were lower than those for doctors who did not mention AI use.

Key Findings

Perceived Competence: Patients viewed AI-using doctors as less competent, suspecting they might rely too heavily on technology rather than their own expertise.

Trustworthiness: There was a notable decrease in trust for physicians who use AI, possibly because patients fear a reduction in personalized care.

Empathy: Doctors who disclosed AI usage were seen as less empathetic, potentially due to concerns that AI might detract from the human touch in medical interactions.

Appointment Preferences: Participants were less likely to make appointments with AI-using physicians, suggesting practical consequences of this negative perception.

Implications for Healthcare

The study underscores the critical role of the patient-doctor relationship in effective healthcare. A trusting and empathetic connection between patient and physician is essential for successful treatment. With AI becoming more prevalent in medical practice, even minor drops in perceived trustworthiness can lead to significant adverse outcomes.

Recommendations

To mitigate these negative perceptions, the authors suggest that doctors proactively address patients' concerns when discussing AI use. Highlighting the potential benefits, such as increased efficiency and more time available for personal patient care, could help rebuild trust. The researchers emphasize that, despite the technological advances, AI can enhance the human element of healthcare by allowing doctors to focus more on the interpersonal aspects of treatment.

Industry Insights

This study aligns with broader trends in healthcare, where patient perspectives and preferences play a crucial role. It highlights the need for better communication strategies and transparency in the integration of AI technologies. Companies developing AI for healthcare, such as IBM Watson and Google DeepMind, must consider these findings and work closely with medical practitioners to ensure that AI tools are introduced in ways that maintain and even enhance patient trust and satisfaction.

Company Profiles

Würzburg University (Julius Maximilian University Würzburg): One of Germany's oldest and most prestigious universities, known for its strong research programs in psychology and other sciences.

Charité Berlin: A renowned medical university and hospital in Berlin, Germany, leading in medical research and innovation, particularly in the field of medical informatics.
Conclusion

The study's findings suggest that while AI holds promise for advancing healthcare, its integration must be carefully managed to address patient concerns and preserve the integrity of the patient-doctor relationship. By emphasizing the complementary nature of AI and human expertise, healthcare providers can help patients understand the value of these technologies and ensure continued trust in the medical profession.
