AI-Powered Messaging Gains Traction in Health Care Amid Adoption Hesitancy
A new study conducted by researchers from NYU Tandon, NYU Langone Health, and the NYU Stern School of Business provides some of the first empirical insights into how generative AI is being used, and resisted, in health care messaging environments. The research highlights both the potential of AI-powered tools to help clinicians manage overwhelming volumes of messages and the significant barriers that remain to widespread adoption.

The study focused on communication patterns among health care providers, particularly in primary care and specialty clinics, where physicians and staff routinely receive hundreds of messages per day from patients, colleagues, and automated systems. These messages often include appointment requests, test results, medication inquiries, and urgent clinical alerts, many of which require timely responses.

Researchers analyzed real-world messaging data from several health systems and conducted interviews with clinicians to understand how AI tools are being used to draft, prioritize, and respond to messages. Findings show that when properly implemented, AI can significantly reduce the time clinicians spend on routine communication tasks. For example, AI-assisted message summarization and response drafting reduced average response times by up to 40% in pilot programs.

Despite these benefits, the study uncovered substantial hesitation among providers. Key concerns included mistrust in AI-generated content, fear of medical errors, lack of transparency in how AI makes decisions, and worries about patient privacy and data security. Many clinicians expressed discomfort with delegating even routine communication to AI, fearing it could lead to miscommunication or undermine patient trust.

The research also found that adoption varied widely across specialties and institutions. Providers in high-volume clinics were more open to AI tools, particularly when the tools were integrated into existing electronic health record systems and designed with clinician input.
However, in more traditional or less tech-forward practices, resistance remained strong.

The study concludes that while generative AI holds real promise for alleviating clinician burnout and improving care coordination, successful implementation requires more than just technological capability. It demands thoughtful design, robust oversight, clinician engagement, and clear policies around accountability and data governance. The researchers emphasize that AI should be positioned as a supportive tool, not a replacement, for human judgment in health care. They call for more investment in human-centered AI development and greater transparency in how these systems operate in clinical settings.
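The triage-and-draft workflow the study describes, prioritize incoming messages, then propose draft replies only for routine ones, can be sketched as follows. This is a hypothetical simplification: the tools the study examined are LLM-based and embedded in EHR systems, whereas here a keyword heuristic stands in for the model so the flow is self-contained and runnable; all function and template names are invented for illustration.

```python
# Illustrative sketch of message triage plus draft-for-approval replies.
# A keyword heuristic substitutes for the LLM; in real deployments a
# clinician always reviews and approves any drafted response.
from dataclasses import dataclass
from typing import Optional

# Terms that should always escalate to a human immediately (hypothetical list).
URGENT_TERMS = {"chest pain", "bleeding", "overdose", "shortness of breath"}

# Canned drafts for routine request types (hypothetical templates).
ROUTINE_TEMPLATES = {
    "refill": "Your refill request has been received and forwarded to the pharmacy team.",
    "appointment": "Thank you for your request; our scheduling team will follow up with available times.",
}

@dataclass
class Message:
    patient: str
    text: str

def triage(msg: Message) -> str:
    """Classify a message as 'urgent', 'routine', or 'review'."""
    lowered = msg.text.lower()
    if any(term in lowered for term in URGENT_TERMS):
        return "urgent"   # escalate to a clinician; never auto-draft
    if any(key in lowered for key in ROUTINE_TEMPLATES):
        return "routine"  # safe to offer a draft reply for approval
    return "review"       # clinician reads and responds from scratch

def draft_reply(msg: Message) -> Optional[str]:
    """Propose a draft only for routine messages; otherwise return None."""
    if triage(msg) != "routine":
        return None
    lowered = msg.text.lower()
    for key, template in ROUTINE_TEMPLATES.items():
        if key in lowered:
            return template
    return None
```

The design choice mirrors a point the clinicians interviewed stressed: urgent content is routed straight to a human with no generated text attached, and even routine drafts are suggestions awaiting clinician approval, keeping the AI in a supportive rather than substitutive role.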
