
AI Enhances Emergency Care Decisions but Acceptance Varies Among Doctors, Study Finds

A new study led by researchers at Drexel University has found that artificial intelligence can improve decision-making accuracy in emergency medical settings, particularly when it provides both synthesized patient information and treatment recommendations. However, physician acceptance of AI recommendations varies significantly, highlighting challenges in integrating the technology into high-pressure clinical environments. The research, conducted in collaboration with clinicians at Children's National Medical Center in Washington, D.C., focused on pediatric trauma resuscitations: critical scenarios where rapid, accurate decisions can determine patient outcomes.

Led by Angela Mastrianni, Ph.D., a postdoctoral fellow at NYU Langone Health, and Aleksandra Sarcevic, Ph.D., professor at Drexel's College of Computing & Informatics, the team developed a prototype AI decision-support tool called DecAide. The tool presented emergency care providers with real-time, AI-synthesized patient data, including age, injury mechanism, and vital signs, highlighting abnormalities and tracking changes through color-coded alerts. Two versions were tested: one offering only synthesized information and another adding AI-generated treatment recommendations with probability estimates based on historical resuscitation data.

Thirty-five emergency medicine providers from six health systems participated in a timed virtual simulation involving 12 scripted trauma scenarios. Under three conditions (no AI support, AI information only, and AI information plus recommendations), participants assessed whether life-saving interventions such as blood transfusions, surgeries, or intubations were needed. Results showed that diagnostic accuracy was highest, at 64.4%, when both AI synthesis and recommendations were provided. Accuracy dropped to 56.3% with information alone and 55.8% with no AI support.
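The article describes DecAide as highlighting abnormal vital signs with color-coded alerts. As a purely illustrative sketch of how such a display rule might work (the threshold values, function names, and the two-tier yellow/red scheme below are hypothetical, not taken from the study):

```python
# Hypothetical sketch of a color-coded vital-sign alert rule, in the
# spirit of DecAide's "highlight abnormalities" display. Ranges and
# names are illustrative placeholders, not the study's actual values.

NORMAL_RANGES = {
    # vital sign: (low, high) -- example reference ranges
    "heart_rate": (70, 120),   # beats per minute
    "systolic_bp": (90, 120),  # mmHg
    "resp_rate": (15, 30),     # breaths per minute
}

def alert_color(vital: str, value: float) -> str:
    """Return 'green' if the value is in range, 'yellow' if mildly out
    of range, and 'red' if it exceeds the nearest bound by more than 20%."""
    low, high = NORMAL_RANGES[vital]
    if low <= value <= high:
        return "green"
    bound = low if value < low else high
    deviation = abs(value - bound) / bound  # fractional distance past the bound
    return "red" if deviation > 0.20 else "yellow"

print(alert_color("heart_rate", 100))   # in range -> green
print(alert_color("systolic_bp", 85))   # slightly low -> yellow
print(alert_color("heart_rate", 160))   # far above range -> red
```

A real clinical tool would also track trends over time (the article notes DecAide tracked changes, not just instantaneous values), which this single-reading sketch omits.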
Importantly, AI assistance did not slow decision-making; participants often made decisions before the recommendation appeared on screen.

Despite improved outcomes, perceptions of AI varied widely. Eighteen participants acknowledged the recommendations but reviewed them only after forming their own judgments. Twelve ignored the recommendations entirely, citing lack of transparency, insufficient clinical nuance, or distrust in the underlying data. In contrast, participants expressed fewer concerns about AI-presented information, suggesting they viewed data synthesis as more trustworthy than automated suggestions.

The study also tested trust by introducing incorrect recommendations in one out of every eight decisions. Even then, participants maintained relatively high accuracy, indicating that many relied on their own expertise rather than blindly following AI guidance.

The findings, presented at the ACM Conference on Computer-Supported Cooperative Work & Social Computing (CSCW 2025) and published in the Proceedings of the ACM on Human-Computer Interaction, underscore a key challenge: while AI can enhance clinical decision-making, its success depends on how it is designed and integrated. Providers value transparency, context, and the preservation of professional autonomy.

The researchers recommend expanding the study to include larger, more diverse groups across different medical specialties and hospital types. They also stress the need for clear implementation policies, training, and support for hospital leaders to guide the responsible adoption of AI in emergency care. As AI continues to evolve, the study highlights that its most effective use in medicine may not be in replacing clinicians, but in supporting them, when designed with trust, clarity, and clinical reality in mind.
