
AI Outperforms Nurses in Identifying Most Urgent Cases but Falls Short Overall, Study Finds

Doctors and nurses outperform artificial intelligence in triaging patients in emergency departments, according to research presented at the European Emergency Medicine Congress. The study, led by Dr. Renata Jukneviciene from Vilnius University in Lithuania, found that while AI can assist in certain aspects of emergency care, it should not be used alone for patient triage.

The research involved 51 nurses and six emergency medicine doctors from Vilnius University Hospital Santaros Klinikos. Participants were asked to triage 110 real clinical cases selected from PubMed using the Manchester Triage System, which categorizes patient urgency into five levels. The same cases were evaluated by ChatGPT version 3.5.

Results showed that doctors achieved the highest accuracy at 70.6%, followed by nurses at 65.5%, while AI scored only 50.4%. In identifying urgent cases, as measured by sensitivity, doctors scored 83.0%, nurses 73.8%, and AI 58.3%. This indicates that AI missed more critical cases than human clinicians.

Interestingly, AI performed better than nurses in the most urgent triage category, showing higher accuracy (27.3% vs. 9.3%) and specificity (27.8% vs. 8.3%). This suggests AI may be more cautious in flagging life-threatening conditions, which could help prioritize the most critical patients but also leads to over-triage: labeling too many cases as urgent.

In cases requiring surgery, doctors scored 68.4%, nurses 63%, and AI only 39.5%. For non-invasive treatments, doctors scored 65.9%, nurses 44.5%, and AI 51.9%, with AI outperforming nurses in this area.

Dr. Jukneviciene concluded that AI should not replace clinical judgment but could serve as a decision-support tool, particularly in busy or understaffed emergency departments. It might help new or less experienced staff and improve consistency in identifying the most urgent cases. However, over-triage could strain resources, so human oversight remains essential.
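For readers unfamiliar with the metrics reported above, accuracy, sensitivity, and specificity are all derived from a confusion matrix of predicted versus true labels. The sketch below shows how they are computed for a binary urgent/non-urgent split; the case labels are purely illustrative and are not the study's data.

```python
def triage_metrics(true_urgent, pred_urgent):
    """Compute accuracy, sensitivity, and specificity for binary
    urgent/non-urgent triage decisions (lists of booleans)."""
    tp = sum(t and p for t, p in zip(true_urgent, pred_urgent))
    tn = sum(not t and not p for t, p in zip(true_urgent, pred_urgent))
    fp = sum(not t and p for t, p in zip(true_urgent, pred_urgent))
    fn = sum(t and not p for t, p in zip(true_urgent, pred_urgent))
    accuracy = (tp + tn) / len(true_urgent)
    # Sensitivity: share of truly urgent cases that were flagged urgent.
    sensitivity = tp / (tp + fn) if tp + fn else 0.0
    # Specificity: share of non-urgent cases correctly left unflagged.
    specificity = tn / (tn + fp) if tn + fp else 0.0
    return accuracy, sensitivity, specificity

# Illustrative example: 8 cases, one urgent case missed, one over-triaged.
truth = [True, True, True, True, False, False, False, False]
preds = [True, True, True, False, False, False, False, True]
print(triage_metrics(truth, preds))  # (0.75, 0.75, 0.75)
```

A triager that over-triages, as the AI did in the most urgent category, trades false negatives for false positives: sensitivity rises while specificity falls.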
The study had limitations, including a small sample size, a single-center setting, and the use of a non-medically-trained AI model in a non-clinical environment. Real-time patient interaction, vital signs, and follow-up data were not available. Despite these constraints, the study's strengths include the use of real clinical cases, a multidisciplinary team, and relevance to current challenges like emergency department overcrowding and staff shortages.

Dr. Barbra Backus, chair of the EUSEM abstract selection committee and an emergency physician in Amsterdam, emphasized that AI can be valuable in areas like imaging interpretation but should not replace trained medical professionals. She stressed the need for cautious implementation and ongoing evaluation as AI evolves.

The research team plans to expand the study using more advanced AI models, including those fine-tuned for medical use, and to test AI in larger groups, with added components like ECG interpretation and training for mass casualty incidents. The study's abstract, titled "Patient triaging in the ED: can artificial intelligence become the gold standard?", was presented on 30 September during the AI/Innovations session at the congress.
