HyperAI超神経

Proactive Learning Strategies Improve AI Model Reliability and Fairness in Hospital Settings

3 days ago

Artificial intelligence (AI) models used in hospitals must accurately reflect real-world patient data to avoid causing harm. A recent study published in JAMA Network Open by researchers at York University highlights the importance of proactive, continual, and transfer learning strategies to mitigate data shifts and ensure the safety and efficacy of AI systems in clinical settings.

The study involved building and evaluating an early warning system to predict in-hospital patient mortality across seven large hospitals in the Greater Toronto Area, using data from GEMINI, Canada's largest hospital data-sharing network. The analysis covered 143,049 patient encounters, encompassing factors such as lab results, transfusions, imaging reports, and administrative features.

Key findings include significant data shifts between model training and real-life application, particularly in demographics, hospital types, admission sources, and critical laboratory tests. For example, models trained on data from community hospitals did not perform well when applied to academic hospitals, but the reverse was not true.

Professor Elham Dolatabadi, the senior author from York University's School of Health Policy and Management, emphasizes the need for reliable and robust AI models that can handle changing data over time. She notes that variations in patient subpopulations, staffing, resources, and healthcare policies can lead to data shifts that make models ineffective or even harmful.

To address these issues, the researchers employed two main strategies: transfer learning and continual learning. Transfer learning applies knowledge gained in one domain to a related domain; it proved beneficial when models specific to each hospital type were used instead of a single generalized model for all hospitals. Continual learning, by contrast, updates the AI model with a continuous stream of data, with updates triggered by alarms indicating data drift.
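The drift-alarm idea can be illustrated with a small sketch. The study's actual pipeline is not reproduced here; the code below assumes a simple label-agnostic setup in which each feature's distribution in an incoming batch is compared against the training-time reference with a two-sample Kolmogorov–Smirnov test, and any alarm would trigger a continual-learning update. The function name, synthetic data, and `alpha` threshold are all illustrative.

```python
import numpy as np
from scipy.stats import ks_2samp

def detect_drift(reference, incoming, alpha=1e-3):
    """Label-agnostic drift alarm: flag features whose distribution in the
    incoming batch differs from the training-time reference. No outcome
    labels are needed; `alpha` is an illustrative significance threshold."""
    drifted = []
    for j in range(reference.shape[1]):
        _, p_value = ks_2samp(reference[:, j], incoming[:, j])
        if p_value < alpha:
            drifted.append(j)
    return drifted

rng = np.random.default_rng(0)
reference = rng.normal(size=(2000, 3))   # feature matrix at training time
stable = rng.normal(size=(500, 3))       # new batch, same distribution
shifted = rng.normal(size=(500, 3))
shifted[:, 0] += 1.5                     # e.g. a lab assay changed

# An alarm on `shifted` is what would trigger the continual-learning step:
# retraining or updating the model on a recent window of encounters.
alarms = detect_drift(reference, shifted)
```

In a deployment, the retraining step triggered by the alarm is what the article calls continual learning; the KS test here merely stands in for whatever shift detector the study's pipeline used.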
Continual learning proved particularly effective during the COVID-19 pandemic, mitigating harmful data shifts and improving model performance over time.

The study also identified biases within AI models that could lead to unfair outcomes for certain patient groups. Such biases can arise from differences in the training data, such as varying demographics and medical practices. By detecting and assessing these data shifts, the researchers proposed strategies to mitigate their negative impacts, demonstrating a practical pathway from theoretical promise to real-world application.

First author Vallijah Subasri, an AI scientist at University Health Network, summarized the findings: a proactive, label-agnostic monitoring pipeline incorporating transfer and continual learning can effectively detect and mitigate harmful data shifts in Toronto's general internal medicine population, ensuring robust and equitable clinical AI deployment in real-world settings.

Experts in AI and healthcare have praised the study's methodology and findings. The proposed continual and transfer learning strategies highlight the need for adaptive, nuanced approaches to AI model deployment; these methods not only enhance model reliability and fairness but also underscore the importance of ongoing monitoring and adjustment to safeguard patient outcomes.

York University, home to the School of Health Policy and Management, is committed to advancing research in health technology and AI. The Vector Institute, a renowned center for AI research and development, provided crucial support for the study, further bridging the gap between AI theory and practical healthcare applications. The study represents a significant advance in the field, offering practical solutions to a critical challenge in deploying AI in hospitals.
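The transfer-learning strategy can also be sketched in a few lines. This is a minimal illustration under stated assumptions, not the study's model: it uses a plain gradient-descent logistic regression in which weights pretrained on a data-rich "source" hospital type warm-start training on a smaller "target" hospital's encounters. All data, names, and hyperparameters here are synthetic and hypothetical.

```python
import numpy as np

def train_logreg(X, y, w=None, lr=0.1, epochs=200):
    """Gradient-descent logistic regression. Pass `w` to warm-start from an
    existing model (transfer learning); otherwise start from zeros."""
    X1 = np.hstack([X, np.ones((len(X), 1))])  # append a bias column
    if w is None:
        w = np.zeros(X1.shape[1])
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-X1 @ w))      # predicted mortality risk
        w -= lr * X1.T @ (p - y) / len(y)      # logistic-loss gradient step
    return w

def predict(X, w):
    X1 = np.hstack([X, np.ones((len(X), 1))])
    return (1.0 / (1.0 + np.exp(-X1 @ w)) > 0.5).astype(int)

rng = np.random.default_rng(1)
beta = np.array([1.0, -1.0, 0.5, 0.0])         # hypothetical true risk signal
# Data-rich "source" hospital type: many labeled encounters.
Xs = rng.normal(size=(1000, 4))
ys = (Xs @ beta > 0).astype(int)
# Related "target" hospital type: same signal, far fewer encounters.
Xt = rng.normal(size=(60, 4)) + 0.3
yt = (Xt @ beta > 0).astype(int)

w_src = train_logreg(Xs, ys)                          # pretrain on source
w_ft = train_logreg(Xt, yt, w=w_src.copy(), epochs=50)  # fine-tune on target
acc = (predict(Xt, w_ft) == yt).mean()
```

The design point mirrors the article's finding: rather than one generalized model for all hospitals, knowledge from one hospital type is carried over and adapted to another, which is useful when the target site has too few encounters to train well from scratch.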
As the use of AI in medical applications continues to grow, these findings will play a vital role in ensuring that AI models are both safe and effective, ultimately benefiting patient care and hospital efficiency.
