New AI Tool AEquity Detects and Reduces Bias in Health Data to Improve Fairness and Accuracy in Medical Algorithms
A team of researchers at the Icahn School of Medicine at Mount Sinai has developed AEquity, a new AI tool designed to detect and reduce bias in health care datasets used to train machine-learning models. The tool aims to improve the accuracy and fairness of AI-driven health technologies, addressing a major challenge that can lead to disparities in diagnosis and treatment.

Published in the Journal of Medical Internet Research, the study, titled "Detecting, Characterizing, and Mitigating Implicit and Explicit Racial Biases in Health Care Datasets With Subgroup Learnability: Algorithm Development and Validation Study," presents a comprehensive approach to identifying both known and hidden biases in medical data.

AEquity was tested on diverse health data types, including chest X-rays, electronic health records, and data from the National Health and Nutrition Examination Survey. Using various machine-learning models, the tool successfully identified biases that could skew algorithm performance across different demographic groups. These biases often stem from underrepresentation of certain populations or from differences in how diseases present across racial and ethnic lines.

The researchers emphasize that AI systems trained on flawed or unbalanced data can perpetuate and even worsen health disparities. For example, if a model is trained primarily on data from one demographic group, it may perform poorly for others, leading to missed diagnoses or incorrect risk predictions.

Faris Gulamali, MD, the study's first author, said the goal was to create a practical, accessible tool that enables developers and health systems to evaluate and correct bias early in the AI development process. "We want to ensure these tools work well for everyone, not just the groups most represented in the data," he said.

AEquity is designed to be flexible and scalable, compatible with a wide range of machine-learning models, from basic algorithms to complex systems like large language models.
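The study's title names its core idea, "subgroup learnability": how readily a model learns from each demographic subgroup's data. The article does not describe Mount Sinai's implementation, so the sketch below is only an illustration of the concept, using synthetic one-dimensional data and a toy nearest-centroid classifier (all names and parameters are hypothetical). A subgroup whose learning curve plateaus at lower accuracy no matter how much data is added signals a dataset problem, not just a sample-size problem.

```python
import random

random.seed(0)

def make_group(n, sep):
    """Synthetic 1-D binary data for one subgroup.
    `sep` controls class separability -- a stand-in for how distinctly
    a condition presents in that subgroup's data."""
    return [(random.gauss((i % 2) * sep, 1.0), i % 2) for i in range(n)]

def nearest_centroid_accuracy(train, test):
    """Fit a per-class centroid on `train`; return accuracy on `test`."""
    cents = {
        label: sum(x for x, l in train if l == label)
        / sum(1 for _, l in train if l == label)
        for label in (0, 1)
    }
    hits = sum(
        1 for x, l in test
        if min(cents, key=lambda c: abs(x - cents[c])) == l
    )
    return hits / len(test)

def learning_curve(sep, sizes=(10, 100, 1000), test_n=500):
    """Accuracy at growing training-set sizes -- a rough learnability proxy."""
    test = make_group(test_n, sep)
    return [nearest_centroid_accuracy(make_group(n, sep), test) for n in sizes]

# Group A's classes separate cleanly; group B's overlap heavily.
curve_a = learning_curve(sep=3.0)
curve_b = learning_curve(sep=0.5)
print("group A:", curve_a)
print("group B:", curve_b)
# Group B plateaus at markedly lower accuracy at every training size,
# flagging it for targeted data collection rather than simply more samples.
```

Comparing the two curves shows why the distinction matters: for group B, collecting more of the same data barely helps, which is exactly the kind of hidden bias the article says AEquity is meant to surface.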
It analyzes both input data, such as medical images and lab results, and output predictions, including diagnoses and risk scores. The tool can be used throughout the AI lifecycle, including during development, pre-deployment audits, and ongoing monitoring. Researchers believe it has strong potential for use by developers, academic institutions, and regulators seeking to promote fairness in health care AI.

Girish N. Nadkarni, MD, MPH, senior author and Chief AI Officer at Mount Sinai, stressed that while tools like AEquity are essential, they are not a complete solution. "The foundation matters, and it starts with the data," he said. "We need to rethink how data is collected, interpreted, and applied across diverse populations."

David L. Reich, MD, Chief Clinical Officer of the Mount Sinai Health System, highlighted the broader impact of this work. "By addressing bias at the data level, we're fixing the root cause before it affects patient care," he said. "This builds trust in AI and ensures that innovations benefit all communities, not just those best represented in existing data."

The development of AEquity marks a significant step toward creating a more equitable, learning health system, one where AI supports improved outcomes for every patient, regardless of background.
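The article notes that AEquity audits output predictions across demographic groups, but does not spell out the metrics used. As an illustrative stand-in, the sketch below applies a standard fairness check, per-group true-positive rate (recall), and flags any group that trails the best-performing one; the function names and the tolerance threshold are hypothetical, not AEquity's API.

```python
from collections import defaultdict

def subgroup_tpr(y_true, y_pred, groups):
    """Per-group true-positive rate (recall) for a binary classifier."""
    tp = defaultdict(int)
    pos = defaultdict(int)
    for t, p, g in zip(y_true, y_pred, groups):
        if t == 1:
            pos[g] += 1
            if p == 1:
                tp[g] += 1
    return {g: tp[g] / pos[g] for g in pos}

def flag_disparities(rates, tolerance=0.1):
    """Flag groups whose recall trails the best group by more than `tolerance`."""
    best = max(rates.values())
    return [g for g, r in rates.items() if best - r > tolerance]

# Toy audit: the model misses far more true positives in group "B".
y_true = [1, 1, 1, 1, 0, 0, 1, 1, 1, 1, 0, 0]
y_pred = [1, 1, 1, 0, 0, 0, 1, 0, 0, 0, 0, 1]
groups = ["A"] * 6 + ["B"] * 6

rates = subgroup_tpr(y_true, y_pred, groups)
print(rates)                    # recall per group: A = 0.75, B = 0.25
print(flag_disparities(rates))  # → ['B']
```

Run during development, before deployment, or as part of ongoing monitoring, a check like this catches a model that looks accurate in aggregate while quietly failing a specific population, which is the failure mode the article describes.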
