AI Health Tools Favor Ideal Patients, Excluding the Vulnerable, Georgia Tech Study Warns
AI-powered health care is rapidly evolving, promising personalized, real-time monitoring and prevention. But a new study from Georgia Tech warns that these systems often imagine a patient who is affluent, able-bodied, tech-savvy, and constantly available, leaving behind those who don't fit this narrow ideal. The research, published in the Proceedings of the 2025 CHI Conference on Human Factors in Computing Systems, analyzed 21 AI-driven health tools, from wearables and fertility apps to diagnostic chatbots, revealing how they shape visions of care that may exclude the most vulnerable.

Lead author Catherine Wieczorek, a Ph.D. student in human-centered computing, said these systems promote a future where care is seamless, always-on, and automated. But this vision flattens the complex realities of illness, disability, and socioeconomic hardship.

The study identified four dominant narratives in AI health tools: care that never sleeps, efficiency as empathy, prevention as perfection, and the optimized body. In this world, health is no longer about healing; it's about performance.

AI is no longer just a tool. It's becoming a decision-maker, sometimes even personified as a teammate. For example, Chloe, an IVF decision-support system, is marketed as a collaborator that helps clinicians work faster and better. But naming and anthropomorphizing AI shifts accountability and authority in ways that are not fully understood. As co-author Shaowen Bardzell noted, this blurs the line between technology and human agency, raising ethical concerns about who gets to make care decisions.

The study highlights a growing risk: while AI promises early detection and hyper-efficiency, it often overlooks patients with chronic conditions, disabilities, or complex life circumstances. Algorithms are built on data and assumptions that reflect a narrow ideal, and they struggle to account for the messy, overlapping challenges of managing multiple illnesses, financial stress, or caregiving responsibilities. In doing so, they may inadvertently reinforce existing health disparities.

The researchers argue that AI development must include input from people who don't fit the "perfect patient" mold. Innovation should be driven not solely by what's technically possible, but by what's ethically responsible and inclusive. As Bardzell emphasized, the goal isn't to reject AI in health care, but to ensure it serves all people, not just the privileged few.

The study urges developers, clinicians, and policymakers to question who these systems are designed for and who might be left out. Only by centering real human experiences can AI truly improve health care for everyone.
