New AI apps use facial scans to predict lifespan and health risks
New AI-powered apps are using facial scans to predict lifespan and assess health risks, raising both excitement and concern in the medical and tech communities. One such tool, FaceAge, developed by researchers at Harvard Medical School, analyzes selfies to estimate a person's biological age, a proxy for overall wellness. In a personal test, the app estimated the author's biological age at 27.9 years, more than a decade younger than their actual age, based on a dim, blurry photo. Other images yielded results ranging from 33 to 38 years, influenced heavily by lighting and image quality.

FaceAge focuses on key facial regions such as the nasolabial folds and temples, where signs of aging, including fine lines, sagging skin, and reduced collagen, may reflect internal health. According to Dr. Raymond Mak, a radiation oncologist who helped create the app, biological age rising faster than chronological age can be a warning sign of future health problems. The long-term goal is to use these insights for early disease detection, personalized treatment, and even predicting longevity.

This technology builds on decades of research into how facial features correlate with health. Humans evolved a third type of cone in the eye to detect subtle skin tones, such as rosy cheeks indicating good circulation or greenish hues signaling illness. Plastic surgeon Dr. Bahman Guyuron found that identical twins with different lifestyles showed visible differences in aging, with the more stressed or less healthy twin appearing older. Conversely, centenarians often look decades younger than their age, suggesting their bodies age more slowly at a cellular level.

Beyond FaceAge, several other AI tools are emerging. PainChek, an Australian app, monitors facial expressions to assess pain in dementia patients who cannot communicate. Face2Gene helps doctors identify rare genetic disorders from facial features. Some apps even claim to detect signs of autism, PTSD in children, or drowsiness behind the wheel.
Despite their promise, experts warn of serious ethical risks. AI systems trained on facial data may reflect biases, misinterpret cultural or gender-based differences in expression, or reinforce discredited pseudoscientific ideas like physiognomy, the practice of judging character or behavior by facial features. A 2017 Stanford study that claimed to identify sexual orientation from faces sparked backlash for relying on social cues rather than biology.

AI ethics researcher Malihe Alikhani stresses that these tools must be developed responsibly. "AI is entering these spaces fast," she said. "It's about making sure that it is safe and beneficial." She emphasizes the need for transparency, patient consent, and human oversight, especially when AI informs medical decisions.

While facial scanning apps offer a glimpse into the future of preventive healthcare, their accuracy and reliability remain inconsistent. Lighting, photo quality, and individual variation can drastically alter results; as one test showed, the same person could appear ten years younger or older depending on the image.

Ultimately, these tools may one day complement clinical assessments, but they should not replace human judgment. As AI becomes more embedded in medicine, the focus must remain on ethical use, patient empowerment, and scientific rigor, ensuring that facial analysis enhances health rather than undermines it.