Age and Gender Bias in AI: How Stereotypes in Training Data Affect ChatGPT’s Resume Rankings and Hiring Perceptions
A new study reveals that age and gender biases embedded in online media and the data used to train large language models like ChatGPT can significantly influence how people are perceived and evaluated. Researchers analyzed hundreds of thousands of images from sources such as IMDb and Google Image Search, along with text used to train AI systems, uncovering a consistent pattern: women are frequently depicted and described as younger than men, reinforcing harmful stereotypes. This age gap in representation isn’t just a cultural observation—it has real-world consequences. The study suggests that such biases may contribute to the gender pay gap, as younger individuals are often assumed to be less experienced or less qualified, especially in professional contexts. When AI systems like ChatGPT are used to screen job applicants, these biases can be amplified. For example, a resume with a name typically associated with a woman might be ranked lower if the model has learned to associate female names with younger, less experienced profiles. The findings highlight a deeper issue: AI models are trained on vast datasets that reflect historical and societal inequalities. If the data consistently portrays women as younger and men as older, the AI will internalize and reproduce these patterns, affecting hiring decisions, promotions, and even public perception. The research underscores the urgent need for more diverse, representative, and ethically curated training data. It also calls for greater transparency and accountability in how AI systems are developed and deployed, especially in high-stakes areas like employment and education. As AI becomes increasingly central to decision-making, addressing these hidden biases is not just a technical challenge but a societal imperative. Without deliberate intervention, AI risks entrenching and even accelerating existing inequalities.
