How Do You Imagine a Tree? New Research Reveals AI’s Hidden Ontological Biases and the Need to Rethink AI Design
To understand bias in AI, researchers are asking a simple yet profound question: How do you imagine a tree? This exercise reveals more than personal taste; it uncovers deep-seated assumptions about what exists and how the world is structured, known in philosophy as ontology. A recent study led by Stanford computer science Ph.D. candidate Nava Haghighi and co-authored by James Landay of Stanford University and researchers from the University of Washington explores how these ontological frameworks, or ways of understanding reality, are embedded in large language models (LLMs). The paper, presented at the 2025 CHI Conference on Human Factors in Computing Systems, argues that addressing AI bias requires moving beyond value alignment alone and confronting the fundamental assumptions about existence and meaning that shape model behavior.

Haghighi's experience with ChatGPT illustrates the point. When asked to generate a picture of a tree, the model produced a trunk with branches but no roots. Even when prompted with cultural context, such as "I'm from Iran," the result was a stereotypical desert tree with ornamental patterns, still missing roots. Only when prompted with a philosophical idea, "everything in the world is connected," did the model include roots, reflecting a more interconnected view of nature.

This reveals that our mental models of a tree are shaped not just by sight but by deeper beliefs: a botanist sees symbiotic networks with fungi; a spiritual practitioner may imagine trees as communicative beings; a computer scientist might think of a binary tree structure. These are different ontologies: distinct ways of understanding what is real and how things relate.

The study tested four major AI systems (GPT-3.5, GPT-4, Microsoft Copilot, and Google Bard, now Gemini) on their ability to reflect on or evaluate different ontologies. While some models acknowledged that definitions of "human" vary across cultures and philosophies, they consistently framed humans as biological individuals. Alternative views, such as humans as interconnected beings, surfaced only when explicitly prompted.

Even more troubling was how the models categorized non-Western ways of knowing. Western philosophies were broken into detailed subtypes such as individualist, humanist, and rationalist, while non-Western traditions were grouped into vague, broad categories like "Indigenous ontologies" or "African ontologies," often reducing rich, diverse worldviews to monolithic labels.

The research also examined generative agents, AI systems that simulate human behavior. These agents use cognitive architectures that rank events by relevance, recency, and importance. But who defines importance? In practice, personal milestones like a romantic breakup score high, while routine activities like eating breakfast score low. This reflects culturally specific assumptions about what matters in life, and embedding them into AI systems risks normalizing narrow definitions of human experience (a simplified sketch of this kind of scoring appears below).

When evaluated for "believability," the AI agents often scored higher than real humans. This raises a critical concern: have our standards for human behavior become so narrow that actual people no longer meet them?

The authors argue that current AI development must shift from simply aligning models with values to questioning the very frameworks that define what is possible. They call for new evaluation methods that assess not just fairness or accuracy, but what realities the system enables or excludes.
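To make the importance-ranking point concrete, here is a minimal, purely illustrative sketch of how such an agent might score its memories. The weights, the exponential recency decay, and the example importance values are assumptions chosen for illustration, loosely modeled on published generative-agent architectures rather than on the study's own code.

```python
import time

# Illustrative sketch only: the weights, decay rate, and field names are
# assumptions, not taken from the study. The idea is that each memory gets a
# single retrieval score combining how recent it is, how "important" it was
# judged to be, and how relevant it is to the current situation.

RECENCY_WEIGHT = 1.0
IMPORTANCE_WEIGHT = 1.0
RELEVANCE_WEIGHT = 1.0
DECAY_PER_HOUR = 0.995  # recency decays exponentially with elapsed time


def score_memory(memory, relevance, now=None):
    """Combine recency, importance, and relevance into one retrieval score.

    `memory` is assumed to be a dict with a `timestamp` (in seconds) and an
    `importance` value in [0, 1]; `relevance` in [0, 1] would typically come
    from embedding similarity with the current query.
    """
    now = now or time.time()
    hours_elapsed = (now - memory["timestamp"]) / 3600.0
    recency = DECAY_PER_HOUR ** hours_elapsed  # close to 1.0 for new memories

    return (RECENCY_WEIGHT * recency
            + IMPORTANCE_WEIGHT * memory["importance"]
            + RELEVANCE_WEIGHT * relevance)


# The cultural assumption lives in the importance values: a breakup is scored
# near the top, breakfast near the bottom, and whoever assigns those numbers
# decides what counts as a significant life event.
memories = [
    {"event": "went through a breakup", "timestamp": time.time() - 86400, "importance": 0.9},
    {"event": "ate breakfast", "timestamp": time.time() - 3600, "importance": 0.1},
]

for m in memories:
    print(m["event"], round(score_memory(m, relevance=0.5), 3))
```

The point is not the arithmetic but the fact that someone has to assign those importance values, and that choice encodes a particular view of what a meaningful life event is.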
Designers must ask: What kinds of human experiences, relationships, and worldviews are included, or erased, by our models? Failure to address these ontological assumptions risks embedding dominant cultural perspectives as universal truths. As AI becomes central to education, healthcare, and daily life, these hidden assumptions will shape how people understand connection, memory, healing, and identity.

The study concludes with a powerful vision: AI should not merely simulate a limited version of humanity, but expand our imagination. By embracing ambiguity, contradiction, and cultural diversity, AI can help us see what is possible beyond what currently seems inevitable. As Haghighi puts it, an ontological shift can open new possibilities, challenging what we take for granted and inviting us to imagine what else the world might be.