
From a Dry Cleaner to the Queen Elizabeth Prize for Engineering: Fei-Fei Li Defies the Silicon Valley Tech Myth and Warns of AI's Dehumanizing Risks


In the spring of 2025, Professor Fei-Fei Li, who holds a bachelor's degree in physics from Princeton University and a PhD in computational neuroscience from Caltech, was awarded the Queen Elizabeth Prize for Engineering, an award often referred to as the "Nobel Prize of engineering." The jury recognized Li's foundational work in computer vision and deep learning; her research is considered to have "allowed machines to see the world in a way that is close to human perception for the first time."

"Engineering is not just about computing power and algorithms, but also about responsibility and empathy," Fei-Fei Li emphasized in her acceptance speech, stressing that technological breakthroughs do not equate to improved understanding. About the accelerating era of AI she remains vigilant: while algorithms are reconstructing language, images, and knowledge systems, they are also reshaping social power structures and human self-perception. The greatest risk of AI lies in "dehumanization," she wrote in the preface to her memoir, *The Worlds I See*: "If artificial intelligence forgets the value of humanity, it will lose its reason for existence."

In Silicon Valley's industrial narrative, Fei-Fei Li's dissenting voice is exceptionally rare. Rather than emphasizing scale and speed, she focuses on the social structures and ethical foundations behind intelligence: as machines gain a better understanding of humans, will humans still truly understand themselves? Her story goes beyond scientific achievement; it is above all a humanistic discourse from someone outside the mainstream. How to return AI technology to a human-centered approach is the question she truly wants to answer, beyond awards, honors, and accolades.

Photo of Fei-Fei Li receiving the award

As a "marginalized individual," she chose to detach herself from grand narratives.

Fei-Fei Li was born in Beijing in 1976; her father was a physicist and her mother an engineer. At the age of 12 she immigrated with her parents to New Jersey, speaking almost no English. Life in the early years after their immigration was very difficult: her parents supported the family by working in a dry cleaner and a restaurant, while she diligently studied English and worked part-time at both in her spare time to supplement the family income. "Life was really hard for immigrants or immigrant families," Li recalled in an interview. This experience became the psychological basis of her later "immigrant consciousness": as an "other" in a Western environment, Fei-Fei Li witnessed the prosperity of the American technology system but also experienced the inequality of its social structure.
In the context of female identity, the "other" refers to those who are placed outside the mainstream subject in power structures, social narratives, and cultural constructions, and who are observed, defined, and marginalized through the identity of "woman." The term originates from the Western philosophical concept of the Other (Otherness) and has since been widely used in gender studies.

In 2000, Fei-Fei Li began a PhD in computational neuroscience at Caltech, focusing her research on the intersection of visual object recognition and artificial intelligence. This interdisciplinary training made her realize that "vision" is not only a problem of perception but also a problem of understanding: can machines understand the world through experience, context, and memory, just as humans do? This question became the foundation for her later proposal of the ImageNet project.

Fei-Fei Li's doctoral dissertation

In 2007, while teaching at Princeton University, Fei-Fei Li and her research team launched the ImageNet project, which would later have a profound impact. In her 2009 paper, "ImageNet: A Large-Scale Hierarchical Image Database," Li noted that most computer vision algorithms at the time relied heavily on handcrafted features and small datasets, making the idea of "data-driven deep learning" quite controversial. She persisted nonetheless, and as the technological paradigm of AI quietly shifted, the large-scale, data-driven approach, once considered a "risky gamble" in academia, eventually became the mainstream consensus.

As VentureBeat pointed out in its coverage, the "data-driven paradigm" promoted by Fei-Fei Li changed the development path of computer vision and of AI as a whole: "After the ImageNet competition in 2012, the media quickly focused on the trend of deep learning. By 2013, almost all computer vision research had shifted to neural networks."

VentureBeat's report on the development of deep learning

Thus, as the AI craze arrived, this scientist, who had struggled on the margins of immigration, was finally thrust into the center of the era.

However, despite her research laying the foundation for the era of deep learning, Fei-Fei Li has never fully integrated into the Silicon Valley-dominated technological narrative: the unique perspective afforded by her marginal status has always allowed her to keep a cool distance from the global AI craze.

In the mainstream narrative of Silicon Valley, AI is portrayed as a core issue of technological competition, capital games, and national strategy. Fei-Fei Li, however, chooses to re-examine this system from a humanistic and ethical perspective. She has pointed out on multiple public occasions that the development of AI is being over-commercialized and militarized: research resources and social imagination are concentrated on "larger models" and "stronger computing power," while the social consequences of the technology are ignored.

In 2019, Fei-Fei Li returned to Stanford and co-founded the Stanford Institute for Human-Centered Artificial Intelligence (HAI) with Marc Tessier-Lavigne, John Etchemendy, and others. The institute reintegrates ethics, the public sector, and vulnerable groups into the technical design of AI, and its mission statement explicitly includes a core principle: AI must serve the broadest interests of humanity.

In an interview published by HAI, Fei-Fei Li stated frankly, "I am not a typical tech elite. I am an immigrant, a woman, an Asian, and a scholar. These identities have given me a unique perspective and viewpoint. The future impact of artificial intelligence is so profound that we must maintain our autonomy. We must choose how to build and use this technology. If we relinquish our autonomy, we will be in freefall."

An interview with Fei-Fei Li by the Stanford HAI Institute

Fei-Fei Li warns of the risk of "dehumanizing AI" in opposition to the Silicon Valley tech myth.

Unlike the mainstream narrative in Silicon Valley, Fei-Fei Li continues to advocate the concept of "AI4Humanity." She emphasizes the importance of incorporating social values and ethics into technological development, cautioning against the potential "dehumanization" that can accompany technological progress and stressing that AI should be human-centered: technology must align with human needs and values.

In 2018, amid the controversy over Project Maven, a military drone image-recognition project Google developed in cooperation with the U.S. Department of Defense, Fei-Fei Li made her opposition to the militarization of AI clear in an email: "AI should benefit mankind, and Google cannot let the public think that we are developing weapons."

Wired's report on Fei-Fei Li's AI4Humanity

In an interview with Issues, Fei-Fei Li also spoke frankly about the potential risks of AI: "The impact of AI technology is a double-edged sword. For society, this technology can cure diseases, discover drugs, find new materials, and create climate solutions. At the same time, it may also bring risks, such as the spread of misinformation and drastic changes in the labor market."

Issues: Interview with Li Feifei

In fact, to further limit the risks of AI, Fei-Fei Li has repeatedly emphasized in public the necessity of establishing an AI ethics oversight mechanism. In an interview with McKinsey & Company, she calmly stated that establishing a regulatory mechanism grounded in a legal system is extremely urgent: "Rationally speaking, this is essential for humanity when it makes new inventions and discoveries. This mechanism will be achieved in part through education. We need to make the public, policymakers, and decision-makers understand the power, limitations, and facts of this technology, and then integrate regulations into it. The regulatory framework will ensure its enforcement and implementation through laws."

McKinsey & Company interview with Fei-Fei Li

Meanwhile, to promote the driving role of education in the ethical regulation of AI, Fei-Fei Li also called on the Trump administration, at the Semafor Tech event in San Francisco in May 2025, to reduce its intervention in university finances. In a recent crackdown on immigration, the Trump administration cut billions of dollars in university research funding and revoked the visas of thousands of students. In response, Fei-Fei Li stated that as global technological competition intensifies, sanctioning research institutions poses potential risks to the ethical development of AI.

"The public sector, especially higher education, has always been a key component of the U.S. innovation ecosystem and a vital part of our economic growth. Almost all of the classic knowledge we know about artificial intelligence comes from academic research, whether it's algorithms, data-driven approaches, or early microprocessor research," said Fei-Fei Li. "The government should continue to provide sufficient resources for higher education and the public sector to conduct this kind of innovative, uninhibited, curiosity-driven research, which is crucial for the healthy development of our ecosystem and for nurturing the next generation."

In addition, Fei-Fei Li stated frankly that the visa quotas the United States imposes on citizens of certain countries have long made it difficult for many talented individuals to keep their jobs. "To be honest, I hope my students can obtain work visas and find pathways to immigration."

Semafor's coverage of Semafor Tech events

In short, despite Silicon Valley's fervent technological optimism, Fei-Fei Li has always maintained a reflective stance, wary of the risk of AI "dehumanizing" society. "Many people, especially in Silicon Valley, are talking about increasing productivity, but increased productivity doesn't mean everyone can share in the prosperity. We must recognize that AI is merely a tool; the tool itself has no inherent value. The value of a tool ultimately stems from human value."

She insists that a human-centered approach to artificial intelligence is necessary at the individual, community, and societal levels. "We need a human-centered framework, with concentric circles of responsibility among individuals, communities, and society, to ensure the shared commitment that AI should improve human well-being."

Drawing on marginal experience to interpret the opportunities and burdens of a complex niche.

Facing multiple marginalized identities as a woman, an immigrant, an Asian, and an academic, Fei-Fei Li acknowledges that these experiences have greatly influenced her research and advocacy. In an interview with HAI, she noted that it is precisely these marginal experiences that have given her a perspective on new technologies drastically different from that of children who grew up in more stable environments and were exposed to computers from the age of five, enabling her to keep recognizing the structural biases in the technological system.

"Science explores the unknown, just as immigration explores the unknown. Both are journeys filled with uncertainty, and you must find your own guiding light. In fact, I think that's exactly why I want to work on human-centered artificial intelligence. My immigration experience, my dry-cleaning business, my parents' health, everything I've experienced is deeply rooted in human nature. This has given me a unique perspective and viewpoint," Fei-Fei Li said frankly.

However, the insights gained from her marginalized identity are also accompanied by misunderstandings, controversies, and pressures. As one of the most influential women in the global technology field, Fei-Fei Li is often portrayed by the media as the "AI Godmother," but she has expressed her discomfort with this symbolism on many public occasions and her weariness of being called a "female role model."

"I don't really like being called the AI Godmother." In a report by Axios, Fei-Fei Li noted that the tech industry's expectations of women are overly symbolic, so female scientists are often burdened with a "role-based imagination": women are frequently invited to tell "inspirational stories" and expected to represent diversity, breakthroughs, and hope, but are not seen as ordinary scientists, researchers, or policymakers, and are not expected to participate equally in core technology and strategy discussions.

"But I do want to acknowledge the contributions of women, because they are often overlooked in the history of science. I hope there will be more than one godmother in the field of AI," Fei-Fei Li further stated, adding that the real challenge is to make gender diversity the norm in the industry. To move this ideal toward reality, she launched the AI4ALL education program at Stanford University, aimed at supporting women and minorities entering the field of AI.

Axios's report on Fei-Fei Li

In addition, Fei-Fei Li's ethnic identity seems to have drawn extra attention to the racial issues surrounding her research.

While ImageNet is considered a cornerstone of computer vision research, its "people" subtree has long been criticized by academia and the media. As early as 2019, The Art Newspaper reported on concerns about ImageNet's potential racist tendencies, noting that the database frequently assigned people significantly inaccurate labels. Artist Trevor Paglen and researcher Kate Crawford, for example, published harsh assessments of the dataset after probing it: "An editor at The Verge was categorized as a pipe smoker and flight attendant, and other social media users reported being described with racist and other highly offensive terms."

Although the flood of negative reviews prompted the ImageNet team to discuss cleaning and rebuilding the dataset, deleting approximately 600,000 photos, the assumption that ImageNet is a "neutral cornerstone" is still being questioned.

The Art Newspaper's coverage of the negative news surrounding ImageNet

At the same time, Fei-Fei Li's minority views have left her standing in the gray area between the Silicon Valley mainstream and the general public, and her role in the AI industry has accordingly sparked continuous controversy.

"She is a key figure behind the booming development of AI today, but not all computer scientists agree that her idea of a giant visual database was correct," AP contributor Matt O'Brien wrote in a column. As for the "human-centeredness" and "AI ethics" concerns championed by a minority of scientists such as Fei-Fei Li, some researchers have long criticized the extreme-risk theories surrounding them as quasi-religious propaganda. Palantir's Chief Technology Officer, Shyam Sankar, for example, has said he never believed the "AI doomsday" narrative and considers the possibility of AI bringing catastrophic consequences extremely low; he calls it a rumor spread by "transhumanists."

"The threat theory is just a fundraising gimmick," Sankar dismissed. "Companies at the forefront of development can use this to attract investment."

AP's report on Fei-Fei Li

However, some commentators believe that Fei-Fei Li's role in the fusion of technology and capital is misaligned with her research vision: despite her long-standing emphasis on "human-centeredness" and opposition to the excessive commercialization of AI, as the former chief scientist of Google Cloud AI she inevitably helped drive the industrialization of AI.

Therefore, as both a leading figure in "human-centered AI" and a shaper of commercial AI infrastructure, Fei-Fei Li stands at a somewhat delicate crossroads.

Business Insider report

In short, within the AI myth, Fei-Fei Li's stance reflects the complex interaction among scientists, algorithms, and human values, and it has long served as a tense cautionary note: Can technology exist independently of social, ethical, and humanistic considerations? How can a balance be struck between rapid commercialization and long-term social responsibility? The question Fei-Fei Li poses with "human-centered AI" remains an unresolved challenge beyond Silicon Valley's narrative of technological worship.

Reference Links:
1. https://www.businessinsider.com/palantir-shyam-sankar-skeptical-ai-jobs-2025-10
2. https://apnews.com/article/ai-pioneer-feifei-li-stanford-computer-vision-imagenet-702717c10defd89feabf01e6c1566a4b
3. https://www.wired.com/story/fei-fei-li-artificial-intelligence-humanity/
4. https://www.theartnewspaper.com/2019/09/23/leading-online-database-to-remove-600000-images-after-art-project-reveals-its-racist-bias