Former OpenAI and xAI Engineer Quits Due to Burnout, Returns to Vietnam to Heal
A former OpenAI and xAI engineer has announced his departure from the frontier of artificial intelligence, citing burnout and a need to return to his home country of Vietnam for personal recovery. Hieu Pham, who played a key role in developing some of the world's most advanced AI systems, shared his decision in a candid post on X, describing his time at both companies as a "once-in-a-lifetime experience."

"I have helped create extremely intelligent entities that will meaningfully improve our lives. The work makes me proud," Pham wrote. But the intense pace of innovation came at a steep personal cost. "I cannot believe I would say this one day, but I am burnt out," he admitted. He went on to describe the toll on his mental health: "All the mental health deteriorating that I used to scoff at is real, miserable, scary, and dangerous."

Pham joined xAI in August 2024, then moved to OpenAI in August of the following year. According to his LinkedIn profile, he spent roughly seven months at OpenAI before stepping down. He plans to return to Vietnam with his family, where he intends to "try something new, and also search for a cure for my conditions." "I hope I will heal. Until then," he added.

Pham's exit is part of a growing trend of AI researchers stepping back from high-pressure environments at leading labs. In recent weeks, several prominent figures have announced departures, many citing similar concerns about work culture and long-term well-being. Mrinank Sharma, who led Anthropic's Safeguards Research Team, recently left the company, writing on X that the industry must grow its wisdom alongside its technological power. "We appear to be approaching a threshold where our wisdom must grow in equal measure to our capacity to affect the world, lest we face the consequences," he said.
Business Insider's Jacob Silverman reported that a pattern is emerging: top researchers are publicly reflecting on their unease with the technology they helped build, often questioning the ethical and human costs of the rapid pace of development. Dylan Scandinaro, who left Anthropic to join OpenAI as head of preparedness, acknowledged the dual nature of AI's trajectory: "AI is advancing rapidly. The potential benefits are great — and so are the risks of extreme and even irrecoverable harm."

Some researchers have compared the culture at leading AI labs to China's infamous "996" work schedule: 9 a.m. to 9 p.m., six days a week. Nathan Lambert, a senior research scientist at the Allen Institute for AI, described the environment on the Lex Fridman Podcast as one where long hours are not just expected but often self-imposed. "That's what OpenAI and Anthropic are like," he said, noting that many employees, especially engineers, embrace the grind out of passion for the work. Still, he acknowledged the toll it takes.
