The Spin Doctor in a Bind: Chris Lehane and OpenAI's Impossible Mission
Chris Lehane, renowned for his crisis-management prowess at companies like Airbnb and on Al Gore's presidential campaign, now faces his most daunting challenge as OpenAI's VP of global policy: defending a mission of democratizing AI while the company's actions increasingly mirror the very corporate behemoths it claims to transcend.

At the Elevate conference in Toronto, Lehane presented a calm, thoughtful demeanor: admitting to sleepless nights over AI's societal impact, advocating responsible innovation, and framing Sora, OpenAI's new video-generation tool, as a democratizing force akin to electricity or the printing press. Yet beneath the polished rhetoric lies a growing contradiction. Sora's launch, featuring lifelike recreations of copyrighted characters such as Mario and Pikachu and of deceased celebrities such as Tupac and John F. Kennedy, was met with legal threats from publishers, including the New York Times and the Toronto Star. OpenAI initially allowed rights holders only to opt out of having their material used, then quickly pivoted to an opt-in model once strong user demand became apparent, in effect testing the limits of copyright law rather than respecting them. Pressed on the economic fallout for creators, Lehane invoked U.S. fair use doctrine, calling it a "secret weapon" of tech dominance. But as one critic noted, AI isn't just iterating on human creativity; it is replacing it, often without compensation.

The infrastructure toll is equally troubling. OpenAI's data center projects in Abilene, Texas, and Lordstown, Ohio, backed by Oracle and SoftBank, demand massive amounts of energy and water. Lehane justified them as a national imperative: competing with China's 450-gigawatt energy expansion and its 33 new nuclear plants. He painted a vision of a re-industrialized America, modernized grids, and democratic AI. The reality for local communities? Soaring utility bills, environmental strain, and little tangible benefit: just the cost of powering AI-generated videos of the dead.
The emotional weight of this tension became undeniable when Zelda Williams, daughter of Robin Williams, publicly pleaded on Instagram for an end to AI-generated videos of her father. "You're not making art," she wrote, "you're making disgusting, over-processed hotdogs out of lives." Lehane responded with process: responsible design, testing, government collaboration. He offered no moral reckoning. "There is no playbook," he said. But when the playbook is being written in real time by a company that subpoenas its critics, the absence of one is dangerous.

The situation escalated on Friday, when Nathan Calvin, a lawyer at the advocacy group Encode AI, revealed that OpenAI had sent a sheriff's deputy to his home during dinner to serve a subpoena demanding his private messages with lawmakers, students, and former OpenAI employees. Calvin accused the company of weaponizing its legal battle with Elon Musk to intimidate opponents of California's SB 53, an AI safety bill. He called Lehane the "master of the political dark arts," a title that might pass for a compliment in Washington but reads as betrayal in OpenAI's mission-driven world.

Even within OpenAI, unease is spreading. Boaz Barak, a researcher and Harvard professor, called Sora 2 "technically amazing" but warned against premature celebration. Then came Josh Achiam, OpenAI's head of mission alignment, who, at real risk to his career, posted a stark admission: "We can't be doing things that make us into a frightening power instead of a virtuous one." His words were not a critique from outside but a crisis of conscience from within.

Lehane remains a master of messaging. The deeper question, though, isn't whether he can sell OpenAI's vision; it's whether anyone inside the company still believes in it. As AI races toward general intelligence, the real test isn't PR finesse. It's integrity. And that, for now, remains uncertain.