Chris Lehane’s Spin vs. OpenAI’s Reality: Can a Crisis Manager Save a Company at Odds with Its Own Mission?
Chris Lehane is a master of damage control. From managing Al Gore's image during the 2000 election fallout to navigating Airbnb's global regulatory storms, he has built a reputation as one of the most skilled political communicators in tech. Now, as OpenAI's VP of global policy, he is tasked with a near-impossible mission: convincing the world that OpenAI is a force for good in the AI revolution, even as its actions increasingly mirror those of the corporate giants it once claimed to oppose.

I met Lehane briefly at the Elevate conference in Toronto, where he spoke with the calm confidence of someone who has rehearsed every line. He acknowledged sleepless nights, worries about AI's long-term impact, and the weight of responsibility. He sounded sincere, perhaps too sincere. But sincerity doesn't erase contradictions.

Take Sora, OpenAI's new video-generation tool. Launched invite-only, it quickly climbed the App Store charts, fueled by users creating AI avatars of themselves, of pop-culture icons, and, most controversially, of deceased celebrities like Tupac and John Lennon. The tool's training data included vast amounts of copyrighted material: images, videos, performances, used without clear consent. And OpenAI offered rights holders only an opt-out, a system that effectively assumed permission unless someone objected. That's not evolution; it's exploitation by design.

When pressed on whether creators were being fairly compensated, Lehane invoked fair use, the legal doctrine that allows limited use of copyrighted material without permission. He called it America's "secret weapon" in tech. But fair use is not a free pass. It's a narrow exception, not a business model. And for publishers, artists, and the families of the deceased, it feels like theft.

Lehane's most revealing moment came when I challenged him: "Isn't this a replacement, not a democratization?" He paused. "We're all going to need to figure this out," he said.
"It's glib to say we'll fix it later, but I think we will." That admission, that they are figuring it out as they go, was the closest he came to honesty.

Then there's the infrastructure. OpenAI is building massive data centers in economically struggling towns like Lordstown, Ohio, and Abilene, Texas. These facilities consume enormous amounts of water and electricity. Lehane framed this as progress: modernizing energy grids, re-industrializing America. But for residents, it may mean skyrocketing utility bills and strained local resources. Is OpenAI's vision of "democratizing AI" really about shared benefit, or just about relocating environmental and economic burdens?

The emotional toll is real. Zelda Williams, daughter of Robin Williams, spent days pleading on Instagram for people to stop sharing AI-generated videos of her father. "You're not making art," she wrote. "You're making over-processed hotdogs out of human lives." When I asked Lehane about this, he spoke of "responsible design" and "testing frameworks," processes that feel like smoke screens when real harm is being done.

Then came Friday's bombshell. Nathan Calvin, a lawyer at Encode AI, revealed that OpenAI had sent a sheriff's deputy to his home in Washington, D.C., during dinner to serve a subpoena. The target? His private messages with lawmakers, students, and former OpenAI employees. Calvin accused the company of using its legal muscle to intimidate critics, particularly over its opposition to California's SB 53, an AI safety bill. He called Lehane the "master of the political dark arts," a label that might pass for a compliment in Washington. But for a company that claims to serve humanity, it's damning.

Even OpenAI insiders are uneasy. Boaz Barak, an OpenAI researcher and Harvard professor, called Sora 2 "technically amazing" but warned that OpenAI risks repeating the mistakes of social media and deepfakes.
Then came Josh Achiam, OpenAI's head of mission alignment, who wrote in a rare public statement: "We can't be doing things that make us into a frightening power instead of a virtuous one." He acknowledged the risk to his career, yet he spoke anyway.

That's the real story. The best crisis manager in tech can't fix a company that's losing its soul. Lehane may be able to spin the narrative, but if OpenAI's own people no longer believe in its mission, no amount of messaging will save it. The question isn't whether Lehane can sell the vision. It's whether anyone still buys it.