Can We Label Our Way to Reality in the Age of Deepfakes?
Reality is unraveling, and the tools we thought would save it, like labeling systems for AI-generated content, are failing. At the heart of this crisis is C2PA, a metadata standard promoted by Adobe and backed by giants like Meta, Microsoft, OpenAI, and Google. The idea was simple: embed provenance metadata into photos and videos at the moment of creation, so platforms and users could easily verify where a file came from. But in practice, the system has collapsed under its own contradictions.

C2PA was never designed to detect AI content. It was meant to track a photo's history: when it was taken, what tools were used, who edited it. Yet companies have repurposed it as a shield against deepfakes, slapping on labels like "AI-generated" with little regard for accuracy or consistency. The problem? The metadata can be stripped, altered, or ignored entirely during upload (a short sketch below shows how little that takes). Platforms like Instagram and X (formerly Twitter) have shown little commitment to preserving it, and even when they do, they often fail to display the information in a way users can understand.

The situation is worse than a set of technical flaws. Apple, one of the most influential players in the camera ecosystem, has stayed silent. Despite having the power to set a global standard with the iPhone, it has adopted neither C2PA nor SynthID. Google's Pixel phones do include the metadata, but support is not universal across Android. Samsung, Nikon, Sony, and others have added it to new models, but older cameras remain untagged, which means the vast majority of photos in circulation aren't verifiable.

And then there's the distribution problem. Platforms like YouTube and TikTok use the standard inconsistently, if at all. YouTube, despite being run by Google and having SynthID in-house, rarely surfaces AI labels. TikTok applies them sporadically. The result is a patchwork of signals that confuses rather than clarifies: users see a label here, nothing there, and are left to guess.

Worse still, the labels themselves provoke backlash. Creators feel devalued when their work is tagged "AI-generated," even when AI was used only in minor editing steps. Audiences react with anger, often mistaking the label for a judgment on quality rather than origin. That has led to real pushback; some platforms have quietly removed labels altogether, not because they're ineffective, but because they're politically toxic.

The deeper issue, as Jess Weatherbed points out, is that the entire system rests on a false premise: that we can label our way into a shared reality. But reality isn't defined by metadata. It's shaped by trust, context, and shared experience. When the White House shares AI-manipulated images of arrests, showing people crying as they're taken into custody, those images aren't just fake. They're weapons. And platforms don't label them, because doing so would disrupt business models that thrive on attention, engagement, and volume.

Meta's own leadership acknowledges the crisis. Adam Mosseri, Instagram's head, admitted in a New Year's Eve post that we can no longer assume photos and videos are accurate; we must now start from skepticism. That's not a fix. It's an admission of defeat.

There is no technical solution on the horizon. C2PA won't be the savior. SynthID won't be enough. Inference tools that try to detect AI patterns are unreliable and often wrong. The only path forward may be regulation. Governments are beginning to act, and laws like the UK's Online Safety Act signal a shift toward accountability.
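To make the fragility concrete, here is a minimal sketch of the stripping problem flagged above. It assumes a hypothetical camera_original.jpg carrying Content Credentials, uses the Pillow imaging library, and relies on a crude byte scan rather than real C2PA validation (dedicated SDKs exist for that). The point is the default behavior: an ordinary re-encode, the kind upload pipelines run constantly to resize or recompress images, simply never copies the embedded segments.

```python
from PIL import Image  # pip install Pillow


def has_c2pa_bytes(path: str) -> bool:
    """Crude presence check: C2PA manifests are stored in JUMBF boxes whose
    label contains "c2pa". A raw byte scan proves nothing about validity;
    it only hints whether a manifest has survived at all."""
    with open(path, "rb") as f:
        return b"c2pa" in f.read()


# "camera_original.jpg" is a hypothetical file from a C2PA-capable camera;
# any JPEG with embedded metadata demonstrates the same effect.
src = Image.open("camera_original.jpg")
print("manifest bytes present:", has_c2pa_bytes("camera_original.jpg"))
print("EXIF bytes present:", bool(src.info.get("exif")))

# A plain re-save writes fresh pixel data and, unless the caller explicitly
# passes the metadata along, drops the embedded segments. Nothing has to
# "attack" the provenance; it is discarded by default.
src.save("reuploaded.jpg", quality=85)

print("manifest bytes present:", has_c2pa_bytes("reuploaded.jpg"))
print("EXIF bytes present:", bool(Image.open("reuploaded.jpg").info.get("exif")))
```

Pillow and EXIF stand in here for any pipeline and any embedded metadata; the point is not the specific library but the default, which is loss.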
But until platforms are legally required to preserve metadata, enforce transparency, and face real consequences for spreading misinformation, the system will remain broken. The war on reality isn't being lost because of technology. It's being lost because the people who control the tools don't want to win it. They profit from confusion. And until that changes, labeling won't save us. We'll have to build something new, something that doesn't rely on trust in metadata but on trust in institutions, laws, and shared truth. Until then, we're on our own.
