Instagram’s Adam Mosseri counters MrBeast’s AI fears, says society must adapt to blurred lines between real and synthetic content while emphasizing the need for digital literacy in the age of AI-generated media.

Instagram head Adam Mosseri addressed concerns about AI's impact on creators during a recent appearance at the Bloomberg Screentime conference, pushing back on warnings from popular YouTuber MrBeast, who recently described AI-generated content as ushering in "scary times" for the creative industry. Mosseri acknowledged the risks but argued that AI will ultimately expand creative opportunities rather than eliminate them.

He explained that while AI won't replace the large-scale productions MrBeast is known for, it will lower the barrier to entry for content creation. "What the internet did was allow almost anyone to become a publisher by reducing the cost of distribution to essentially zero," Mosseri said. "Now, generative AI looks like it's going to reduce the cost of producing content to basically zero." This shift, he believes, will empower people who previously lacked the resources or skills to create high-quality content.

Mosseri emphasized that most creators won't rely solely on AI to replicate existing formats. Instead, they'll use AI tools as part of a hybrid workflow, drawing on them for tasks like color grading, editing, or generating visual effects, rather than producing fully synthetic videos. He predicted that the future won't be a clear divide between real and AI-generated content, but rather a spectrum where both coexist.

On the challenge of identifying AI content, Mosseri admitted Meta's early attempts to automatically label AI-generated posts were flawed. "It was practically a fool's errand," he said, noting that the system mistakenly flagged real content as AI because creators often used AI tools like Adobe's for legitimate editing tasks. He stressed that better labeling is needed, but cautioned that the platform can't do it alone.

Instead, he suggested that Meta should focus on providing more context, for example through features like Community Notes, its crowdsourced fact-checking system. Rather than relying on third-party experts, Community Notes allows users to add context when there's consensus on a correction. Mosseri hinted this model could be adapted to help users understand when content may be AI-generated, even if it hasn't been labeled.

But he also placed responsibility on society. "My kids are young. They're nine, seven, and five," he said. "I need them to understand that just because they're seeing a video doesn't mean it actually happened." He stressed that future generations will have to learn to question content, evaluate sources, and consider motives: skills that are increasingly vital in an era of deepfakes and synthetic media.

Beyond AI, Mosseri discussed Instagram's evolving priorities, highlighting its focus on Reels and direct messages as core features driven by user behavior. He also commented on TikTok's recent U.S. ownership changes, saying the app remains largely unchanged in terms of interface, algorithm, and creator ecosystem. "It's the same app, the same ranking system, the same creators," he noted. He welcomed the competition, saying TikTok's presence has pushed Instagram to improve.

Ultimately, Mosseri framed AI not as a threat, but as a transformational force that demands new thinking, not just from platforms, but from users, creators, and families alike.