TikTok Introduces User Controls for AI-Generated Content Visibility
TikTok is introducing a new feature that gives users greater control over the amount of AI-generated content they see in their “For You” feed. The update, part of the app’s “Manage Topics” tool, allows users to adjust how much AI-generated content (AIGC) appears in their feed using a simple slider. This addition joins existing topic-based controls for categories like Dance, Sports, and Food & Drinks, enabling users to tailor their experience without removing content entirely. The feature is currently in testing and will roll out over the coming weeks.

The move comes amid growing use of AI tools across social media, with platforms like Meta and OpenAI launching AI-centric features. Meta introduced Vibes, a feed for AI-generated short videos, shortly before OpenAI unveiled Sora, a platform for creating and sharing realistic AI videos. Since then, AI-generated clips—often depicting celebrities, historical figures, or fictional scenarios—have appeared on TikTok, sparking concerns about authenticity and misinformation.

To address these concerns, TikTok is enhancing its AI content labeling systems. While the platform already requires users to tag AI-generated videos and uses C2PA’s Content Credentials—a cross-industry standard that embeds metadata to identify AI content—the company acknowledges that these labels can be stripped away when videos are reuploaded or edited. To combat this, TikTok is testing “invisible watermarking,” a technology that embeds detection markers only readable by TikTok’s own systems. These watermarks will be applied to content created using TikTok’s AI tools, such as AI Editor Pro, and to videos already tagged with C2PA credentials.

This layered approach aims to improve TikTok’s ability to accurately identify AI content, even after it’s shared elsewhere. By combining visible metadata with invisible digital fingerprints, the platform hopes to maintain transparency and accountability across its ecosystem.
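TikTok has not disclosed how its invisible watermark works, but the general idea behind such schemes can be illustrated with a toy sketch: hide a short machine-readable marker in the least-significant bits of raw pixel bytes, where it is imperceptible to viewers but recoverable by the platform's own detector. This is a simplified illustration only, not TikTok's actual method (real video watermarks must also survive compression and editing, which this toy does not).

```python
# Toy "invisible watermark": embed a marker in the least-significant
# bits (LSBs) of pixel bytes. Changing only the lowest bit alters each
# byte's value by at most 1, which is visually imperceptible.
# NOTE: purely illustrative; TikTok's real technique is unpublished.

def embed_marker(pixels: bytearray, marker: bytes) -> bytearray:
    """Write each bit of `marker` (MSB first) into the LSBs of pixel bytes."""
    out = bytearray(pixels)
    bits = [(byte >> i) & 1 for byte in marker for i in range(7, -1, -1)]
    if len(bits) > len(out):
        raise ValueError("marker too large for carrier")
    for i, bit in enumerate(bits):
        out[i] = (out[i] & 0xFE) | bit  # replace the lowest bit only
    return out

def extract_marker(pixels: bytes, length: int) -> bytes:
    """Read `length` bytes back out of the LSBs."""
    bits = [b & 1 for b in pixels[: length * 8]]
    return bytes(
        sum(bit << (7 - i) for i, bit in enumerate(bits[n * 8 : n * 8 + 8]))
        for n in range(length)
    )

carrier = bytearray(range(256)) * 4          # stand-in for raw pixel data
stamped = embed_marker(carrier, b"AIGC")     # hypothetical 4-byte marker
assert extract_marker(stamped, 4) == b"AIGC"
```

Unlike a C2PA manifest, which lives in file metadata and is lost when a video is re-encoded or screen-recorded, a signal carried in the pixels themselves travels with the content, which is why platforms layer the two approaches.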
In parallel, TikTok is launching a $2 million AI literacy fund to support educational initiatives focused on AI safety and awareness. The fund will partner with organizations like Girls Who Code to develop resources that help users understand how AI works, recognize synthetic media, and navigate digital content responsibly.

The new AI content control reflects TikTok’s broader effort to balance innovation with user trust. As AI-generated content becomes more prevalent and realistic, platforms face mounting pressure to ensure transparency and user agency. By empowering users to customize their feeds and strengthening detection technologies, TikTok aims to foster a safer, more informed environment.

While the AI slider is currently limited to testing, its rollout signals a shift toward user-centric AI governance. As other platforms explore AI-only feeds or AI-first experiences, TikTok’s approach offers a middle ground—prioritizing choice, clarity, and control. The success of this initiative could influence how social media companies handle AI content in the future, especially as regulatory scrutiny grows around synthetic media and digital authenticity.

Ultimately, TikTok’s strategy underscores a growing industry trend: rather than banning AI content, platforms are focusing on labeling, transparency, and user empowerment. With invisible watermarks, improved detection, and education efforts, TikTok is positioning itself as a leader in responsible AI integration on social media.
