AI Creators Threaten to Overwhelm the Influencer Economy with Fake Content and Ethical Dilemmas
AI creators are rapidly reshaping the digital landscape, and with it, the future of the influencer economy. Jeremy Carrasco, who launched his social media presence just months ago, has already built a following of more than 300,000 on TikTok and Instagram. Unlike traditional influencers, Carrasco isn't promoting products or lifestyles; he's exposing the realities of AI-generated content and the growing threat it poses to authentic creators.

Initially drawn to YouTube as a creator, Carrasco ended up working behind the scenes as a producer and director. He shifted gears after noticing a glaring gap in the public discourse around generative AI: most conversations were dominated by tech companies, leaving creators, the people who understand storytelling, visuals, and audience engagement, largely unheard. That's when he launched ShowToolsAI, aiming to empower creators with ethical, practical AI tools.

His optimism faded quickly. He realized few people knew how to identify AI-generated videos: the unnatural textures, jittery eyes, or objects that flicker in and out of existence. That lack of awareness created a vacuum he felt compelled to fill. His videos now focus on spotting AI tells and educating audiences about the limitations and risks of tools like Sora.

The stakes are rising as AI video tools become more accessible and powerful. Sora 2, for example, is free and capable of generating convincing clips with audio, dramatically lowering the barrier to entry. While some use it for harmless fun, like AI cats performing absurd stunts, others exploit it for profit. A seven-second clip might not earn much on its own, but stitched into a viral compilation it can rack up millions of views and earn around $1,000 in platform payouts, a significant income for creators in developing nations.

Not all AI content is benign, either. Accounts like Yang Mun, a fabricated Chinese-medicine influencer with over 1.5 million followers, peddle wellness advice wrapped in an AI-generated persona.
These accounts are often scams designed to funnel viewers to websites selling AI-written ebooks, content so generic it is likely entirely synthetic. Even more alarming are cases like Maddie Quinn, in which a creator's likeness was stolen and replaced with an AI avatar. Entire identities are replicated without consent and used on platforms like OnlyFans, blurring the line between authenticity and deception.

Asked whether there are ethical uses of generative AI in content creation, Carrasco is skeptical. "Generally no," he says. While he acknowledges niche exceptions, such as accessibility tools or culturally sensitive AI applications, he remains critical of the dominant model: training AI on stolen human data. Even studios like Lionsgate, which attempted to build ethical AI from their own archives, failed for lack of sufficient data.

The platforms themselves are accelerating the crisis. TikTok, Instagram, YouTube, and Facebook allow AI-generated content to flood their feeds without consistent labeling or enforcement. Meanwhile, Meta, Amazon, and DirecTV are rolling out their own AI ad tools. These systems churn out low-quality synthetic ads at scale, threatening to undercut human creators who rely on sponsorships. Eventually, platforms may bypass creators altogether and sell AI-generated ads directly to advertisers.

This shift undermines the entire creator economy. Carrasco warns that while adopting AI might seem like a rational survival strategy for creators, doing so risks making them part of the problem, helping to drown out authentic voices with synthetic noise.

In the end, the rise of AI creators isn't just a technological shift; it's a cultural and economic disruption. Without meaningful regulation, transparency, and ethical boundaries, the influencer economy may not survive the tide of artificial content.
