Meta sues AI 'nudify' app maker over ads that violate platform rules
Meta has taken legal action against Joy Timeline, a Hong Kong-based company that advertised generative AI apps that digitally "undress" people without their consent. The lawsuit, filed in Hong Kong, seeks to stop Joy Timeline from placing ads for these apps across Meta's platforms, including Facebook, Messenger, Instagram, and Threads. The move follows a CBS News investigation published last week that uncovered hundreds of such ads on Meta's platforms. Meta emphasized its commitment to protecting its community from this type of abuse and said it will continue to take the steps necessary, including legal action, to address the issue.

Joy Timeline, the company behind CrushAI, has repeatedly attempted to bypass Meta's ad review process. Even after Meta removed multiple ads and blocked associated URLs, Joy Timeline set up numerous new advertiser accounts and frequently changed domain names to evade detection. These tactics illustrate the enforcement challenge Meta faces as exploitative AI apps continually find new ways to stay active.

According to Alexios Mantzarlis, author of the Faked Up newsletter, CrushAI ran more than 8,000 ads for its AI undresser services on Meta's platforms in early 2025. Mantzarlis reported that CrushAI's websites drew roughly 90% of their traffic from Facebook and Instagram. Many of the ads targeted men aged 18 to 65 in the US, UK, and European Union, with a significant focus on using the app against women and female celebrities. The widespread availability of these apps raises the risk of blackmail and sextortion, and the tools often end up in the hands of minors.

Meta is not alone in grappling with this issue. Other social media platforms, such as X (formerly Twitter), Reddit, and YouTube, have also seen a rise in ads for AI undressing tools. The growing popularity of generative AI has introduced new challenges in moderating content that can harm users.
In response, Meta has developed technology to identify ads for AI nudify services even when the ads contain no nudity, using matching algorithms to detect and remove copycat ads more efficiently. Since the beginning of 2025, Meta has disrupted four separate networks promoting AI nudify services, and it has expanded its list of flagged terms, phrases, and emojis to help catch these ads.

Meta is also participating in the Tech Coalition's Lantern program, an initiative among major tech companies aimed at preventing child sexual exploitation online. Through the program, Meta has shared more than 3,800 unique URLs related to AI nudify apps since March.

On the legislative front, Meta supports laws that empower parents to oversee and approve their teens' app downloads. The company backs the US Take It Down Act and is working with lawmakers to facilitate its implementation. These efforts are part of Meta's broader strategy to address the harmful use of generative AI on its platforms and across the wider digital landscape.

The spread of AI nudify apps is part of a larger trend in which social media companies struggle to balance innovation and user safety. As generative AI becomes more accessible, it poses significant risks, particularly to women and minors. Meta's legal and technological responses are steps toward mitigating those risks, but the challenge remains complex given the evolving nature of AI and the creativity of bad actors.

Industry insiders commend Meta's proactive approach but acknowledge that the battle against AI misuse is ongoing. Dr. Sarah Thompson, a digital ethics expert at Stanford University, noted, "While Meta's efforts are commendable, the issue of AI-generated explicit content is a global problem requiring collaborative solutions from multiple stakeholders, including other tech companies, governments, and civil society organizations."
She added that robust regulatory frameworks and continuous improvements in AI detection technology are crucial.

Meta, founded in 2004 as Facebook, is a leading company in the social media sector. Its platforms have billions of users worldwide, making its policies and actions highly influential. The company is known for its commitment to innovation, including recent ventures into virtual reality and AI, but these advances also bring new ethical and safety challenges. The current lawsuit and technological measures are part of Meta's broader effort to maintain a safe and responsible digital environment.