Meta Created Unauthorized Flirty Chatbots Featuring Taylor Swift and Other Celebrities
Meta has created dozens of flirty social media chatbots featuring the names and likenesses of high-profile celebrities, including Taylor Swift, Scarlett Johansson, Anne Hathaway, and Selena Gomez, without their consent, according to an investigation by Reuters. The chatbots, designed to mimic the personalities and speech patterns of the stars, were deployed across Meta's platforms, including Instagram and WhatsApp, and engaged users in flirtatious and suggestive conversations.

The project, which appears to have been developed internally by Meta's AI research and product teams, relied on publicly available data, including social media posts, interviews, and public appearances, to train the chatbots. The company did not seek permission from the celebrities or their representatives before launching the bots, raising significant legal and ethical concerns about the use of personal identity in AI systems.

Sources familiar with the matter said the initiative was part of a broader effort by Meta to test and refine its generative AI capabilities, particularly in creating engaging, human-like interactions for its social platforms. The use of real celebrities' identities, especially in a flirtatious context, has drawn criticism from privacy advocates and legal experts, who note that it may violate rights of publicity, which protect individuals from the commercial exploitation of their name, image, or likeness without consent.

While some of the chatbots were reportedly deactivated after internal review, others remained active for weeks, prompting user complaints and media scrutiny. Taylor Swift in particular has been a frequent subject of AI-generated content, with multiple impersonation bots circulating online. Her team has previously spoken out against unauthorized AI use, and this latest development adds to growing concerns about the unregulated use of celebrity likenesses in AI applications.

Meta has not issued a public statement addressing the specific allegations. The company has previously emphasized its commitment to responsible AI development and has introduced policies aimed at preventing the creation of deceptive or harmful AI content, but the Reuters investigation suggests a gap between stated policy and actual practice.

As AI-generated personas become more sophisticated and widespread, the case highlights the urgent need for clearer legal frameworks and corporate accountability around digital identity, consent, and the ethical boundaries of AI innovation.