HyperAI
Back to Headlines

Researchers Unveil Method to Bypass AI Art Protection Tools, Highlighting Ongoing Risks for Creators

19 hours ago

A team of international researchers has exposed significant vulnerabilities in Glaze and NightShade, two of the most widely used tools by digital artists to protect their work from unauthorized use by generative AI models. Together, these tools have been downloaded nearly nine million times, primarily by creators seeking to prevent AI models from copying their unique styles without permission. Glaze and NightShade use different techniques to add subtle, invisible distortions—known as poisoning perturbations—to digital images. These distortions are intended to confuse AI models during training, making it difficult for them to accurately replicate an artist's style. Glaze employs a passive approach by hindering the extraction of key stylistic features, while NightShade takes a more aggressive stance by corrupting the learning process, causing the AI model to incorrectly associate the artist's style with unrelated concepts. However, the research team from the University of Cambridge, Technical University Darmstadt, and the University of Texas at San Antonio has developed a method called LightShed that can detect, reverse-engineer, and remove these distortions. LightShed operates through a three-step process: it identifies whether an image has been altered with known poisoning techniques, learns the characteristics of the perturbations using publicly available poisoned examples, and then eliminates the poison to restore the image to its original state. Experimental evaluations showed that LightShed could detect NightShade-protected images with 99.98% accuracy and successfully remove the embedded protections, rendering the images usable for AI training once again. This finding underscores the continued vulnerability of digital artists despite the use of protection tools, raising concerns about the effectiveness of current methods. First author Hanna Foerster, from the University of Cambridge’s Department of Computer Science and Technology, stated, "This shows that even when using tools like NightShade, artists are still at risk of their work being used for training AI models without their consent." Co-author Professor Ahmad-Reza Sadeghi from TU Darmstadt emphasized that the development of LightShed is meant to spur advancements in art protection, rather than criticize existing tools. "Our goal is to collaborate with other scientists in this field and support the artistic community in developing tools that can withstand advanced adversaries," he said. The issue of AI and digital creativity is complex and rapidly evolving. In March, OpenAI introduced a ChatGPT image model capable of generating artwork in the style of Studio Ghibli, sparking debates about image copyright. Legal experts pointed out that copyright law typically protects specific expressions, not general artistic styles, limiting the avenues for artists to challenge unauthorized usage. OpenAI responded to these concerns by implementing prompt safeguards to block certain user requests aimed at generating images in the styles of living artists. However, ongoing legal battles highlight the pervasive nature of the problem. For instance, Getty Images is suing Stability AI, claiming the company trained its image generation model on Getty’s vast archive of copyrighted photos. Similarly, Disney and Universal have filed a lawsuit against Midjourney, accusing the firm of creating a "bottomless pit of plagiarism" with its image generator. Foerster and her team hope their research will contribute to a more robust framework for artist protection. "We must let creatives know that they are still at risk and collaborate with others to develop better art protection tools in the future," she concluded. The paper, titled 'LightShed: Defeating Perturbation-based Image Copyright Protections,' will be presented at the 34th USENIX Security Symposium in August. Industry insiders and legal experts agree that the current state of AI art protection is far from ideal. They argue that while tools like Glaze and NightShade represent a step in the right direction, they are not sufficient to address the growing threat to artists’ intellectual property. There is a pressing need for collaborative efforts among technologists, legal scholars, and artists to develop more sophisticated and effective methods to protect creative works. The stakes are high, as AI continues to advance and the legal landscape remains unclear.

Related Links