Science Fiction and Comic-Con Push Back Against AI: Creators Ban AI-Generated Content in Awards and Art Shows
In recent months, prominent voices in science fiction and popular culture have taken strong stances against the use of generative AI in creative work. Two major developments, at the Science Fiction and Fantasy Writers Association (SFWA) and at San Diego Comic-Con, highlight growing resistance within creative communities, even as AI tools become increasingly embedded in everyday technology.

In December, SFWA announced updates to its Nebula Awards rules. The initial version stated that works created entirely by large language models (LLMs) would be ineligible for consideration, and required authors who used LLMs at any stage of writing to disclose that use, leaving voters to decide whether it affected their judgment. This approach quickly drew backlash from members who argued it created ambiguity and risked undermining the integrity of the awards. The SFWA Board of Directors issued a public apology, acknowledging that the wording had caused distress and distrust, and admitted the original policy was flawed. The rules were revised again and now state clearly that any work created wholly or partially with generative LLMs is not eligible for the Nebula Awards; if LLMs are used at any point during creation, the work will be disqualified.

In a follow-up post, writer Jason Sanford, who covers the genre in his Genre Grapevine newsletter, expressed relief that SFWA had listened to its members. He reiterated his personal refusal to use generative AI in his fiction, citing both ethical concerns over the theft of creative work and the tools' lack of true creativity. He also warned that the definition of AI use must be drawn carefully, since many modern tools, including word processors, search engines, and research platforms, already incorporate LLMs. "If you use any online search engine or computer product these days, it's likely you're using something powered by or connected with an LLM," he wrote. "We must be careful not to penalize writers who use tools with AI components simply because of corporate overreach."

Meanwhile, San Diego Comic-Con faced a similar controversy this month after artists noticed that the convention's art show rules permitted AI-generated art to be displayed, though not sold. Following widespread criticism from artists, the rules were quietly revised to state: "Material created by Artificial Intelligence (AI), either partially or wholly, is not allowed in the art show." Comic-Con did not issue a public apology, but some artists shared internal messages from Glen Wooten, head of the art show, who explained that the original rule had been in place for years and had acted as a deterrent: no AI art had been submitted. He acknowledged, however, that the issue was becoming more pressing, saying, "The issue is becoming more of a problem, so more strident language is necessary: NO! Plain and simple."

These moves reflect a broader trend. As major corporations push AI tools into mainstream products, creative communities are pushing back to protect artistic integrity and fair labor practices. Other platforms, such as Bandcamp, have also banned generative AI content. With more organizations expected to follow suit, debates over AI's role in creativity are likely to intensify in the coming year.
