HyperAI

From glitchy to lifelike: AI videos of Will Smith eating spaghetti reveal rapid progress in generative video tech

In just two and a half years, AI video generation has evolved dramatically—nowhere more evident than in the unofficial benchmark of showing Will Smith eating spaghetti. What began as a humorous litmus test for AI's realism has become a striking illustration of how far the technology has come.

The challenge originated in 2023, when a Reddit user shared a video generated by ModelScope, a text-to-video AI model. The result was jarring: Will Smith appeared as a distorted, cartoonish figure with exaggerated features, awkward movements, and a complete failure to actually eat the spaghetti. In some clips, the noodles never even moved toward his mouth. The video highlighted early AI limitations—glitches like extra fingers, misshapen limbs, and unnatural motion—that made the output feel more like a glitchy animation than a real person. Smith himself humorously acknowledged the test in February 2024, posting a TikTok of himself eating spaghetti in a similarly exaggerated, almost animated style—underscoring how far AI still had to go.

Since then, progress has been rapid. In 2024, MiniMax, a Chinese AI company, produced a more accurate version of Smith eating spaghetti. While the likeness improved, the chewing motion remained unnatural, and the noodles appeared to float in midair at the end—another telltale sign of AI's lingering physical inconsistencies. By May 2025, a user on X (formerly Twitter) had posted a clip generated with Google's Veo 3. The video looked more lifelike, but the spaghetti made an unnaturally loud, crunchy sound with every bite—highlighting how audio and visual elements still didn't align perfectly. A later version, Veo 3.1, showed further refinement, with smoother motion and more natural lighting.

But the real leap came with OpenAI's Sora, widely considered the most advanced AI video generator to date. Sora 2, launched in September 2025 alongside a TikTok-style mobile app, produced videos so realistic that they sparked immediate concern.
The app's core feature, called "cameos," allows users to generate videos of real people by uploading facial scans—raising major legal and ethical questions.

In response, rights holders have pushed back. Just before Sora 2's release, Disney, Universal, Warner Bros., and others filed a federal lawsuit against MiniMax, accusing it of copyright infringement. Meanwhile, Cameo, the personalized video platform, sued OpenAI over the use of the word "cameo" for the app's feature, arguing it caused consumer confusion and brand dilution. A federal judge temporarily blocked OpenAI from using the term.

The controversy extends beyond celebrity likenesses. Lawmakers in Washington are increasingly alarmed by AI's ability to generate videos of public figures saying things they never actually said—raising fears about misinformation and deepfake abuse.

Despite the backlash, major brands continue to embrace the technology. Coca-Cola recently confirmed it used AI—including Sora, Veo 3, and Luma AI—to help create its latest holiday advertisement, signaling that high-quality AI video is now a viable tool in mainstream marketing.

What began as a quirky test of AI's ability to simulate a simple meal has become a powerful indicator of how advanced—and complex—the technology has become.