OpenAI’s Sora Sparks Alarm: AI Video Tool Raises Concerns Over Attention, Ethics, and the Future of Digital Reality
Your Life Has Just Changed. For the Worse.

We were promised AI superintelligence: revolutionary tools to solve humanity's greatest challenges, from curing diseases to combating climate change. Instead, we're getting something far more insidious: an attention-sapping, dopamine-driven, AI-powered social media app designed to keep you scrolling, clicking, and losing your focus.

The tool is undeniably impressive, so much so that it's terrifying. It leaves you speechless, not with awe at its potential, but with dread at what it might become. OpenAI, the company that once positioned itself as a force for good, now appears to be stepping into the realm of digital overstimulation, exactly the kind of environment that erodes our ability to think deeply, focus, or even speak coherently.

This isn't a joke. It's the beginning of what I'm calling the AI enshittification era: the moment AI tools stop being helpful and start being harmful, not by accident, but by design. And today, it feels like we've crossed that line.

You might ask: why should I care? Because this isn't just about a new app. It's about how AI is being weaponized, not against enemies, but against our minds. It's about how the very tools meant to elevate us are being used to exploit our psychology, to hijack our attention, and to condition us into endless consumption.

OpenAI's Sora App

Yesterday, OpenAI, the company famed for launching ChatGPT, revealed Sora, a new AI tool capable of generating highly realistic videos from text prompts. The demo was stunning: a golden retriever playing in a meadow, a futuristic cityscape at night, a child's birthday party with lifelike motion and lighting. The quality was high enough to blur the line between real and artificial.

On the surface, this sounds like progress. But the real story isn't about what Sora can do; it's about what it might become. Imagine a world where anyone can generate viral, emotionally charged, hyper-realistic videos in seconds. Where misinformation spreads faster than truth. Where children are exposed to fabricated experiences that feel as real as memory.

And now, imagine this tool embedded in social media platforms designed not to inform, but to addict. To keep you watching, reacting, sharing. To trigger dopamine hits with every scroll. That's not innovation. That's exploitation.

This isn't the fault of AI itself. AI is a tool: neutral, powerful, and indifferent to the ends it serves. The danger lies not in the technology, but in how it's used. We should not blame AI. We should blame the people who choose to use it to manipulate, distract, and degrade human attention and well-being. We've seen this before: social media platforms optimized for engagement, not truth. Now AI is being used to amplify that same cycle, but at a speed and scale we've never seen.

So what should you do? First, be aware. Recognize when a tool is designed to capture your attention, not to serve your mind. Second, question the intent behind every AI product. Who benefits? Who loses? Third, demand better. Push back against the idea that more engagement equals progress. We don't need AI to make us dumber. We need it to make us sharper, kinder, more thoughtful.

Today marks a turning point. The era of AI enshittification has begun. The question isn't whether AI will change us; it already has. The real question is: do we want to be changed for the worse? The answer lies not in fear, but in clarity. And that's something no AI can give you. Only you can choose.