OpenAI Restores Legacy Models After GPT-5 Rollout Sparks User Backlash
OpenAI has reversed course after a disastrous rollout of its latest AI model, GPT-5, which sparked widespread backlash from its most loyal users. The launch, intended as a major milestone, instead forced the company to backtrack within days.

On August 7, OpenAI introduced GPT-5 as a "unified system" that automatically routes queries to the most appropriate model, eliminating the ability to choose among older models such as GPT-4o. For free users and paying subscribers alike, this meant losing access to the models they had come to trust. The change was especially jarring for ChatGPT Plus customers ($20/month), who had built workflows around specific models, some for creativity, others for logical reasoning or research. Removing that control disrupted their productivity and undermined their ability to verify AI outputs by cross-checking answers across models.

The backlash was immediate and fierce: users flooded social media, launched petitions, and canceled subscriptions. Online forums such as Reddit erupted in outrage, with users expressing disbelief and frustration at being forced onto what many called a "trash model."

In response, CEO Sam Altman took to X (formerly Twitter) over the weekend to acknowledge the misstep. In a series of posts, he admitted that OpenAI had "misjudged the situation" and announced several concessions. The most significant was the return of GPT-4o and other legacy models. "It's back! go to settings and pick 'show legacy models'," Altman confirmed, restoring user choice. GPT-5 remains the default, but users can now manually select older models, a move that was widely welcomed.

To further appease paying customers, Altman announced a sharp increase in usage limits for GPT-5's most advanced features: Plus users would now get 3,000 reasoning requests per week, up from previous caps, making the subscription significantly more valuable. Asked about the new limits, Altman joked, "trying 3000 per week now!", a lighthearted but telling acknowledgment of the demand.

Altman also promised greater transparency, vowing to update the user interface to clearly indicate which model is active during a conversation and to publish a detailed blog post explaining how OpenAI manages AI capacity and the tradeoffs involved. He shared data showing that usage of reasoning models among Plus users has surged from 7% to 24%, which he cited to justify the company's focus on advanced capabilities, even if the rollout itself was handled poorly.

The incident marked a rare and powerful moment of user-driven change in tech. Despite OpenAI's position as a dominant AI player, the collective outcry forced a reversal, demonstrating the strength of a loyal user base. The compromise, keeping GPT-5 as the default while restoring access to legacy models, reflects a balance between innovation and user autonomy.

Ultimately, the episode underscores a critical lesson: even the most advanced AI systems must be designed with user needs and trust in mind. OpenAI's swift pivot shows that no company, however powerful, can afford to ignore its users. The backlash wasn't just about a model; it was about control, choice, and respect. And in this case, OpenAI learned that the best AI isn't just the most powerful; it's the one that listens.