Google's AI-generated headlines are misleading users with false, clickbait content, replacing real news titles without fact-checking and sparking backlash from publishers and readers alike.
In early December, The Verge revealed that Google had begun replacing real news headlines with AI-generated clickbait in its Google Discover feed. At the time, it seemed Google might be backing away from the experiment. But now, the company insists the feature is here to stay, calling it a success that "performs well for user satisfaction."

I continue to see misleading and often outright false headlines every time I check my phone. These AI-generated headlines are like a bookstore replacing book covers with fake ones, only in this case the "bookstore" is the news feed on Samsung Galaxy and Google Pixel phones, and the "cover" is an AI-generated lie instead of the real story.

For example, Google recently presented a headline claiming "US reverses foreign drone ban," linking to a PCMag article that explicitly states this is false. PCMag's own writer, Jim Fisher, called the AI's version "icky" and urged readers to read the original story rather than trust what Google is feeding them.

Google claims it isn't rewriting headlines; it says these are "trending topics." But each one appears as a news story, uses the original publication's image, and links directly to the article, all without proper fact-checking. The result is a deceptive experience: users believe they're seeing the real headline when they're actually being served an AI summary that's often inaccurate or misleading.

There has been some improvement since the worst of the rollout. Google has moved away from absurdly short, nonsensical headlines like "Microsoft developers using AI" or "AI tag debate heats." Headlines are longer now, though still often inane, such as "Fares: Need AAA & AA Games" or "Dispatch sold millions; few avoided romance."

Worse, Google still fails to distinguish between real news and hype, and frequently confuses one story with another. On December 26, Google claimed "Steam Machine price & HDMI details emerge," a claim that wasn't true. On January 11, it announced "ASUS ROG Ally X arrives," but that device had already launched in 2024, and the newer Xbox Ally had arrived months earlier. On January 20, it proclaimed "Glasses-free 3D tech wows," linking to a TechRadar story about Visual Semiconductor, while the actual innovation came from a different company, Leia. Another headline falsely linked a GPU maker's comment on RAM shortages to a Digitimes article about a RAM manufacturer.

These errors are not isolated. I've seen Google mix up stories, misrepresent facts, and serve headlines that plainly don't match what the original articles say. The worst part? Google leaves human-generated clickbait untouched, like a Screen Rant headline claiming "Star Wars Outlaws Free Download Available For Less Than 24 Hours." In reality, only one copy was given away, to a UK resident. Google's AI didn't correct or question it; it served it as is.

Google spokesperson Jennifer Kutz said the feature is intended to help users discover news, but offered no real explanation of how Google ensures accuracy. I've also seen these AI headlines appear as push notifications that lead to a Gemini chatbot summary instead of the original article.

This isn't just about bad headlines; it's about trust. Google is positioning itself as the gatekeeper of news, but it's doing so without accountability. It isn't filtering out the worst human clickbait, and it isn't fixing its own AI errors. Meanwhile, publishers like The Verge are losing control over how their work is presented, and over how readers discover it.

I urge people to go directly to the source. Don't rely on Google to tell you what's important. And if you're frustrated by this, know that Vox Media, The Verge's parent company, has filed a lawsuit against Google over its alleged ad tech monopoly.
