HyperAI

Proton’s AI Spam Scandal Exposes Industry-Wide Consent Crisis Amid Surge in Unwanted AI Promotions

On January 14, 2026, Proton sent an email titled “Introducing Projects - Try Lumo’s powerful new feature now” from @lumo.proton.me, promoting its AI product Lumo. The email arrived despite the recipient having explicitly opted out of all Lumo-related communications: the “Lumo product updates” toggle in their account settings was already unchecked, which the user had documented with screenshots.

The user contacted Proton Support expecting a straightforward resolution. Instead, they were directed to the same opt-out toggle they had already disabled. When Support asked for proof, the user provided screenshots of the disabled setting along with the email’s timestamp. After multiple delays and requests for further evidence, Proton Support finally offered a confusing explanation: the email was not a Lumo product update but part of the “Proton for Business newsletter,” even though the email’s subject, sender, and content all clearly referenced Lumo.

This contradiction raises serious concerns about consent and transparency. The user had clearly opted out of Lumo communications, yet was still targeted with promotional content under a different label. That undermines the very concept of user control over data and messaging.

The incident reflects a broader problem in the AI industry: a systemic disregard for user consent. From AI training data scraped without permission to AI tools pushed into services without opt-in, the pattern is consistent. Proton’s actions, despite its reputation for privacy, show that even companies built on security principles are not immune to this trend. Mozilla and Firefox have similarly faced criticism for pushing AI features without clear opt-outs.

The situation worsened just a day later, when the same user received a GitHub email titled “Build AI agents with the new GitHub Copilot SDK,” despite having disabled all GitHub email notifications, including those for Copilot, years earlier. The unsubscribe link revealed that Copilot emails were still enabled, and no setting within the account allowed users to opt out. Microsoft’s handling of this is a stark example of how AI-driven services are being rolled out without regard for user choice.

These cases are not isolated. They represent a growing pattern: companies assume consent by default, ignore opt-out mechanisms, and reframe violations as technicalities. The result is a digital environment where “no” is no longer respected, and users are bombarded with AI promotions they never agreed to, often under misleading labels or through broken systems.

The core problem isn’t just spam; it’s the erosion of user autonomy. When companies treat opt-out as optional and reclassify unwanted messages as something else entirely, they undermine trust and violate the spirit, if not the letter, of data protection laws like GDPR. The AI industry’s failure to respect consent is not a bug, it’s a feature. And unless users push back, it will only get worse.
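The two failure modes described above — treating the absence of an opt-out record as consent, and reclassifying a message under a different label to dodge the user’s filter — can be contrasted with a minimal sketch of a consent check done correctly. All names and structures here are hypothetical illustrations, not Proton’s or GitHub’s actual systems: consent is explicit opt-in, a missing preference means “do not send,” and a message is classified by every topic it actually covers, not by the mailing list it was queued under.

```python
# Hypothetical consent gate for an outbound promotional email pipeline.
# Assumptions (not from any real system): users carry a set of topics they
# have *explicitly* opted in to; an email carries the set of topics its
# content actually touches.
from dataclasses import dataclass, field

@dataclass
class Preferences:
    # Empty by default: no record means no consent.
    opted_in_topics: set = field(default_factory=set)

def may_send(prefs: Preferences, email_topics: set) -> bool:
    """Allow sending only if the user opted in to every topic the email covers.

    Relabeling an AI-product promotion as a "business newsletter" does not
    bypass the check, because the gate runs on all topics the message
    actually references, not on the label it was sent under.
    """
    return bool(email_topics) and email_topics <= prefs.opted_in_topics

# A user who never opted in to AI-product updates:
user = Preferences(opted_in_topics={"security-bulletins"})

# A promotion for an AI product, even if queued under the business newsletter:
promo = {"business-newsletter", "ai-product-updates"}

print(may_send(user, promo))                   # False: must not be sent
print(may_send(user, {"security-bulletins"}))  # True: explicit opt-in exists
```

The key design choice is the subset test: an email touching any topic the user has not opted in to is blocked, which is the opposite of the consent-by-default behavior the article describes.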
