
Black-Market AI Training Accounts Flood Social Media as Scammers Exploit Data Labeling Boom

A growing black market is emerging around AI training accounts. Dozens of illicit listings on Facebook, WhatsApp, and Telegram offer access to verified contractor accounts for major data-labeling platforms such as Scale AI, Surge AI, Mercor, and Handshake. These accounts are essential for workers performing tasks such as evaluating AI chatbot responses, labeling images, and improving model accuracy for big tech clients.

Business Insider's investigation uncovered at least 100 Facebook groups selling or promoting access to these accounts; Meta removed about 40 of them after the story was reported and confirmed it is continuing to investigate, as the sales violate its policies on fraud and scams. The platforms themselves strictly prohibit account reselling and have implemented safeguards, but the underground trade persists.

The demand stems from the high value of work on platforms like Outlier, operated by Scale AI, and DataAnnotation.tech, run by Surge AI. These jobs are often remote, pay up to $100 an hour, and are in high demand in regions with lower pay rates. When projects end or access is restricted by geography, contractors lose income, creating a window for opportunists. Some individuals, including former contractors in Kenya, reported knowing people who bought accounts to bypass restrictions, especially since many projects are available only in countries like the U.S. or Canada. Buyers use tools such as VPNs or "shadow proxies" to mask their real location, and scammers offer guides on YouTube and Telegram to help users bypass screening tests or geo-blocks.

The trade is risky for both parties. Buyers risk losing money to fraudsters who disappear after payment or hand over fake login details. Sellers risk being caught, which could lead to account bans, tax liabilities, or being held responsible for work done under their credentials. Some sellers are even targeted by phishing in the form of fake job offers that ask for login information.

Internal documents from Scale AI show the company has been battling fraud for years. A 2023 spreadsheet revealed that 490 contractors were removed for reasons including using VPNs, holding multiple accounts, or copying and pasting content. Another document, from 2024, listed thousands of users flagged as "suspected spammers" on a project for Google. Scale AI has also blocked users from countries including Egypt, Kenya, and Pakistan from certain projects to prevent cheating.

Prolific, a UK-based data-labeling platform, said the fraud ecosystem is becoming increasingly sophisticated, resembling patterns seen in bank fraud and ticket scalping. Sara Saab, Prolific's vice president of product, called it an "accelerating arms race" between companies and fraudsters, one requiring constant innovation in detection and prevention.

Despite these efforts, the black market for AI training accounts continues to grow, fueled by the high stakes of AI development and the global demand for low-cost, high-quality data. As AI companies raise billions to scale their operations, securing authentic, reliable data remains a major concern.