HyperAI


TikTok Fires Hundreds of Moderators, Shifts to AI-Driven Content Safety Amid Regulatory Scrutiny

TikTok has begun a significant shift in its content moderation strategy, laying off hundreds of moderators in the UK and Asia as part of a broader move toward AI-driven operations. The layoffs, which affect a portion of its 2,500-person UK workforce, were confirmed by the Wall Street Journal, though the company did not disclose the exact number of employees let go. TikTok stated that affected workers will be given priority in future hiring if they meet unspecified criteria.

The decision drew immediate backlash from labor unions and online safety advocates. John Chadfield, national tech officer for the Communications Workers Union (CWU), criticized the company, saying, “TikTok is putting corporate greed over the safety of workers and the public.” He added that employees have long warned about the real-world consequences of replacing human moderators with unproven AI systems, which may lack the nuance and judgment required to handle complex content.

In response, TikTok defended the transition, stating that it has been developing and deploying AI tools for several years as part of a broader reorganization of its global Trust and Safety operations. The company said the shift aims to streamline its operations by consolidating teams into fewer locations worldwide and improving efficiency. It emphasized that AI is being used to enhance both user safety and the well-being of human moderators by reducing their exposure to harmful content. TikTok claims its AI systems already remove about 85% of non-compliant content automatically, though it did not provide evidence to support this figure.

The company also pointed to new regulatory pressures in the UK, where the Online Safety Act, which took effect in July, imposes penalties of up to 10% of global revenue for non-compliance. The UK’s Information Commissioner’s Office has already launched a probe into how TikTok collects and uses data from users aged 13 to 17.
Despite these challenges, TikTok maintains that AI is essential to meeting the demands of evolving safety standards. The company insists its systems are designed to maximize both speed and accuracy in content review. However, critics remain concerned that AI may not yet be capable of handling the full complexity of harmful or borderline content, particularly in sensitive contexts involving minors or vulnerable users.
