HyperAI

YouTube Replaces Human Reviewers With AI, Then Decides to Hire Them Back

5 years ago
神经小兮

YouTube, Facebook, Twitter, and other social platforms have invested heavily in content moderation, in particular using machine learning to gradually replace human reviewers. However, YouTube recently decided to pause this shift and rehire the human reviewers it laid off at the beginning of the year.

At the beginning of this year, YouTube laid off much of its human content-review team and handed moderation over entirely to AI.

As a result, in the second quarter of this year (April to June), the initial review of all videos on YouTube was handled by AI. This was the first time in YouTube's history that no human reviewer took part in initial content review for an entire quarter.

This bold step had significant consequences: YouTube recently halted the initiative and rehired many of the human reviewers it had laid off at the beginning of the year.

AI replaces human reviewers: a record number of videos removed

One important reason the AI-only review was halted is that AI applies stricter rules than human reviewers do, and often flags videos that do not actually violate policy (false positives).

According to Google's regular transparency report, during the period from April to June 2020 when AI took over content review, a total of 11 million videos were flagged as violating policy by AI and removed from the platform. These videos came mainly from the United States, India, and Brazil.

By contrast, in the first quarter of this year, only 6.6 million videos were identified as violating policy and taken down. After the switch from human to AI review, the number of videos removed rose by roughly two-thirds.

Videos removed through automated flagging accounted for 95% of all removals (data source: Google's official transparency report)

Among the 11 million videos removed by AI, most involved pornography, misinformation, or terrorism, but there were also many false positives.

Of these, appeals were filed against 320,000 videos that had been flagged and removed by AI, and nearly half of them were reinstated after human re-review. In the past, only about 25% of appealed videos were put back online.
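For readers who want to sanity-check the statistics above, here is a minimal Python sketch. All numbers are the ones cited in this article; the variable names and the 40%/50% reading of "nearly half" are my own:

```python
# Quarter-over-quarter change in removals (figures from Google's
# transparency report, as cited in the article).
q1_removed = 6_600_000   # videos removed in Q1 2020 (mostly human review)
q2_removed = 11_000_000  # videos removed in Q2 2020 (AI-only review)

increase = q2_removed / q1_removed - 1
print(f"Increase in removals: {increase:.0%}")  # roughly 67%

# Appeal outcomes: "nearly half" of appealed AI removals were reinstated,
# versus a historical reinstatement rate of about 25%.
appealed = 320_000
reinstated_rate_ai = 0.5      # approximate, per "nearly half"
reinstated_rate_past = 0.25   # historical baseline cited in the article

print(f"Videos reinstated on appeal: ~{appealed * reinstated_rate_ai:,.0f}")
print(f"Appeal success rate: {reinstated_rate_ai:.0%} now "
      f"vs {reinstated_rate_past:.0%} historically")
```

The takeaway from the arithmetic: removals grew by about two-thirds, not double, and the appeal success rate roughly doubled, which is what suggests the AI was over-flagging.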

YouTube's moderation AI, left to its own devices, put creators on edge and forced them to spend more time self-checking their content. For a time, complaints flooded in, and the large volume of appeals also increased the re-review workload for the YouTube team.

Human reviewers return to the job: the time is not yet ripe, and AI still has work to do

Before YouTube cut its human review team at the beginning of the year, it maintained as many as 10,000 full-time or outsourced review positions.

These human reviewers are located in Google's global offices or in the offices of outsourced vendors.

A 10,000-person team: huge operating costs

The requirements for human reviewers are not low. A reviewer responsible for Middle East content, for example, must be fluent in Arabic and its regional dialects in order to review videos uploaded from the region and promptly flag terrorist, violent, or inciting content.

The job reportedly involves reviewing more than 500 hours of video per week. Entry-level pay can reach $18.50 per hour, for an annual salary of around $37,000.
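As a rough cross-check of the pay figures above, the hourly rate and annual salary are consistent under common US full-time assumptions (a 40-hour week and 50 working weeks a year, which are my assumptions, not the article's):

```python
# Back-of-the-envelope check: does $18.50/hour line up with ~$37,000/year?
# 40 hours/week and 50 weeks/year are illustrative assumptions.
hourly_rate = 18.5
hours_per_week = 40
weeks_per_year = 50

annual_salary = hourly_rate * hours_per_week * weeks_per_year
print(f"Estimated annual salary: ${annual_salary:,.0f}")  # $37,000
```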

Senior content reviewers need some legal knowledge as well. In return, they receive the benefits of a full-time Google employee and an annual salary approaching $100,000.

From young outsourced workers in the Philippines to risk-control operations staff in the Bay Area, all play an important role in content review

Behind the scenes, Google bears considerable costs for all this, which is part of why YouTube's human review positions were cut during this year's pandemic.

YouTube acknowledged this when announcing the rehiring: the pandemic made human review more costly and difficult, and early on, forced to choose between insufficient human intervention and excessive AI intervention, the company opted for AI.

Human review: a marginal profession that is physically and mentally exhausting

Content moderation is not an easy job. Reviewers not only have fixed weekly targets, but must also face images of child pornography, violence, and horror for long stretches, which is often physically and mentally draining.

Mainstream American media such as The Washington Post and Fortune have reported in depth on the profession of content moderation. Without exception, every moderator interviewed said the job had a strong negative impact on their physical and mental health.

Fortune illustrated the psychological trauma of the job with an image of a moderator wearing a gas mask

Media scrutiny has in turn pushed technology companies to make a series of improvements to these positions, such as capping the weekly hours spent viewing disturbing videos and providing regular psychological counseling.

Human or AI: don't rush to a conclusion

The cost of human review, the psychological harm it inflicts, and advances in machine learning have all driven research into, and rapid adoption of, automated content moderation across platforms in recent years.

Although AI makes occasional mistakes, it has demonstrated greater efficiency than human review in catching violating content.

YouTube Chief Product Officer Neal Mohan cited a set of figures when discussing the return of human review with the Financial Times: "Of the 11 million videos flagged for removal by AI, more than 50% were removed before any user had viewed them, and more than 80% were removed with fewer than 10 views."

As a pioneer in replacing human labor with AI at scale, YouTube was bound to make mistakes, but in doing so it has also pointed other companies, and society at large, toward a clearer model of human-AI collaboration.

Such experiments are troublesome but necessary. They require companies with the spirit to go first, the courage to weather public criticism, and the determination to clean up quickly afterward. Likewise, open and transparent data will give the media and society more consensus and confidence in accepting an uncertain future.

Who knows whether the author of this article is an AI?

References:

Fortune: "Why Thousands of Human Moderators Won't Fix Toxic Content on Social Media"

The Verge: "YouTube brings back more human moderators after AI systems over-censor", "The Terror Queue"

The Washington Post: "Content moderators at YouTube, Facebook and Twitter see the worst of the web — and suffer silently"

Wired: "As humans go home, Facebook and YouTube face a coronavirus crisis"

-- over --