YouTube brings back human moderators as AI systems over-censor content

YouTube says it is bringing back the human moderators who were "put offline" during the pandemic, after the company's AI filters failed to match their accuracy.
 
Back in March, YouTube said it would rely more on machine learning systems to flag and remove content that violated its policies on things like hate speech and misinformation. But this week, YouTube told the Financial Times that the greater reliance on AI moderation had led to a significant rise in video removals, including incorrect takedowns.
 
Approximately 11 million videos were removed from YouTube between April and June, says the FT, or around double the usual rate. About 320,000 of these takedowns were appealed, and half of the appealed videos were reinstated. Again, the FT says this is roughly double the usual figure: a sign that the AI systems were overzealous in their attempts to spot harmful content.
 
As YouTube's chief product officer, Neal Mohan, told the FT: "One of the decisions we made [at the beginning of the pandemic] when it came to machines that couldn't be as reliable as humans, we were going to err on the side of making sure that our users were safe, even though that could have resulted in [a] slightly higher number of videos coming down."
 
This is a remarkable admission of failure. All the major social networks, from Twitter to Facebook to YouTube, have come under increasing pressure to deal with the spread of abusive and misleading material on their sites. And they have all said that algorithmic and automated filters can help them cope with the enormous scale of their platforms.
 
Time and again, however, experts in AI and moderation have cast doubt on these claims. Judging whether a video about, say, conspiracy theories contains subtle nods to racist ideas can be a challenge even for humans, they say, and machines lack our ability to grasp the exact cultural context and nuance of such statements.
 
Automated systems can spot the most obvious offenders, which is undoubtedly useful, but humans are still needed for the finer judgment calls.
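To make that division of labour concrete, here is a minimal sketch of one common way such a triage pipeline can be structured, assuming a classifier that outputs a confidence score: high-confidence violations are removed automatically, borderline cases are queued for human review, and everything else is left alone. The thresholds and the classify stub are hypothetical illustrations, not YouTube's actual system.

```python
# Hypothetical sketch of confidence-based moderation triage; the thresholds,
# names, and stand-in classifier below are illustrative assumptions, not
# YouTube's actual system.

AUTO_REMOVE_THRESHOLD = 0.95   # near-certain violations: machine acts alone
HUMAN_REVIEW_THRESHOLD = 0.60  # borderline cases: defer to a human moderator

def classify(video_id: str) -> float:
    """Stand-in for an ML model that returns P(video violates policy)."""
    # A real system would run the video through a trained classifier;
    # here we fake a score from a lookup table for demonstration.
    fake_scores = {"obvious_spam": 0.99, "edgy_satire": 0.72, "cat_video": 0.05}
    return fake_scores.get(video_id, 0.0)

def triage(video_id: str) -> str:
    """Decide what happens to a video based on the model's confidence."""
    score = classify(video_id)
    if score >= AUTO_REMOVE_THRESHOLD:
        return "remove"        # obvious offender: automated takedown
    if score >= HUMAN_REVIEW_THRESHOLD:
        return "human_review"  # finer judgment call: route to a person
    return "keep"              # likely fine: leave the video up

for vid in ("obvious_spam", "edgy_satire", "cat_video"):
    print(vid, "->", triage(vid))
# obvious_spam -> remove
# edgy_satire -> human_review
# cat_video -> keep
```

Lowering either threshold errs further on the side of removal, which is exactly the trade-off Mohan describes: fewer harmful videos slip through, but more legitimate ones come down.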
 
Even with the clearer-cut cases, machines can still mess up. Back in May, for example, YouTube admitted that comments containing certain phrases critical of the Chinese Communist Party (CCP) were being automatically deleted. The company later blamed the mistakes on an error in its compliance processes.
 
But as Mohan told the FT, machine learning systems certainly have their place, even if it is only to remove the most obvious offenders. "Over 50 percent of those 11 million videos were removed without a single view by an actual YouTube user, and over 80 percent were removed with fewer than 10 views," he said. "And that's the strength of the machines."
