Social Media

YouTube says China-related comment deletions were not caused by an outside party

Last week, YouTube acknowledged it had been unintentionally deleting comments containing certain phrases critical of the Chinese Communist Party (CCP), fueling intense debate over its moderation policies. Today the company told The Verge that the problem was not caused by outside interference, an indication that the mistakes were the company's own.
 
The words "communist bandit" and "50-cent party" were the automatic deletions that resulted, a slang term that internet users paid in defense of the CCP. Some speculated that an external group, possibly linked with the CCP, manipulated YouTube's automated filters by reported these phrases repeatedly and made it offensive to the algorithm.
 
Speaking to The Verge, YouTube spokesperson Alex Joseph denied that this was the case and said that, contrary to popular belief, YouTube rarely removes comments on the basis of user flagging alone.
"This was not a result of external intervention and we only remove content when our enforcement system determines it violates our Community Guidelines, not just because users are flagging it," Joseph said. "This has been a mistake with our enforcement systems and we have made a fix."
 
The incident is another example of how major internet companies cannot avoid being drawn into global controversies over censorship and freedom of expression.
 
How did YouTube end up as the de facto enforcer of Chinese internet censorship rules?
 
Although today's statement from YouTube offers more information than before, critical questions remain unanswered. How exactly did this error enter the system? And why did it go unnoticed for months? These are not trivial issues: YouTube's failure to provide a proper explanation has allowed politicians to accuse the company of bias toward the CCP.
 
This week, Senator Josh Hawley (R-MO) wrote to Google CEO Sundar Pichai asking for answers "in the face of troubling reports that your company has resumed its long-lasting pattern of censorial actions on behalf of China's Communist Party."
 
THE MISSING CONTEXT
The main question is how exactly these phrases, with their clear anti-communist meaning, came to be classified as offensive.
 
YouTube explains that its comment filters operate as a three-part process that broadly matches other moderation systems in the industry. First, users flag content they find offensive or otherwise objectionable. These reports are then sent to human reviewers, who accept or reject them. Finally, those decisions are fed into a machine-learning algorithm, which uses them to filter comments automatically.
 
Crucially, says YouTube, this system means content is always judged in its original context. No term is treated as offensive every time it appears, and there is no definitive "ban list" of forbidden phrases.
 
The goal is to approximate humans' ability to parse language, reading for tone, intent, and context.
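
To make the mechanics concrete, here is a minimal, hypothetical sketch of that three-stage pipeline in Python. The function names, data shapes, and phrase-counting "model" are illustrative assumptions, not YouTube's actual system, which the company says weighs context rather than keying on fixed phrases.

```python
from collections import Counter

def collect_flags(comments, flagged_ids):
    """Stage 1: users flag comments they consider offensive."""
    return [c for c in comments if c["id"] in flagged_ids]

def human_review(flagged_comments, reviewer):
    """Stage 2: human reviewers accept or reject each flag."""
    return [c for c in flagged_comments if reviewer(c)]

def train_filter(confirmed_comments, threshold=2):
    """Stage 3: confirmed removals become training signal for an automatic
    filter. Here the 'model' is just a phrase counter; YouTube says its
    real system weighs context instead."""
    counts = Counter()
    for comment in confirmed_comments:
        for phrase in comment["phrases"]:
            counts[phrase] += 1
    learned = {p for p, n in counts.items() if n >= threshold}

    def auto_filter(comment):
        # Auto-remove any new comment containing a learned phrase.
        return any(p in learned for p in comment["phrases"])

    return auto_filter

# Toy walk-through: two confirmed flags are enough, in this sketch,
# to teach the filter to auto-remove a phrase going forward.
comments = [
    {"id": 1, "phrases": ["communist bandit"]},
    {"id": 2, "phrases": ["communist bandit"]},
    {"id": 3, "phrases": ["nice video"]},
]
flagged = collect_flags(comments, flagged_ids={1, 2})
confirmed = human_review(flagged, reviewer=lambda c: True)  # reviewers wave everything through
auto_filter = train_filter(confirmed)
print(auto_filter({"id": 4, "phrases": ["communist bandit"]}))  # True
```

In a pipeline shaped like this, a run of bad review decisions, or a flagging campaign that slips past reviewers, would quietly teach the automatic filter to remove a benign phrase, which is roughly the failure mode under discussion here.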
 
In this particular case, YouTube says, the context of these terms was misread. That is fine as far as it goes, but it remains unclear whether human reviewers or machine filters were responsible. YouTube says that is not a question it can answer yet, although it is presumably trying to find out.
 
Whether humans were responsible for this mistake is an important question, because if they were, it would suggest that moderators can be fooled by users flagging innocuous content as inflammatory, despite YouTube's claim that this is not how removals happen.
If enough CCP-friendly users told YouTube that "communist bandit" was irredeemably offensive, for instance, how would moderators react? What cultural knowledge would they need to evaluate the claim? Would they be expected to weigh the broader political picture? What are they instructed to do? YouTube does not censor "libtard," for example, even though some people in the US might consider it an offensive political insult.
 
What is particularly strange is that one of the phrases that triggered the deletions, "wu mao," is not even censored in China. It is a derisive term for online users paid to defend CCP policies. Charlie Smith of GreatFire, a nonprofit that monitors Chinese censorship, told The Verge that the phrase is not actually that offensive. "Wu mao don't usually need protection, nor do they need defending," Smith says. "They're wu mao, and they're all cutting, pasting, and scrolling. No one pays any heed to them."
 
Again, we simply don't know what happened, but Google's explanation doesn't seem to completely rule out the possibility that some kind of coordinated campaign played a part.
 
If nothing else, it is further proof that moderating the internet is an unrelentingly difficult task that cannot be solved to everyone's satisfaction.
 
CENSORSHIP AND TRANSPARENCY
This incident may soon be forgotten, but it highlights a larger problem in how tech companies communicate with the public about the ways their platforms suppress or promote content.
 
Big Tech has generally avoided being too explicit about how these systems work, a reticence that has fueled political accusations, particularly from the right, about censorship, bias, and shadow banning.
 
This silence is often an intentional strategy, says Sarah T. Roberts, a professor at UCLA who researches content moderation and social media. Tech firms obscure how such systems operate, she notes, because they are usually more hastily assembled than the companies would like to admit. "I think they would like us all to imagine that these processes are seamless and faultless," Roberts says. But when companies don't explain themselves, she says, people supply their own interpretations.
 
And when these systems are scrutinized, they can reveal anything from biased algorithms to large-scale human misery.
 
The most obvious example in recent years has been the disclosures about Facebook's human moderators, who are paid to sift through the web's most horrific and distressing material without adequate support. Facebook's lack of transparency eventually led to public outrage and government fines.
 
The argument for transparency is not only a positive one; a lack of it leads to even bigger problems in the long run. Carwyn Morris, a researcher at the London School of Economics who studies China and digital activism, says that a lack of transparency creates a general rot on platforms:
 
It erodes users' trust, allows errors to accumulate, and makes it harder to call out actual censorship.
 
"To avoid an authoritarian crackdown, I believe content moderation is essential, but it should be transparent," says Morris, "or to make system failures, like this case." He suggests that YouTube could start by simply informing users when their comments are removed because it violates its terms – something the company currently does only for videos. If the company had already done so, this particular mistake could have been noticed soon to save a great deal of trouble.
 
 

 





