Facebook's AI is still largely stumped by covid misinformation.

News: Facebook's latest Community Standards Enforcement Report, published today, outlines the changes it has made to its AI systems for detecting hate speech and misinformation. The tech giant reports that 88.8 percent of all the hate speech it removed this quarter was flagged by AI, up from 80.2 percent in the previous quarter. The AI can remove content automatically when the system has high confidence that it is hate speech, but most of it is still checked by a human reviewer first.
 
Behind the Scenes: The improvement is driven largely by two updates to Facebook's AI systems. First, the company is now using massive natural-language models that can better interpret the context and meaning of a post.
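
To make the idea concrete, here is a minimal sketch of how a pretrained language model might be used to triage posts, assuming Hugging Face's transformers library. The checkpoint name and the confidence threshold are placeholders for illustration, not Facebook's actual model or policy.

```python
# A minimal sketch of applying a pretrained transformer to score a post for
# hate speech. "some-org/hate-speech-classifier" is a placeholder checkpoint,
# not Facebook's model; any fine-tuned text classifier would slot in the same way.
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="some-org/hate-speech-classifier",  # placeholder checkpoint name
)

REMOVE_THRESHOLD = 0.98  # illustrative: auto-remove only at very high confidence

def triage(post: str) -> str:
    """Route a post: auto-remove, send to a human reviewer, or leave up."""
    result = classifier(post)[0]  # e.g. {"label": "hate", "score": 0.99}
    if result["label"] == "hate":
        if result["score"] >= REMOVE_THRESHOLD:
            return "auto-remove"
        return "human-review"  # most flagged content still goes to a person
    return "leave-up"
```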
 
Such models build on advances in AI research over the past two years that allow neural networks to be trained on language without human supervision, removing the bottleneck created by manual data labeling.
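
The trick behind this self-supervised training is that the labels come from the text itself: mask out some tokens and have the model predict them. The toy function below illustrates the idea with a crude whitespace tokenizer and a simplified masking rate; it is a stand-in for the masked-language-modeling objective behind models like BERT and RoBERTa, not Facebook's pipeline.

```python
# A toy illustration of self-supervised pretraining: the "labels" are just the
# original tokens, so no human annotation is needed.
import random

MASK, MASK_RATE = "[MASK]", 0.15

def make_mlm_example(sentence: str):
    tokens = sentence.split()      # crude whitespace "tokenizer"
    inputs, labels = [], []
    for tok in tokens:
        if random.random() < MASK_RATE:
            inputs.append(MASK)    # the model sees the mask...
            labels.append(tok)     # ...and must predict the original token
        else:
            inputs.append(tok)
            labels.append(None)    # unmasked positions carry no loss
    return inputs, labels

inputs, labels = make_mlm_example("drinking bleach does not cure the virus")
print(inputs)   # e.g. ['drinking', '[MASK]', 'does', 'not', 'cure', 'the', 'virus']
print(labels)   # e.g. [None, 'bleach', None, None, None, None, None]
```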
 
The second change is that Facebook's systems can now analyze content that combines images and text, such as hateful memes. AI is still limited in its ability to interpret such mixed-media content, but Facebook has also released a new data set of hateful memes and launched a competition to crowdsource better algorithms for detecting them.
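
One common way to handle mixed-media content is to encode the image and the overlaid text separately and fuse the two embeddings before classifying. The PyTorch sketch below stubs the encoders as linear layers to show the fusion idea; a real system would use pretrained vision and language backbones, and this is not Facebook's architecture.

```python
# A minimal sketch of a multimodal meme classifier: embed image and text
# separately, concatenate, and classify. Encoders are stubbed as linear layers.
import torch
import torch.nn as nn

class MemeClassifier(nn.Module):
    def __init__(self, img_dim=512, txt_dim=768, hidden=256):
        super().__init__()
        self.img_proj = nn.Linear(img_dim, hidden)  # stand-in for a vision encoder
        self.txt_proj = nn.Linear(txt_dim, hidden)  # stand-in for a text encoder
        self.head = nn.Linear(hidden * 2, 2)        # hateful vs. benign

    def forward(self, img_feats, txt_feats):
        fused = torch.cat(
            [self.img_proj(img_feats), self.txt_proj(txt_feats)], dim=-1
        )
        return self.head(torch.relu(fused))

model = MemeClassifier()
logits = model(torch.randn(1, 512), torch.randn(1, 768))  # dummy features
print(logits.softmax(dim=-1))  # probabilities for [benign, hateful]
```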
 
Covid lies: Despite these updates, however, AI has not played as large a role in handling the surge of coronavirus misinformation, such as conspiracy theories about the virus's origin and fake news about cures. Instead, Facebook has relied primarily on human reviewers at more than 60 partner fact-checking organizations. Only once a person has flagged something, such as an image with a misleading headline, do AI systems take over, scanning for identical or near-identical items and automatically adding warning labels or taking them down. The team has not yet been able to train a machine-learning model to find new instances of misinformation on its own.
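
A rough sketch of that fan-out step: embed each piece of content as a vector, then propagate the label to anything sufficiently similar to the human-flagged item. The embeddings and threshold below are illustrative assumptions; any image or text encoder could supply the vectors.

```python
# A minimal sketch of fanning out one human-flagged item to near-duplicates
# via cosine similarity of content embeddings. Threshold is illustrative.
import numpy as np

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def label_near_duplicates(flagged_vec, candidate_vecs, threshold=0.9):
    """Return indices of candidates similar enough to inherit the warning label."""
    return [
        i for i, vec in enumerate(candidate_vecs)
        if cosine(flagged_vec, vec) >= threshold
    ]

# Dummy embeddings standing in for encoder outputs:
flagged = np.random.randn(128)
candidates = [flagged + np.random.randn(128) * 0.01, np.random.randn(128)]
print(label_near_duplicates(flagged, candidates))  # likely [0]: the near-copy
```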
 
"Based on a media appeal Mike Schroepfer, Facebook CTO, said that it takes time and lots of details to create a novel classification for what it knows contents it never saw before.
 
Why this is important: The challenge illustrates the limitations of AI-based content moderation. Such systems can detect content similar to what they have seen before, but they falter when new kinds of misinformation appear. Facebook has invested heavily in recent years in designing AI systems that can adapt faster, but the problem is not the company's alone: it remains one of the big open challenges in the research field.