Social Media

Facebook contest shows that deepfake detection is still an 'unsolved problem'

Facebook has announced the results of its first Deepfake Detection Challenge, an open competition to find algorithms that can spot AI-manipulated videos. The results, while encouraging, show that a lot of work remains to be done before automated systems can reliably detect deepfake content, with researchers describing the issue as an "unsolved problem."
 
Facebook says the competition's winning algorithm was able to spot challenging real-world examples of deepfakes with an average precision of 65.18 percent. That's not bad, but it's not the sort of hit rate you would want from an automated system.
 
Deepfakes have so far proved to be something of an overstated menace for social media. Although the technology has prompted a great deal of worry about its potential to erode trust in video evidence, deepfakes have had little political impact to date. The more immediate harm has instead been the creation of non-consensual pornography, a category of content that is easier to identify and remove from social media sites.
 
In a press release, Mike Schroepfer, Facebook's chief technology officer, said he was pleased with the results of the competition, which he said would create a benchmark for researchers and guide their future work.
 
"There was more success than I could ever have wished for, honestly the rivalry," he said.
More than 35,000 detection algorithms were proposed to the competition by some 2.114 people.
 
The entries were tested on their ability to identify deepfake videos from a dataset of around 100,000 short clips. Facebook hired more than 3,000 actors to create these clips, which captured them holding natural conversations. Some of the clips were then altered with AI by pasting other actors' faces onto the videos.
 
 
Researchers had access to this dataset to train their algorithms, and when tested against this content, they achieved accuracy rates as high as 82.56 percent.
 
However, when the same algorithms were tested against a 'black box' dataset of previously unseen footage, they performed considerably worse, with the best-performing model achieving an accuracy rate of 65.18 percent. This shows that detecting deepfakes in the wild remains a very difficult problem.
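To make those figures concrete, here is a minimal sketch of how this kind of evaluation can be scored. It is not Facebook's actual pipeline; the labels, model scores, and 0.5 threshold below are invented purely for illustration, and it simply uses scikit-learn to compute average precision and accuracy for a binary real/fake classifier on a held-out set.

```python
# Minimal sketch (not Facebook's evaluation code): scoring a deepfake
# classifier on held-out clips. Assumes each video has a ground-truth
# label (1 = fake, 0 = real) and the model outputs a "fake" probability.
from sklearn.metrics import average_precision_score, accuracy_score

# Hypothetical per-video outputs for a handful of held-out clips.
y_true = [1, 0, 1, 1, 0, 0, 1, 0]                          # ground truth
y_prob = [0.91, 0.22, 0.64, 0.48, 0.35, 0.07, 0.88, 0.55]  # model scores

# Average precision summarizes the precision-recall curve,
# the kind of figure quoted for the winning model.
ap = average_precision_score(y_true, y_prob)

# Plain accuracy at an (assumed) 0.5 decision threshold.
y_pred = [1 if p >= 0.5 else 0 for p in y_prob]
acc = accuracy_score(y_true, y_pred)

print(f"average precision: {ap:.4f}, accuracy: {acc:.4f}")
```

The gap between a model's score on data it was trained against and its score on unseen, black-box footage is what the 82.56 versus 65.18 percent figures above illustrate.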
 
Schroepfer said Facebook is developing its own deepfake detection technology separately from the competition. "We have deepfake detection technology in production, and we will be improving it based on this contest," he said.
 
The company announced it would ban deepfakes earlier this year, but critics pointed out that the far greater threat to disinformation comes from so-called "shallowfakes": videos edited using conventional means.
 
The winning algorithms will be released as open-source code to help other researchers, but Facebook said it would keep its own detection technology secret so that it cannot be reverse-engineered.
 
Schroepfer added that while deepfakes are "currently not a big issue" for Facebook, the company wants to have the tools ready to find this content in the future, just in case.
 
Some experts have warned that the upcoming 2020 elections could be a prime moment for deepfakes to make a serious impact.
 
"I want to be ready and not be taken flat, the lesson I learned the hard way in the last few years," Schroepfer said. "I want to be prepared for several bad things that don't happen in the opposite direction."

 





