The winning algorithm of Facebook's Deepfake Detection Challenge could spot "challenging real world examples" of deepfakes with an unimpressive average precision of 65.18%, the company announced on June 12. In fact, none of the participants in the contest, which included leading experts from around the globe, could achieve an average precision of 70% on a private dataset that had not been shared with them beforehand. Deepfake videos are highly convincing AI-generated videos of real people and events that can potentially be used for disinformation campaigns.

Facebook and Microsoft announced the Deepfake Detection Challenge (DFDC) in September last year to produce technology that can detect deepfake videos; Amazon Web Services joined as a partner the following month. The competition was hosted on Kaggle, a Google subsidiary, and winners were selected using the log-loss score of their models against a private dataset.
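Log loss (binary cross-entropy) heavily penalizes predictions that are both confident and wrong, which is why it is a common ranking metric for binary detection contests. A minimal sketch of the computation, for illustration only (Kaggle's exact implementation, including its clipping constant, may differ):

```python
import math

def log_loss(y_true, y_pred, eps=1e-15):
    """Binary cross-entropy averaged over all predictions.

    y_true: 1 for a deepfake, 0 for a genuine clip.
    y_pred: the model's predicted probability that the clip is a deepfake.
    """
    total = 0.0
    for y, p in zip(y_true, y_pred):
        p = min(max(p, eps), 1 - eps)  # clip to avoid log(0)
        total += y * math.log(p) + (1 - y) * math.log(1 - p)
    return -total / len(y_true)

# A model that is mostly right and reasonably confident scores near 0;
# confidently wrong answers would drive the score up sharply.
print(round(log_loss([1, 0, 1], [0.9, 0.1, 0.8]), 4))  # 0.1446
```

Because the penalty grows without bound as a wrong prediction approaches certainty, well-calibrated models tend to hedge their probabilities rather than output hard 0/1 answers.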

How Facebook created the deepfake dataset for the competition: A little over 2,100 participants submitted more than 35,000 deepfake detection models as part of the competition, Facebook said. Their models were initially tested on a public dataset of 100,000 deepfake clips, which Facebook created using more than 3,500 consenting paid actors. However, the winners were determined by the code they submitted to a "black box environment", not by their models' detection performance on the public dataset Facebook had given them. This black-box dataset consisted of 10,000 videos that were not available to participants, and included both organic content found on the internet (deepfakes as well as benign clips) and new videos created specifically for this project.

Facebook said it focused, in particular, on ensuring diversity in gender, skin tone, ethnicity, age, and other characteristics while building the dataset. It also altered the videos using a variety of deepfake generation models and techniques such as image enhancement, and added augmentations and distractors such as blur, frame-rate modification, and overlays.

Accuracy on the unseen dataset fell significantly compared to the public dataset: The top-performing model on the public dataset achieved 82.56% average precision, which Facebook said is a common accuracy measure for computer vision tasks. However, the top-performing model on the "black box" dataset, developed by Kaggle competitor Selim Seferbekov, had an average precision of a little over 65%. Seferbekov's model had ranked fourth when tested on the public dataset. Similarly, the other winning models, which placed second through fifth against the black box environment, also ranked lower on the public leaderboard (37th, 6th, 10th, and 17th, respectively). "This outcome reinforces the importance of learning to generalize to unforeseen examples when addressing the challenges of deepfake detection," Facebook said.
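Average precision summarizes how well a model's ranked predictions place true positives ahead of negatives, by averaging the precision at each point where a true positive is retrieved. A rough illustration of one common formulation (Facebook's exact evaluation code is not public, so treat this as a sketch):

```python
def average_precision(y_true, scores):
    """Average precision over a ranked list of predictions.

    y_true: 1 for a deepfake, 0 for a genuine clip.
    scores: the model's confidence that each clip is a deepfake.
    """
    # Rank all clips by descending model confidence.
    ranked = sorted(zip(scores, y_true), key=lambda pair: -pair[0])
    n_pos = sum(y_true)
    hits, ap = 0, 0.0
    for rank, (_, label) in enumerate(ranked, start=1):
        if label:  # a true deepfake retrieved at this rank
            hits += 1
            ap += hits / rank  # precision at this recall step
    return ap / n_pos

# Perfect ranking (all deepfakes scored above all real clips) gives 1.0.
print(average_precision([1, 1, 0, 0], [0.9, 0.8, 0.3, 0.1]))  # 1.0
```

A single misranked clip pulls the score down: scoring a real clip above one of the deepfakes in the example above would drop the result to about 0.83, which is why a model can look strong on a familiar public dataset yet lose roughly 17 points of average precision on unseen black-box videos.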

Examples of clips used in the challenge. Clips 1, 4, and 6 are original, unmodified videos. Clips 2, 3, and 5 are deepfakes | Source: Facebook

Facebook is developing its own deepfake detection tech: Mike Schroepfer, Facebook's chief technology officer, said the company is currently developing its own deepfake detection technology separate from this competition, according to The Verge. "We have deepfake detection technology in production and we will be improving it based on this context," he said. Facebook said it will open-source the dataset created using the paid actors at the Conference on Computer Vision and Pattern Recognition (CVPR); it claims this will help AI researchers develop new generation and detection methods, and support other research in AI domains as well as work on deepfakes.

Earlier this year, Facebook said it would crack down on deepfake videos. Content that has been edited "in ways that aren't apparent to an average person and would likely mislead someone", and that is created by artificial intelligence or machine learning algorithms, will be removed from the platform, it said. Deepfake videos are becoming more sophisticated with time. In 2018, Indian journalist Rana Ayyub's face was superimposed on a pornographic video that was later shared thousands of times. Deepfakes of public officials have the potential to change the course of elections and adversely impact public discourse.