
Facebook, Microsoft announce challenge to detect deepfakes

Deepfake (Courtesy: Facebook)

On September 5, Facebook and Microsoft announced the Deepfake Detection Challenge (DFDC) to produce technology that can detect “deepfake” videos, that is, videos of real people and events that have been altered using AI in order to mislead.

When will the challenge be launched? In December 2019, at the Conference on Neural Information Processing Systems (NeurIPS) in Vancouver, Canada, though related events will begin in October 2019.

Who built this challenge? Facebook, Microsoft, the Partnership on AI, and academics from Cornell Tech, MIT, the University of Oxford, UC Berkeley, the University of Maryland, College Park, and the University at Albany-SUNY.

What does the challenge include? Facebook is contributing $10 million to the effort. The challenge includes:

  1. A data set of videos and deepfakes will be made available to participants, which the community can use to develop detection technology (an illustrative sketch of how such a data set might be used follows this list). Facebook has clarified that it is working with a third-party vendor to create a new data set of videos using “paid actors, with the required consent obtained”. Facebook will then use AI to create “tampered videos” from a subset of these videos. No Facebook user data will be used in this data set.
  2. Funding for research collaborations and prizes to encourage participation
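
To illustrate what “developing technology around” such a data set could look like, below is a minimal, hypothetical sketch of how a participant might fine-tune an off-the-shelf image classifier to label video frames as real or tampered. The folder layout, file names, and training settings are assumptions made for this example; the challenge’s actual data format and evaluation protocol had not been published at the time of the announcement.

```python
# Hypothetical sketch: fine-tune a pretrained image classifier to label
# extracted video frames as "real" or "fake". The frames/real and frames/fake
# folder layout is an assumption for this example, not the DFDC format.
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

# ImageFolder treats each sub-folder (real/, fake/) as a class label.
train_data = datasets.ImageFolder("frames", transform=transform)
loader = torch.utils.data.DataLoader(train_data, batch_size=32, shuffle=True)

# Start from an ImageNet-pretrained ResNet-18 and swap in a two-class head.
model = models.resnet18(pretrained=True)
model.fc = nn.Linear(model.fc.in_features, 2)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

model.train()
for epoch in range(3):  # a few passes over the frames, purely illustrative
    for frames, labels in loader:
        optimizer.zero_grad()
        loss = criterion(model(frames), labels)
        loss.backward()
        optimizer.step()
```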

How will the data set and challenge parameters be tested?

  1. Targeted technical working session in October 2019 at the International Conference on Computer Vision (ICCV)
  2. Full data set will be released in December 2019 at NeurIPS

Who will run it? The Partnership on AI’s new Steering Committee on AI and Media Integrity, a cross-sector coalition of organisations including Facebook, Microsoft, and WITNESS, among others.

Why now? Deepfake videos are increasingly going mainstream and the underlying technology is becoming more sophisticated. While this technology initially seemed to be the domain of technologists alone, the popularity and easy availability of the Chinese app Zao, which needs only a user’s image to render a deepfake, has proven that such technology is no longer exclusive; it is now freely available to lay users as well. As fake news and mis/disinformation campaigns become more rampant across the world, deepfake videos exacerbate the problem of authenticating information online, and the need to combat nefarious uses of such AI has intensified.


Other significant deepfakes:

  • In June 2019, a deepfake video of Mark Zuckerberg was uploaded to Instagram: a 2017 video of Zuckerberg describing Russian interference on Facebook was altered using AI to make him appear to deliver a sinister speech about Facebook’s power. A few days before that, a video of US House Speaker Nancy Pelosi was edited to make her seem drunk. The latter wasn’t a deepfake, as AI wasn’t used to alter the video.
  • In December 2017, a Redditor created deepfake porn videos of famous actresses using open-source machine learning tools like TensorFlow.
