Update: Following Facebook’s announcement that it would take down videos that have been edited using artificial intelligence/machine learning beyond adjustments for clarity or quality, MediaNama had asked the platform whether it would take down the doctored video of Mark Zuckerberg in which he was seemingly made to deliver a speech on power, since the new policy doesn’t apply to parody or satirical videos. Facebook told us that the video will not be removed, but “is eligible for fact-checking, and otherwise subject to our [Facebook’s] Community Standards”.

We had also asked them if the doctored video of House Speaker Nancy Pelosi would be taken down, since it did not involve the use of AI/ML for editing. Facebook said that the video would be “subject to recently enhanced enforcements which include a full blackout overlay warning, dramatically reduced distribution, and notifications to those who try to share it or have shared it in the past”. This suggests that Pelosi’s video, too, will not be taken down.

We had also asked Facebook about its detection techniques; the company declined to comment because “it would advantage bad actors”.

Earlier: Facebook announced on January 6 that it will crack down on deepfake videos. Content that has been edited “in ways that aren’t apparent to an average person and would likely mislead someone”, and that has been created by artificial intelligence or machine learning algorithms, will be removed from the platform, Facebook’s vice president for global policy management, Monika Bickert, said while announcing the new policy. In the announcement post, Facebook did not clarify whether the policy will also apply to Instagram.

“Going forward, we will remove misleading manipulated media if it meets the following criteria:

  • It has been edited or synthesized – beyond adjustments for clarity or quality – in ways that aren’t apparent to an average person and would likely mislead someone into thinking that a subject of the video said words that they did not actually say. And:
  • It is the product of artificial intelligence or machine learning that merges, replaces or superimposes content onto a video, making it appear to be authentic.”
    — Facebook’s new policy

Does this apply to ads on Facebook? Andy Stone, who works in policy communications at Facebook, said that the policy applies to ads on the platform as well. “With regard to our new policy, whether posted by a politician or anyone else, we do NOT permit in ads manipulated media content (sic),” Stone said on Twitter.

Why this matters: This development should also be seen in light of a hearing titled “Americans at Risk: Manipulation and Deception in the Digital Age”, scheduled to take place before the US House Committee on Energy and Commerce on January 8. Bickert is set to represent Facebook and testify at the hearing, as pointed out by The Verge. The hearing comes a few months before the 2020 US presidential elections. Given the potential loopholes in the new policy (discussed below), its actual impact remains suspect.

Problems with Facebook’s new policy

1. Parody or satirical videos will not be taken down: Bickert said that the policy will not apply to content that is parody or satire, or to a video that has been edited to remove words or change the order in which they appear. It is not clear if parody or satire created using AI/ML would be removed. It is also not clear if the highly convincing deepfake video of Facebook CEO Mark Zuckerberg on Instagram, seemingly delivering a sinister speech about power, would be considered satire, and consequently whether it would be taken down. At the time, Facebook had said that it would filter the video from Instagram’s recommendation surfaces like Explore and hashtag pages only if third-party fact-checkers marked it as false. We have reached out to Facebook to understand this.

2. What about videos doctored without using AI/ML? Another potential caveat to Facebook’s policy is that it doesn’t account for videos edited using less sophisticated software, or videos where the content of what is being said hasn’t been changed, even though these kinds of videos can be detrimental. A case in point is the doctored video of House Speaker Nancy Pelosi, which was, quite convincingly, edited to make her appear in an inebriated state. The edit merely slowed down the original video; Pelosi wasn’t made to say any new words, and AI/ML wasn’t used to create the video. We have asked Facebook if Pelosi’s doctored video would be taken down in light of the new rules.

  • Drew Hamill, Pelosi’s deputy chief of staff, said in response to the new policy: “Facebook wants you to think the problem is video-editing technology, but the real problem is Facebook’s refusal to stop the spread of disinformation.”
  • Additionally, Facebook clarified that videos that don’t meet these standards will not be filtered out before being reviewed by one of its independent third-party fact-checkers.

Read more: Facebook announces steps to protect US 2020 elections; no mention of fact-checking political ads

Big Tech attempts to battle deepfake videos

  • In October 2019, Twitter had said that it was working on a new policy to “address synthetic and manipulated media” on the platform and had sought comments on the same. Twitter has yet to release the policy.
  • In September 2019, Facebook and Microsoft announced the Deepfake Detection Challenge (DFDC) to produce technology that can be used to detect deepfake videos.
    • In October 2019, Amazon Web Services said that it would work with Facebook and Microsoft on the challenge and contribute up to $1 million in AWS credits to researchers and academics over the next two years.
  • In September 2019, Google released a large dataset of visual deepfakes to directly support researchers working on deepfake detection.

*Update: This story was updated with Facebook’s response to our queries. The previous version of the post has been archived here.