On January 17, MediaNama conducted a discussion on Deep Fakes and Democracy. The discussion focused broadly on the implications of deep fakes for the upcoming general elections, technical and policy solutions for tackling deep fakes, positive use cases of deep fakes, and how deep fakes threaten user verification.
Our objective was to identify:
- Is there a legitimate use case for deep fakes in elections, or should they be outlawed altogether?
- What are the capabilities of political parties to generate deep fakes? What are their dissemination networks like?
- What are the challenges of attributing deep fakes or even fake news to a bad actor?
- How can the spread of deep fakes be curbed on end-to-end encrypted platforms like WhatsApp?
- Is there a need for consumer app-level deep fake detection?
- Has there been significant deep fake activity in recent elections? What does the current deep fake landscape look like?
- What are the challenges with detecting deep fakes once something is uploaded on social media?
- Is watermarking an effective solution to curbing deep fakes?
- How does safe harbor play out in case of deep fakes? Should platforms lose their immunity if deep fake content is posted by users on their service?
Download a copy of the event report
Executive Summary
Ahead of elections in multiple jurisdictions, the proliferation of deep fakes and their potential to spread misinformation have emerged as a major concern. Speakers at MediaNama’s discussion pointed out that current measures to curb deep fakes, such as watermarking and content provenance, have limitations: bad actors are capable of removing watermarks and faking provenance. They highlighted that one reason detection techniques have had limited success is that once adversaries learn how a detection technique works, they improve their generation capabilities to evade it.
Another challenge to detection is the circulation of deep fakes on social media platforms. It was pointed out that an audio deep fake detection algorithm with, say, 90 percent accuracy in lab settings becomes less effective at detecting a deep fake circulating on social media, because platforms transcode audio and video, changing their properties and making comparison with the original version difficult.
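A toy sketch of why transcoding defeats exact comparison (this is an illustrative assumption, not any platform's actual pipeline): re-encoding rewrites a file's bytes even when it sounds or looks the same to a person, so an exact cryptographic fingerprint of the uploaded copy no longer matches the original.

```python
import hashlib

def fingerprint(data: bytes) -> str:
    """Exact (cryptographic) fingerprint of a media file's raw bytes."""
    return hashlib.sha256(data).hexdigest()

# Hypothetical payloads: an "original" clip and the same clip after a
# platform re-encodes (transcodes) it. A single codec-level byte change
# is enough to produce a completely different fingerprint.
original = b"\x00\x01\x02\x03" * 1000
transcoded = b"\x00\x01\x02\x04" * 1000  # simulated transcoding artifact

print(fingerprint(original) == fingerprint(transcoded))  # False: exact match fails
```

This is why detectors that work on pristine lab samples must instead rely on perceptual features that survive re-encoding, which is a harder problem.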
Discussing the responsibility of social media platforms in curbing deep fakes, speakers noted that platforms have two main tools in their arsenal: content takedowns and shadow banning. They stressed the value of shadow banning a piece of content before any final decision is made about it, suggesting that this would prevent algorithmic amplification of the deep fake in the interim. Discussants pointed out that while platforms can curb child sexual abuse material (CSAM) to a certain extent, curbing deep fakes is harder because platforms cannot build databases of known deep fakes to cross-reference against, which is how they currently take down CSAM. On the time-sensitive nature of curbing misinformation, it was highlighted that platforms adhered to a two-hour takedown timeline in previous elections, as part of the voluntary code of ethics they agreed to follow during general elections.
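The hash-database approach the discussants contrasted with deep fake detection can be sketched as follows. This is a deliberately simplified, hypothetical illustration: production systems use perceptual hashes that tolerate minor edits, not the plain exact-match lookup shown here.

```python
import hashlib

# Hypothetical database of fingerprints of known, previously flagged content.
known_hashes: set[str] = set()

def register_known_content(data: bytes) -> None:
    """Add a flagged item's fingerprint to the cross-reference database."""
    known_hashes.add(hashlib.sha256(data).hexdigest())

def is_known(data: bytes) -> bool:
    """Check an upload against the database of known flagged content."""
    return hashlib.sha256(data).hexdigest() in known_hashes

register_known_content(b"previously flagged media bytes")
print(is_known(b"previously flagged media bytes"))  # True: match in database
print(is_known(b"a newly generated deep fake"))     # False: no prior entry
```

A freshly generated deep fake has no prior database entry to match against, which is why, as the discussants noted, this cross-referencing approach does not transfer from CSAM to deep fakes.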
Considering the impact of deep fakes on the upcoming elections, discussants pointed out that the technology would make it easier for political parties to flood people’s social media feeds with their desired narrative. This deluge would also overwhelm any reporting system the Election Commission of India (ECI) might create, making it impossible for the ECI to act on the complaints it receives. It was also mentioned that this wave of deep fake misinformation resembles the fake news spread in past elections and will circulate through sources that cannot be tied to political parties, making it hard for the ECI to hold any party responsible. Discussants added that international actors could also attempt to influence election results using deep fake-generated misinformation.
The discussion also covered policy solutions to deep fake misinformation. It was pointed out that the spread of deep fakes generated outside the country should be curbed by autonomous bodies like the ECI. Self-regulation was also discussed as a countermeasure, with speakers arguing that it could only be applied on a case-by-case basis.
Discussants said that fact-checkers can struggle to call out deep fakes that further a political agenda under the guise of satire. In some cases, a deep fake intended as satire can be clipped and spread as fake news. It was mentioned that previous attempts to debunk satire that was being mistaken for real events had landed fact-checkers in legal trouble, making them cautious about examining content marked as satire, even when that content pushes a political narrative.
The discussion established that deep fakes would pose a risk to practices like video know-your-customer (KYC) verification. Speakers pointed out that there have been instances of job candidates who have hidden their identities in interviews using deep fakes. It was pointed out that if biometrics were used as a means of identification, they would have to constantly evolve to keep pace with advancements in deep fake technology.
About the discussion
Speakers:
- Rakesh Maheshwari (Former Sr. Director and Group Coordinator, MeitY)
- Saikat Datta (CEO and Co-founder of Deepstrat)
- Jency Jacob (Managing Editor, Boom Fact Check)
- Carl Gautham Koorma (Researcher, UC Berkeley School of Information)
- Tarunima Prabhakar (Co-founder of Tattle Civic Technologies)
- Shivam Shankar Singh (Data Analyst and Political Campaign Consultant)
Participation:
We saw participation from companies and organizations like Samsung, HDFC Bank, Info Edge, Ministry of Electronics and IT, The Quantum Hub, Apollo 247, COAI, Thomson Reuters, Ikigai Law, Access Now, Truecaller, SLFC, Outlook, Meta, ShareChat, NDTV, EGaming Federation, Chase India, The Caravan, The Hindu, CCG NLUD, DataLEADS, University of Exeter, Center for Civil Society, SFLC, Google, Spotify, InShorts, Deloitte, Internet Society, The Internet Freedom Foundation, Logically.ai, Mozilla Foundation, Times Internet Limited, News Click, Mogambay India, The Asia Group, LT Mindtree, IndusLaw, Times Internet, CCAOI, Hasgeek, Citizen Digital Foundation, Dvara Research, Junglee Games, among others.
Support and partners:
MediaNama hosted this discussion with support from Google and Meta.