In light of the approaching elections this year, MediaNama held a ‘Deep Fakes and Democracy’ event on January 17, 2024. The experts speaking at the event brought varying perspectives on deep fakes, drawn from their work in research, fact-checking, cybersecurity, and related fields. Most speakers agreed that the current methods to address the issue of deep fakes are insufficient. What caught our team’s attention, however, was each speaker’s argument on why a certain method would fail, and how. Some speakers even went as far as to question whether deep fakes deserve to be viewed through the single lens of threat. Here is a list of talking points collated by our team, highlighting some of the most interesting arguments from the discussion.
The problem isn’t the deep fake, it is the flood of misinformation [Kamya Pandey]
While much concern has been raised about how realistic deep fakes could lead to the spread of misinformation, Shivam Shankar Singh, Data Analyst and Campaign Consultant, pointed out during our discussion that sophisticated deep fakes aren’t really required. During the election period, political parties essentially flood a user’s timeline with information that frames a candidate in a certain light, for instance, with corruption charges. The only thing a deep fake accomplishes is making it easier to generate diverse kinds of misinformation to alter a user’s perception of reality. Further, the Election Commission of India is itself flooded with reports of these deep fakes, the sheer number of which overwhelms its system and makes it harder to take any decisive action. In such a situation, will something truly be accomplished by regulating deep fakes or fact-checking them?
Where’s the accountability for the perpetrators? [Nikhil Pahwa]
Shivam Shankar Singh emphasised that even where the Election Commission filed cases against politicians for disinformation, and those were few and far between, they were eventually dropped or withdrawn. On social media, as on the internet generally, and especially given the use of surrogates to spread disinformation, attributing accountability is tough. The way regulation is panning out, it appears there will be no penalties for those who create misinformation, or indeed deep fakes: only penalties for platforms that fail to take them down.
Are content takedowns an effective solution for curbing deep fakes? [Kamya Pandey]
Since November last year, when Rashmika Mandanna’s deep fake began circulating on social media, there has been ample talk from the IT Ministry about how platforms are likely to lose their safe harbour protection if they fail to take down deep fakes. The Ministry relied on the IT Rules, 2021 for its content takedown demand, of which Rule 3(1)(b) requires intermediaries to inform their users not to host, and to make “reasonable efforts” to avoid hosting, certain kinds of content, including content that is obscene, pornographic, paedophilic, invasive of another’s privacy, insulting, or harassing, that encourages money laundering, or that impersonates another person. It gave platforms 24 hours to take down deep fakes. However, in our discussion, speakers pointed out that the typical shelf life of a social media post is 4-5 hours, within which it has already spread to most of its audience. In such a situation, what is the point of getting the content taken down within 24 hours when the damage has occurred well before that time is up?
Why detecting deepfakes remains hard and risky [Sarvesh Mathi/Nikhil Pahwa]
Gautham Koorma, a researcher at the UC Berkeley School of Information, pointed out that detecting deepfakes becomes much more difficult once they are published on social media, because of the way platforms transcode content. While researchers have been able to detect deepfakes with 90 percent accuracy in lab settings, these numbers fall significantly out in the wild for this reason. Moreover, he emphasised that even 90 percent accuracy isn’t good enough given the volume of content: if, out of a million pieces of content, 100,000 genuine pieces are wrongly marked as deepfakes, that poses serious concerns, as there is now a second level of disinformation. Additionally, techniques used to detect Child Sexual Abuse Material (CSAM), such as hash matching, do not work as well for deepfakes, for reasons including the volume of content, the ease of modification, and concerns about how such an application could be misused by authoritarian governments. With minor modifications to the content, comparing hashes becomes a fruitless exercise. On the whole, then, deepfakes cannot be detected on social media with 100% accuracy, even when compared against an existing dataset. Holding safe harbour to ransom is thus not the right approach.
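Koorma’s point about hash matching is easy to see in a toy sketch. The snippet below is illustrative only (not any platform’s actual detection pipeline): it shows why an exact cryptographic hash breaks on any modification at all, and why even a simplified perceptual “average hash”, which tolerates small tweaks, can be pushed past a match threshold by heavier edits like overlays or crops.

```python
import hashlib

# Cryptographic hashes: flipping a single bit changes the digest entirely,
# so exact-hash matching only catches verbatim re-uploads.
original = b"frame-bytes-of-a-deepfake-video"
modified = bytearray(original)
modified[0] ^= 1  # flip one bit
print(hashlib.sha256(original).hexdigest()[:16])
print(hashlib.sha256(bytes(modified)).hexdigest()[:16])

# A simplified perceptual hash: an "average hash" over an 8x8 grayscale grid.
def average_hash(pixels):
    """pixels: 8x8 list of grayscale values (0-255) -> 64-bit int."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    bits = 0
    for p in flat:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

def hamming(a, b):
    """Number of differing bits between two hashes."""
    return bin(a ^ b).count("1")

img = [[(r * 8 + c) * 4 for c in range(8)] for r in range(8)]  # toy "image"
tweaked = [row[:] for row in img]
tweaked[0][0] += 3  # tiny brightness tweak: hash barely moves
overlaid = [[255] * 8 for _ in range(4)] + [row[:] for row in img[4:]]  # large overlay

print(hamming(average_hash(img), average_hash(tweaked)))   # small distance
print(hamming(average_hash(img), average_hash(overlaid)))  # large distance
```

Real perceptual-hash systems (and the transcoding platforms apply on upload) are far more elaborate, but the trade-off is the same: loosen the match threshold and false positives rise; tighten it and modified copies slip through.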
The impact of deepfakes on elections is not yet clear and concerns might be misplaced [Sarvesh Mathi]
With elections in India and the US looming, the concerns about how AI and deepfakes can be misused are growing, but the impact of AI on elections is not yet clear because we are yet to see an AI-driven or deepfakes-driven election, Shivam Shankar Singh, Data Analyst and Campaign Consultant, pointed out. Moreover, concerns around the use of AI are misplaced in the sense that people are worrying about how different political parties might misuse AI to influence voters, whereas what’s more worrying is how foreign nations might use AI to drive elections in other countries for various geopolitical reasons, Saikat Datta, CEO and Co-Founder of DeepStrat, emphasised. And this type of sophisticated use of AI is harder to detect and prevent.
Deepfakes and identity verification processes – An unexplored problem [Sarasvati T]
Most of the discussions about deepfakes are largely focused on the risks of misinformation and disinformation, especially in view of upcoming elections in several countries. What remains unexplored, but calls for immediate attention, is how deepfakes can be used to undermine identity verification processes, for example, the commonly used biometric authentication methods in India. As Saikat Datta, CEO of DeepStrat, pointed out, identity verification is critical from a regulatory perspective too, and while we haven’t yet witnessed instances of deepfakes being used to exploit existing verification systems, the problem is imminent. Increasing Aadhaar-enabled financial frauds in India have already shown that cloning Aadhaar biometrics and hacking the authentication mechanism is only getting easier for cybercrime perpetrators. Similarly, banks conduct video KYCs for customer identification. Given that deepfake generation is becoming cheaper and easier, are sectoral regulators, the RBI for example, prepared to deal with the impact of deepfake technology on biometric authentication for critical services?
What about the offline proliferation of printed deepfake content during elections? [Sarasvati T]
Given the inadequacies of existing techniques like watermarking AI-generated content, detecting deepfakes and other synthetic content on online platforms is already challenging researchers, technologists, and the social media companies themselves. Interestingly, panellist Saikat Datta raised a further question: what about the offline proliferation of printed deepfake content, especially during elections? While detecting non-viral deepfakes and identifying fake or false news during elections is an existing challenge for platforms and authorities like the Election Commission of India, how will they detect and regulate the distribution of posters, print ads, pamphlets, and the like, which may simply be printed versions of online deepfake images? If misused, these can have a larger impact on hyper-local election narratives in India, particularly in regions where internet access is still an unresolved issue.
Deep fakes should not be viewed in isolation [Vallari Sanzgiri]
During the discussion, Saikat Datta, CEO of DeepStrat, warned that the current deep fake issue is not a standalone problem but a phenomenon that can be combined with existing scam methods. For example, some of you may have heard of scams where a relative is supposedly calling you from a stranger’s phone, urgently asking you to transfer money. Here, deep fakes can be used to imitate the relative’s voice. Such impersonations have become more sophisticated thanks to generative AI’s ability to copy a person’s voice or face. It therefore makes sense to heed Datta’s advice: look at deep fakes not just as an isolated problem, but as something that can overlap with and worsen existing issues.
Is there a foolproof way to fact-check deep fakes? [Vallari Sanzgiri]
In preparation for the 2024 elections in various countries, OpenAI and Microsoft have announced that they will implement the Coalition for Content Provenance and Authenticity’s (C2PA) digital credentials to watermark content created by generative AI. It is important for such companies to adopt these measures in light of the elections. However, as speaker Gautham Koorma, a researcher at the UC Berkeley School of Information, pointed out, watermarks can be removed by even “not-so-sophisticated” adversaries; at times, simple photo-editing tools are enough. What does this mean? Generative AI and its consequences are still a developing topic, so while watermarks can certainly help to an extent, individuals also need to keep a lookout for other signs in a piece of content, like irregular lighting or distortion of visuals at certain points.
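To make Koorma’s caveat concrete, here is a deliberately simplified sketch of one fragility of metadata-based provenance. The dict below stands in for an image file, and the string tag stands in for a signed credential; the real C2PA manifest format is far more involved. The point it illustrates: credentials that travel as metadata do not survive something as simple as a screenshot or re-encode that copies only the visible pixels.

```python
# Toy model (not the real C2PA format): an "image file" with a
# provenance credential stored as metadata alongside the pixels.
image = {
    "pixels": [[120, 121], [119, 122]],                    # the visible content
    "metadata": {"c2pa_manifest": "signed-by-generator"},  # provenance tag
}

def screenshot(img):
    """Re-capture only what is visible; metadata does not come along."""
    return {"pixels": [row[:] for row in img["pixels"]], "metadata": {}}

copy = screenshot(image)
print("c2pa_manifest" in image["metadata"])  # True
print("c2pa_manifest" in copy["metadata"])   # False
```

Provenance schemes can also embed marks in the pixels themselves, which survive a screenshot, but as Koorma noted, those in turn can often be degraded or stripped with ordinary editing tools.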
Tarunima Prabhakar, Co-Founder of Tattle Civic Technologies, went so far as to suggest that fingerprinting might be a useful technique to detect fabricated content, although it is not a long-lasting solution. Meanwhile, Jency Jacob, Editor of BoomLive, argued that even if effective detection tools are created for deep fakes, they may not be available to fact-checkers at a cost-effective rate. So, at least for now, the Election Commission of India should direct political parties not to use deep fake videos ahead of key elections: a more cautious, and arguably more desirable, approach of reasonable restriction.
Could a graded, risk-based approach help regulate deep fake applications better? [G. Aarathi]
Yesterday’s discussion also saw MediaNama’s Editor Nikhil Pahwa ask whether satirical deep fakes would also be impacted by strict government regulation. Another panellist, BoomLive’s Jency Jacob, spoke of teachers using deep fakes to teach their students tough subjects in innovative ways. These two use cases highlight a less talked-about aspect of the technology: in some cases, it can actually be used positively. This makes me wonder whether the Indian government’s current approach to regulating deep fakes inadvertently harms positive use cases. Currently, companies notified of deep fakes on their platforms have to take them down within the timelines prescribed by the IT Rules, 2021, and appeals against these decisions can be made to appellate authorities established under the rules. While this approach is tailored to the current regulatory framework’s realities, future government policies could consider a graded approach to deep fake take-down decisions, based on the harm the content is likely to cause society, with take-down decisions or appeals monitored by the IT Ministry or bodies under it. For example, a satirical video shedding light on social issues, or a teacher’s attempt to keep their class engaged, could fall on the lower end of the spectrum, and these parties may not need to take their content down; if flagged by a user or authority, such content may simply be required to display a disclaimer. On the other hand, deep fakes impersonating influential people during contested times (like elections) may be classed as high risk, with direct take-down orders required. A risk-based approach like this may help ensure that productive uses of the technology aren’t curbed by rightfully concerned legislators.
Which laws apply to deep fakes printed out and stuck on a wall? [G. Aarathi]
As Sarasvati has already discussed, printed deep fakes plastered across the walls of India’s towns and cities will open up a Pandora’s box for regulators. However, the question I had when hearing Mr. Datta make this point yesterday was different: how do you actually regulate these offline versions of online deep fakes? The IT Rules, which govern deep fake takedowns, quite obviously apply to the digital world, and surely no amount of legislative acrobatics could make them apply to content in ‘real life’. The consequence may be that while flagged content is taken down from social media platforms, it remains freely accessible in physical public spaces, and thus capable of shaping voter perceptions. The fact that it remains publicly accessible (unless the Election Commission of India sets up a rapid-action poster take-down unit) may also legitimise these offline offerings in the eyes of the public. Similar observations were raised by the Bombay High Court last year during hearings challenging the constitutionality of the Indian government’s proposed state-appointed unit that would fact-check government-related information online. Flagged ‘false’ information should ideally be taken down by platforms to comply with the IT Rules and retain safe harbour, while the same information conveyed through print would seemingly remain untouched by state censorship. “Is it being suggested that the same content in print will go through, but if online, is fake, false, or misleading?” Justice G.S. Patel asked back then. In more ways than one, the same question applies to this aspect of deep fakes as well.
Update (19 January, 9:50 am): Updated Tarunima Prabhakar’s comment on fingerprinting as a possible solution to detect deepfakes. Earlier, we wrongly referred to it as biometrics.