Highlights from OpenAI and Quantum Hub’s discussion on AI ahead of 2024 elections

The discussion, conducted under the Chatham House Rule, was aimed at helping the OpenAI team equip themselves with the interventions needed to combat potential harms associated with AI ahead of the elections.

“I think at present, what best can be done by OpenAI is not to draw attention to itself as the representative of the entire sector, but lead by a coalition-based approach,” a participant said at a closed-door discussion conducted by OpenAI and The Quantum Hub on February 12. He said that OpenAI should ensure that civil society and human rights organizations have a seat at the table when discussing ways to mitigate the risks associated with artificial intelligence (AI).

The discussion focused on AI in the context of the upcoming general elections in India and featured participants from various fields, including fact-checking, civil rights, and public policy. Since it was conducted under the Chatham House Rule, the identities of those participating will be kept confidential.

Key points made during the discussion:

AI companies will soon see questions about electronic voting machines (EVMs):

One of the major questions that AI models will soon face is whether EVMs are safe to use, a participant pointed out, suggesting that OpenAI needs to think about how it will respond to such queries. The participant gave the example of YouTube, where any video that has anything to do with EVMs typically carries a Wikipedia link directing the viewer to more information about the machines.

It would be wrong to block off election information from generative AI tools:

One of the participants explained that AI tools have been extremely helpful for persons with disabilities. For instance, they can provide those with visual impairments descriptions of images or help those with locomotor disabilities write text. “I know a lot of persons with disabilities [who are] just going into an AI software because they require less search, et cetera, and navigating long websites, et cetera, becomes difficult for them. The way it condenses information becomes very useful for them,” the participant said.

The participant mentioned that AI would also be useful for people with disabilities during the election period because for many of them, “just that one platform, whether it’s OpenAI or any other AI platform, is the go-to place for all news and information.” As such, if election-related content is completely blocked off from AI tools, it could pose a significant problem for those with disabilities.

AI has the potential to push certain biased narratives:

A participant brought up the Indian AI model Krutrim, explaining that when its founder Bhavish Aggarwal asked it when India became a country, the AI responded that India had been an ancient civilization. “What I’m seeing right now, which brings in from this point, is that [the] current government is extremely nationalistic and divisive in nature. So they are encouraging people to build models which push their narrative more,” the participant explained, adding that AI companies would have to consider such biases and how they should be navigated. Another participant added that while AI companies may try to align their models’ responses with cultural sensitivities, doing so would be challenging in cases where cultural norms are very diverse.

Another participant noted that this narrative building doesn’t only occur during the election period; instead, campaigning happens every minute of every day. “The time being utilized to set a narrative is also being done by distorting or changing the sources where the information comes from about our own identity and culture,” the participant said, adding that it would be useful if OpenAI’s models could cite their sources of information.

On educating people about AI tools:

One of the participants pointed out that while there is a lot of excitement surrounding AI in India, “there is a very broad sense of not really knowing what AI is.” Another added that for a lot of people, AI is effectively a chatbot. “We have had chatbots before, people have tried them. What goes behind it and the intelligence that you guys bring in is lost,” the participant said, suggesting that this lack of understanding should be factored into media literacy campaigns.

Another concern raised in the context of AI literacy was that people should be taught how to look for identifiers of AI-generated content. “So in India, I think FSSAI [Food Safety and Standards Authority of India] came up with that green dot. So if you’re eating a veg meal, you know it with a green dot or a red dot, green or red. So that’s a mass campaign [that should be conducted] to say, look for this sign on the image or look for this particular identification mark in your video or audio,” a participant said.

AI companies need to help fact-checkers define deep fakes:

Fact-checkers participating in the discussion pointed out that one of the key challenges they face concerning AI is defining the various categories of AI-generated content. “Because when we work with platforms, the enforcement for each of these categories [deep fakes, synthetic media, etc.] can differ,” one of them explained. The participant brought up the recent use of AI by former Pakistani Prime Minister Imran Khan, who gave an AI-generated speech from his jail cell. “Could we now actually go out and say that that is a deep fake, because it’s not actually Imran Khan? But it was rightfully labeled as AI-generated content. It was from his official handle,” the participant explained.

Bad actors can bypass attempts to prevent deep fake generation:

A participant pointed out that while OpenAI has put in place filters to prevent DALL·E from creating images of real-life individuals (deep fakes), there are “budding communities on Reddit, 4chan, 8kun, who are dedicated to figuring out prompts that could evade these filters.” [Quick context: In January this year, OpenAI announced that its image-generating DALL·E model “has guardrails to decline requests that ask for image generation of real people, including candidates.”]

Deep fakes could discourage women from contesting elections:

“Women have been at the receiving end of a lot of discouragement, generally, to be in politics. It is called dirty and unsafe at every single point. Now, the dirtiness of it is also the fact that you will be targeted, there will be misogyny, there will be phobia around your identity,” a participant explained. The participant said that this targeting could come through general information about women and now also through the use of deep fakes. The participant added that only a small percentage of Indian women have access to and an understanding of technology and, as such, wouldn’t be able to tackle what’s coming their way through deep fake technology.

Deep fakes will make elections expensive:

Participants argued that AI could make elections very expensive for political parties. Those who have access to AI tools would be more successful at making headway in communicating with voters and microtargeting them than those who lack access to the tools.

Note: The story was edited on February 29, 2024 at 1:55 PM to make the section on AI biases clearer.


MediaNama is the premier source of information and analysis on Technology Policy in India. More about MediaNama, and contact information, here.

© 2008-2021 Mixed Bag Media Pvt. Ltd. Developed By PixelVJ