Marking the third anniversary of India’s national AI portal—established by the IT Ministry, National eGovernance Division and NASSCOM (National Association of Software and Service Companies)—the INDIAai team launched a report on generative AI on May 30, 2023. The report, ‘Impact, opportunity and challenges of generative AI’, is based on roundtable discussions with industry stakeholders and “ecosystem players” about the current generative AI landscape, the ethical and legal challenges associated with it, and solutions to mitigate the risks anticipated with the use of such technology.
The roundtables included representatives from organisations like the Indian Institute of Science, Ikigai Law, Global AI Ethics Institute, IBM Research Lab – India and other tech companies. The second roundtable, which focused on the regulatory framework for generative AI, involved representatives from the Supreme Court of India, UNESCO, OdiseIA and other law firms.
What is INDIAai?
INDIAai is a platform or a “unified AI ecosystem” providing resources on AI developments in India and the world for entrepreneurs, students and academics, among others. Currently, the portal is considered a key content repository for the National Programme on AI.
Overview
The initial parts of the generative AI report published by the INDIAai team offer elaborate explanations of the fundamentals of large language models and the history of generative AI. It also discusses popular generative AI models like ChatGPT, DALL-E, Midjourney, Chatsonic, Jasper Chat, etc. The report also includes a section on how generative AI would impact employment in different sectors, and outlines opportunities to use generative AI models in the agriculture, healthcare, education, retail, marketing and media sectors. Further, the report includes a section on the challenges associated with using generative AI applications and how different countries are trying to address them. Finally, it lists the recommendations made by the participants at the roundtables for responsible use of AI in India.
On challenges and regulation:
The report also briefly points out some of the main challenges that accompany the deployment of generative AI applications, which include:
- Capabilities of generative AI to amplify misinformation
- Bias in AI models due to the kind of datasets used and systemic prejudices
- Legal disputes related to copyright and intellectual property arising out of AI model training
However, the report does not offer deeper insights into India-specific risks or challenges that need attention at present. The researchers also highlight the need for a “holistic and a comprehensive framework” for ensuring responsible use of generative AI technology and for mitigating its risks. “This framework shall include ethical guidelines, regulatory measures, transparency, accountability, and ongoing collaboration among stakeholders,” the report adds.
What’s missing?
The INDIAai report on generative AI is probably the first from India on the subject, but it fails to offer insights on India-specific concerns and high-risk factors that need to be addressed at the foundational stages of deploying generative AI technology for governance or other operations. The report emphasises opportunities in this space but does not really delve into sector-specific risks and misuse that need attention. Such risks include the use of deepfakes during elections, misinformation on social media, and cybercrimes involving impersonation, among others. For example, a report by the EU’s law enforcement agency gives better insight into the criminal use cases of generative AI, and the US government’s reference document for public consultation provides a nuanced understanding of the questions related to AI accountability that it is trying to address.
While the INDIAai researchers mention efforts undertaken by China, the US and the European Union to regulate AI, there is no indication of India’s approach towards AI, or of whether the government has devised a plan for addressing imminent challenges associated with the use of generative AI in the country. IT Minister Rajeev Chandrasekhar recently said that the government is not planning to introduce separate legislation to regulate AI, but will include provisions in the upcoming Digital India Act to govern the use of emerging technologies.
Why it matters:
Governments across the world have been responding to the generative AI boom over the last six months, indicating plans to regulate AI as per country-specific priorities. The Indian government’s plans to address, if not regulate, questions related to generative AI and to support innovation in the AI space are still not clear. A report by INDIAai, an initiative of the IT Ministry, raises the expectation that it would shed light on some of these questions. It also suggests that one can expect more developments in this space.
Key recommendations:
The report lists recommendations drawn from the roundtables conducted by INDIAai, which include:
- Education and public awareness: This includes educating people through workshops, campaigns and other initiatives about generative AI and its capabilities and addressing concerns about the tech taking over people’s jobs.
- Establish data sharing and usage regulations: Generative AI models operate on large sets of quality data. Stakeholders believe there’s a need to establish regulations to govern sharing and usage of such data in a manner that’s ethical, transparent and protects privacy rights.
- Regulatory frameworks: The report pitches for collaboration with international organisations to establish frameworks that would address regulation of generative AI and questions of AI accountability.
- Encourage self-regulation: The report also states that individuals and corporations developing generative AI products must adopt self-regulatory practices for responsible use of AI technologies.
- Prioritize bias mitigation: This refers to the efforts, including investment in R&D, to detect and mitigate bias through different techniques, which would ensure fair outcomes.
- Explore augmentation of human intelligence: Stakeholders recommend leveraging generative AI tools to augment human intelligence. Developers must conduct research and development in using generative AI as a tool to enhance human capabilities and “improve productivity across various sectors”.
- Address ethical concerns: This refers to the efforts that developers need to take in order to prevent the potential misuse of their generative AI models.
- Support accessibility and inclusivity: The report states that generative AI can be used to devise inclusive AI solutions that cater to the needs of speech-impaired individuals, autistic people, and others who may benefit from improved communication tools.
- Emphasize safety and control: With greater advancements in the generative AI space, it is essential for governments, organisations and developers to establish safeguards that prevent malicious use of technology.
- Monitor future enhancements: The report states that it is important to assess the implications of advancements in deep learning models with time and to ensure the “responsible and safe deployment” of generative AI technologies.
You can read the complete report here.
This post is released under a CC-BY-SA 4.0 license. Please feel free to republish on your site, with attribution and a link. Adaptation and rewriting, though allowed, should be true to the original.
Also Read:
- MeitY Minister Rajeev Chandrasekhar Talks About AI Regulation Under The Digital India Act
- US Govt Seeks Feedback To Establish Rules For AI Regulation And Accountability Measures
- Indian Govt To Ramp Up Its Artificial Intelligence Game By Expanding INDIAai
- European Union To Introduce New Copyright Rules For Generative AI Tools In Its AI Act
