On June 5, the National Association of Software and Service Companies (NASSCOM) released a set of guidelines on the research, development, and use of generative artificial intelligence (AI). The guidelines, reviewed by MediaNama, are meant to build stakeholder consensus on the obligations of those engaged in AI development and use. They come only a week after INDIAai (India’s national AI portal, established by the IT Ministry, National eGovernance Division, and NASSCOM) released its report on generative AI.
Why it matters:
The risks surrounding AI are a prominent topic of debate. Just last week, a group of leading AI experts, including top researchers and CEOs, signed a statement declaring that ‘mitigating the risk of extinction from AI should be a global priority’. Consequently, governments across the world have been formulating rules and regulations to deal with the risks associated with AI.
India, on the other hand, has decided not to enact AI-specific legislation and instead intends to regulate AI through the proposed Digital India Act, 2023. Given the lack of specific legislation, these NASSCOM guidelines could serve as a framework and help AI developers and researchers formulate best practices to effectively mitigate the harms associated with generative AI. What remains to be seen is how, if at all, the various stakeholders will implement these guidelines.
What do the guidelines say:
- Defining the harms of generative AI: The guidelines identify the following harms associated with generative AI use: the proliferation of misinformation, infringement of intellectual property, data privacy concerns, propagation of political and social bias, job displacement and loss of livelihood, a rise in cyber attacks, and the environmental impact of AI development and use.
- Obligations of researchers: Researchers must demonstrate foresight and anticipate both the negative and positive impacts that may arise from their research. They must disclose the values driving a research project, as well as the methodologies, model training datasets, and tools used. Researchers should adhere to privacy-preserving norms at every step of the process, from data collection to processing and usage. They must mitigate the risk of bias by deploying appropriate protocols, publishing research findings in open-source formats, and democratizing the framing of problem statements. They must also prioritize research on AI tools that can have the maximum positive impact on human agency.
- Obligations for developers: Developers must conduct comprehensive risk assessments and maintain internal oversight throughout the development of an AI tool. They must retain risk and compliance officers and ethics council members, and prescribe terms of service/guidance for the safe use of the AI tool by individual users and application developers, while disclosing questionable uses of their AI tool. Developers must publicly disclose the data and algorithm sources, as well as other non-proprietary information about the AI’s development process. Such disclosure may be withheld only if there is reasonable concern that it would enable malicious use of the solution; developers must prove such concern to the satisfaction of the regulator under whose jurisdiction the generative AI falls. They must demonstrate the safety of their generative AI by adhering to intellectual property rules in the collection, processing, and usage of training data. They must also follow industry best practices in designing, developing, and deploying generative AI models, for instance by building contextual awareness into the model during design, having diverse and multidisciplinary teams design and develop the model, and adopting human-in-the-loop design (an approach in which human beings intervene to get the AI to perform a task). Developers must make it technically feasible to furnish explanations of AI outputs in high-stakes situations. They must also have a grievance redressal mechanism in place to deal with mishaps caused by the development/deployment of a generative AI model. Generative AI models must align with the goal of human progress and must prioritize energy efficiency.
- Obligations for those using generative AI for commercial/non-commercial purposes: NASSCOM requires users of generative AI to balance workforce displacement (induced by the AI tool) with proportionate investments in worker upskilling and reskilling programs. Users must publicly disclose all technical, non-proprietary information about the development process, capabilities, and limitations of downstream generative AI models (models developed from the original AI model) and applications. They must also be transparent about the use of the model and about the deliverables generated from it within an academic or commercial setting. They must use the model in compliance with its terms of service and applicable public regulations. They should ensure that downstream models are developed in compliance with industry best practices, and must not use the solution to infringe on the rights of others or to propagate disinformation/harmful biases. Users must exercise caution in using AI-generated content and should refrain from sharing any personally identifiable or confidential information with the AI. Corporate/institutional safeguards must be enforced to prevent misuse or unauthorized use of the solution.
- Joint obligations for all three groups: They should support universal AI literacy and awareness programs and regulatory reform projects surrounding generative AI.
This post is released under a CC-BY-SA 4.0 license. Please feel free to republish on your site, with attribution and a link. Adaptation and rewriting, though allowed, should be true to the original.