On August 23, Microsoft published a report detailing a blueprint for AI governance in India, built around a five-point approach to framing laws, policies, and regulations that ensure the accountability of AI systems.
The first part of the report explains each of the recommended points of regulation in depth, while the second part offers insight into Microsoft’s internal methods for ensuring responsible development of AI tools. The report also includes case studies from India, where the Indian government as well as non-governmental entities have partnered with Microsoft to develop AI-enabled tools in different sectors.
What does the five-point blueprint recommend?
As countries work to develop frameworks for the governance of AI, Microsoft observes that there is no single right way to do so and that a multi-layered approach is essential to address the questions and concerns the technology currently raises.
1. Leveraging existing frameworks for AI risk identification and mitigation
Firstly, Microsoft recommends building upon existing or emerging governmental frameworks for ensuring AI safety. “A key element to ensuring the safer use of this technology is a risk-based approach, with defined processes around risk identification and mitigation as well as testing systems before deployment,” the report states.
For example, the ‘AI Risk Management Framework’ developed by the U.S. National Institute of Standards and Technology (NIST) provides a template for identifying the risks posed by an AI system and defining processes for testing and mitigation. Similarly, India’s National Strategy for Artificial Intelligence identifies key challenges for AI adoption in India that may be relevant for other countries too, such as a lack of skilled expertise, inadequate investment, and the need for data governance regulation.
The paper also emphasizes improving existing government procurement mechanisms so that the quality of AI systems being procured can be assessed. “Governments could explore inserting requirements related to the AI Risk Management Framework or other relevant international standards into their procurement processes for AI systems, with an initial focus on critical decision systems that have the potential to meaningfully impact the public’s rights, opportunities, or access to critical resources or services,” the paper notes.
2. Regulating AI systems that control critical infrastructure
The report calls for implementing “safety brakes” to ensure that AI systems that manage critical infrastructure and, thus, may pose significant risks to the public, remain under human control. These may include systems that manage or control “infrastructure systems for electricity grids, the water system, emergency responses, and traffic flows in our cities.”
Defining high-risk AI systems: The report recommends that the government define a class of high-risk AI systems and deploy safety brakes as part of a “comprehensive approach to system safety.” It is important to note that even within critical infrastructure sectors, there may be low-risk systems that do not require the same level of safety measures. Governments can focus on AI systems that affect large-scale networked systems, operate autonomously or semi-autonomously, and pose a risk of significant harm, including physical, economic, or environmental harm.
Built-in safety brakes for systems: “While the implementation of ‘safety brakes’ will vary across different systems, a core design principle in all cases is that the system should possess the ability to detect and avoid unintended consequences, and it must have the ability to disengage or deactivate in the event that it demonstrates unintended behavior,” the report notes. This essentially means that system developers must be required to build safety brakes into AI systems by design, across sectors.
Testing and monitoring high-risk systems: Operators must be required to test and monitor high-risk systems to ensure that they remain under human control. The report states that the extent of testing may also depend on how far the use of a product or service has advanced. Rigorously testing, verifying, and validating systems will require regular coordination between operators, AI infrastructure providers, and “regulatory oversight bodies”.
Establishing licensing mechanisms for critical AI systems: Microsoft states that AI systems that control critical infrastructure must be deployed only in “licensed AI infrastructure.” “Critical infrastructure operators might build AI infrastructure and qualify for such a license in their own right. But to obtain such a license, the AI infrastructure operator should be required to design and operate their system to allow another intervention point—in effect, a second and separate layer of protection— for ensuring human control in the event that application-level measures fail,” the paper recommends.
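Conceptually, a safety brake works like a supervisory layer that vets an autonomous controller’s actions and can hand control back to human operators. The following minimal Python sketch is entirely hypothetical and is not drawn from the report; the class names, limits, and electricity-grid example are assumptions introduced purely to illustrate the detect-and-disengage idea.

```python
from dataclasses import dataclass


@dataclass
class GridAction:
    """A hypothetical control action proposed by an AI system for an electricity grid."""
    target: str
    setpoint: float


class SafetyBrake:
    """Illustrative supervisory layer: detects out-of-bounds behaviour and disengages autonomy."""

    def __init__(self, limits):
        # Allowed setpoint range per controllable target (hypothetical values).
        self.limits = limits
        self.engaged = True  # True = the AI system may act autonomously

    def approve(self, action: GridAction) -> bool:
        low, high = self.limits.get(action.target, (0.0, 0.0))
        if not (low <= action.setpoint <= high):
            self.disengage(reason=f"setpoint {action.setpoint} outside [{low}, {high}]")
            return False
        return self.engaged

    def disengage(self, reason: str) -> None:
        """Deactivate autonomous control and escalate to a human operator."""
        self.engaged = False
        print(f"SAFETY BRAKE: autonomous control disengaged ({reason})")


# Usage: the AI controller proposes an action; the brake vets it before execution.
brake = SafetyBrake(limits={"feeder_7_load": (0.0, 0.8)})
proposed = GridAction(target="feeder_7_load", setpoint=1.2)
if not brake.approve(proposed):
    print("Action blocked; human operator intervention required.")
```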
3. Technological Architecture of AI
The paper calls for a “legal and regulatory” architecture that reflects the “technology architecture” of AI itself. This means that governments should take a multi-layered approach through policy and place regulatory responsibilities on different actors depending on the role they play in developing and managing AI technology.
What is a technology architecture? The paper explains, “Software companies like Microsoft build a ‘tech stack’ with layers of technologies that are used to build and run the applications that organizations and the public rely upon every day.” The definition of a tech stack may vary across developers. In the report’s illustration of the stack, an AI model like GPT-4 is built by researchers and developers on top of the two layers below it, that is, by utilizing machine learning software and advanced computing capabilities. The topmost layer represents the end-use applications where these AI models are actually put into use.
Microsoft recommends framing laws and regulations that focus on the three layers of the tech stack: the AI Datacentre Infrastructure, powerful pre-trained AI models, and finally, the applications.
a. Applying existing legal protections at the applications layer to the use of AI:
Given that the applications layer will determine how AI is used in different sectors, it is here that people’s rights and safety will be most impacted. Hence, there is a need to govern the application of output from AI models at this stage. The paper observes that existing laws related to privacy, telecom, data, and technology can be applied and enforced to protect people’s rights in the AI sector too. Additionally, identifying the real-world impact of AI on individuals and societies will require a multi-stakeholder approach, including efforts by government as well as private entities, such as:
- Tech companies can begin by assisting customers in the application of best practices to deploy AI lawfully and responsibly.
- Regulatory agencies will need to add new AI expertise and capabilities.
- As Microsoft suggests, companies can also support initiatives to make information about AI technologies and responsible AI practices available and accessible to legislators, judges, and lawyers.
b. Developing new laws and regulations for highly capable AI foundation models
While applications of AI models can be governed by existing laws, the paper highlights the need for new approaches for the other two layers: regulation of powerful pre-trained models and licensing mechanisms for the deployment of these models in advanced data centers. Along with other leading AI developers, Microsoft says it will assist governments with the following:
Knowledge-sharing about advanced AI models: This will help governments define the regulatory threshold. The paper notes that one of the initial challenges will be to define which AI models should be subject to this level of regulation. Microsoft talks about regulating “highly-capable models”, which is a small subset of AI models with advanced capabilities.
Supporting governments to define licensing requirements: This will be applicable for developing or deploying a highly capable AI model. A licensing regime must fulfill safety and security objectives, establish a framework for coordination and information flows between licensees and the regulator, and lastly, provide a “footing for international cooperation between countries” with shared safety and security goals.
Imposing licensing requirements on datacentre operators: The paper notes that AI datacentres are “critical enablers” of highly capable AI models and hence “an effective control point in a comprehensive regulatory regime.” Microsoft states that this can be done by imposing licensing requirements on the operators of AI datacenters that are used for the testing or deployment of AI models. “To obtain a license, an AI datacenter operator would need to satisfy certain technical capabilities around cybersecurity, physical security, safety architecture, and potentially export control compliance,” the paper adds.
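To recap the layered approach described above, here is a minimal, purely illustrative Python sketch (not taken from the report) that maps each layer of the tech stack to the kind of regulatory treatment Microsoft proposes for it; the class and field names are assumptions introduced only for illustration.

```python
from dataclasses import dataclass


@dataclass
class StackLayer:
    """One layer of the AI 'tech stack' and the regulatory approach proposed for it."""
    name: str
    example: str
    proposed_regulation: str


# Illustrative mapping of the report's three layers to its recommendations.
AI_TECH_STACK = [
    StackLayer(
        name="Applications",
        example="End-use applications built on top of AI models",
        proposed_regulation="Apply and enforce existing laws (privacy, telecom, data) to the use of AI output",
    ),
    StackLayer(
        name="Pre-trained foundation models",
        example="Highly capable models such as GPT-4",
        proposed_regulation="New licensing requirements for developing or deploying highly capable models",
    ),
    StackLayer(
        name="AI datacentre infrastructure",
        example="Advanced data centres used to test or deploy such models",
        proposed_regulation="Operator licences covering cybersecurity, physical security, safety architecture, and export control compliance",
    ),
]

for layer in AI_TECH_STACK:
    print(f"{layer.name}: {layer.proposed_regulation}")
```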
4. Non-profit and academic access to AI
The paper notes that the relationship between security and transparency emerges as one of the most critical points of discussion in framing AI-related policies. For instance, confidentiality of the technical design of a particular AI model may be important for security, but transparency of such information may be essential for developing best safety practices. This tension may exist in some cases and not in others.
In the report, Microsoft has committed to the following for promoting transparency:
- Releasing an annual transparency report to inform the public about its policies, systems, progress, and performance in managing AI responsibly and safely.
- Supporting the development of a “national registry of high-risk AI systems” which will be open for inspection so that people can know where and how those systems are in use.
- Commitment to ensure that the company’s AI systems are designed to inform the public when they are interacting with an AI system and that the system’s capabilities and limitations are communicated clearly.
- Requiring AI-generated content to be labeled in important scenarios so that the public “knows the content” it is receiving.
Additionally, in order to encourage public-oriented research that focuses on advancing AI accountability and studying new AI models adopted by industry actors, Microsoft is looking to support the establishment of the National AI Research Resource (NAIRR) in the US to provide computing resources for academic research. The NAIRR services would also be extended to India through the National Data Governance Framework Policy. The company also plans to increase investment in academic research programs and create “free and low-cost” AI resources for the non-profit community.
5. Encouraging public-private partnerships to use AI
The paper emphasizes the need for the public and private sectors to collaborate in order to explore the possibilities of AI technology as well as address its impact on society.
“Important work is needed now to use AI to strengthen democracy and fundamental rights, provide broad access to the AI skills that will promote inclusive growth, and use the power of AI to advance the planet’s sustainability needs. In each area, the key to success will be to develop concrete initiatives and bring governments, industry, and NGOs together to advance them. Microsoft will do its part in each area,” the paper adds.
Where is AI being used in India?
The report lists initiatives and applications across sectors in India, where Microsoft’s AI services are being used. Here are a few important use cases:
- MyGov Saathi: According to the report, MyGov Saathi is a chatbot created in 2020 to communicate key healthcare information to Indian citizens during the COVID-19 pandemic. The chatbot was developed using Microsoft’s Power Virtual Agents and uses AI to answer questions and provide key healthcare resources to its users. Its scope has since been expanded to disseminate information about governance-related services.
- AI4Bharat: Microsoft is collaborating with India’s AI4Bharat, a research lab sponsored by Nandan Nilekani, to train AI models to “recognize, interpret, and transcribe the world’s sign languages”.
- Ashoka Trust for Research in Ecology and Environment (ATREE): As per the report, in order to assist conservation efforts, ATREE is using Azure AI tools to “map and document” the ecology of the northeastern regions of India.
- Jugalbandi: The Indian government collaborated with Microsoft to develop Jugalbandi, a generative AI-powered chatbot, to provide Indians access to information on 171 government programs in 10 of the 22 official Indian languages.
- International Crops Research Institute for the Semi-Arid Tropics (ICRISAT) Sowing App: ICRISAT partnered with Microsoft to develop the AI Sowing App, which is powered by “Microsoft Cortana Intelligence Suite including Machine Learning and Power BI”. The app advises farmers on the best time to begin sowing seeds, without requiring them to invest in “advanced sensors” to predict environmental and groundwater conditions.
