
Microsoft’s Blueprint For AI Governance in India: Testing High-Risk AI Systems, Licensing For Credibility, Promoting Transparency, And More

Among other things, Microsoft suggests developing new approaches and frameworks for regulating highly capable AI foundation models, such as imposing licensing requirements on datacentre operators.

On August 23, Microsoft published a report detailing a blueprint for AI governance in India, centred on a five-point approach to framing laws, policies, and regulations that ensure the accountability of AI systems.

While the first part of the report provides an in-depth explanation of each of the recommended points of regulation, the second part outlines Microsoft’s internal methods for ensuring the responsible development of AI tools. The report also includes case studies from India, where the Indian government as well as non-governmental entities have partnered with Microsoft to develop AI-enabled tools in different sectors.

What does the five-point blueprint recommend?

As countries work to develop frameworks for AI governance, Microsoft observes that there is no one right way to do so, and that a multi-layered approach is essential to address the questions and concerns the technology currently raises.

1. Leveraging existing frameworks for AI risk identification and mitigation

Firstly, Microsoft recommends building upon existing or emerging governmental frameworks for ensuring AI safety. “A key element to ensuring the safer use of this technology is a risk-based approach, with defined processes around risk identification and mitigation as well as testing systems before deployment,” the report states.

For example, the ‘AI Risk Management Framework’ developed by the U.S. National Institute of Standards and Technology (NIST) provides a template for identifying the risks posed by an AI system and defining processes for testing as well as mitigation. Similarly, India’s National Strategy for Artificial Intelligence identifies key challenges for AI adoption in India that may be relevant for other countries too, such as a lack of skilled expertise, inadequate investment, and the need for data governance regulation.

The paper also emphasizes improving existing government procurement mechanisms to assess the quality of products for development of AI systems. “Governments could explore inserting requirements related to the AI Risk Management Framework or other relevant international standards into their procurement processes for AI systems, with an initial focus on critical decision systems that have the potential to meaningfully impact the public’s rights, opportunities, or access to critical resources or services,” the paper notes.


2. Regulating AI systems that control critical infrastructure

The report calls for implementing “safety brakes” to ensure that AI systems that manage critical infrastructure and, thus, may pose significant risks to the public, remain under human control. These may include systems that manage or control “infrastructure systems for electricity grids, the water system, emergency responses, and traffic flows in our cities.”

Defining High-Risk AI Systems: The report recommends that the government define a class of high-risk AI systems and deploy safety brakes as part of a “comprehensive approach to system safety.” It is important to note that even within critical infrastructure sectors, there may be low-risk systems that do not require the same level of safety measures. Governments can focus on AI systems that affect large-scale networked systems, operate autonomously or semi-autonomously, and pose a risk of significant harm, whether physical, economic, or environmental.

Built-in safety brakes for systems: “While the implementation of ‘safety brakes’ will vary across different systems, a core design principle in all cases is that the system should possess the ability to detect and avoid unintended consequences, and it must have the ability to disengage or deactivate in the event that it demonstrates unintended behavior,” the report notes. In essence, developers must be required to build safety brakes into AI systems by design across sectors.
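The design principle quoted above can be illustrated with a minimal sketch. This is a hypothetical illustration, not an implementation from the report: the class name, envelope bounds, and fallback behaviour are all assumptions. The idea is simply that an AI controller’s actions are checked against a validated operating envelope, and any out-of-bounds action disengages the AI in favour of a deterministic, human-auditable fallback.

```python
# Hypothetical sketch of a "safety brake" (not from Microsoft's report):
# an AI controller's actions are validated against a tested operating
# envelope; any out-of-bounds action disengages the AI system and hands
# control to a deterministic, human-auditable fallback.

class SafetyBrake:
    def __init__(self, controller, lower=0.0, upper=100.0):
        self.controller = controller            # the AI policy being supervised
        self.lower, self.upper = lower, upper   # validated operating envelope
        self.engaged = False                    # True once the brake has tripped

    def act(self, observation):
        if self.engaged:                        # stay disengaged until a human resets
            return self.safe_fallback(observation)
        action = self.controller(observation)
        # Detect unintended behaviour: an action outside the tested envelope.
        if not (self.lower <= action <= self.upper):
            self.engaged = True                 # deactivate the AI system
            return self.safe_fallback(observation)
        return action

    def safe_fallback(self, observation):
        # Deterministic behaviour a human operator can reason about,
        # e.g. hold a known-safe setpoint.
        return self.lower


# A misbehaving controller trips the brake, and control degrades safely.
brake = SafetyBrake(controller=lambda obs: obs * 10, lower=0.0, upper=100.0)
print(brake.act(5))    # 50 is inside the envelope: the AI stays in control
print(brake.act(50))   # 500 exceeds the envelope: brake engages, prints 0.0
print(brake.engaged)   # True
```

The key property, per the report’s wording, is the second “separate layer of protection”: once the brake trips, control no longer depends on the AI behaving well.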

Testing and monitoring high-risk systems: Operators must be required to test and monitor high-risk systems to ensure their functioning remains under human control. The report adds that the level of testing may depend on how advanced the use of a product or service is. Rigorously testing, verifying, and validating systems will require regular coordination between operators, AI infrastructure providers, and the “regulatory oversight bodies”.

Establishing licensing mechanisms for critical AI systems: Microsoft states that AI systems that control critical infrastructure must be deployed only in “licensed AI infrastructure.” “Critical infrastructure operators might build AI infrastructure and qualify for such a license in their own right. But to obtain such a license, the AI infrastructure operator should be required to design and operate their system to allow another intervention point—in effect, a second and separate layer of protection— for ensuring human control in the event that application-level measures fail,” the paper recommends.



3. Technological Architecture of AI

The paper calls for a “legal and regulatory” architecture that reflects the “technology architecture” of AI itself. In other words, governments should take a multi-layered policy approach, placing regulatory responsibilities on different actors depending on the role they play in developing and managing AI technology.

What is a technology architecture? The paper explains, “Software companies like Microsoft build a “tech stack” with layers of technologies that are used to build and run the applications that organizations and the public rely upon every day.” The definition of a tech stack may vary across developers. In the chart below, an AI model like GPT-4 is built by researchers and developers on top of the two layers beneath it, that is, by utilizing machine learning software and advanced computing capabilities. The top layer represents the end-use applications where these AI models are actually put into use.

Image Source: Microsoft’s paper, ‘Governing AI: A Blueprint for India’

Microsoft recommends framing laws and regulations that focus on the three layers of the tech stack: the AI Datacentre Infrastructure, powerful pre-trained AI models, and finally, the applications.


Image Source: Microsoft’s paper, ‘Governing AI: A Blueprint for India’

a. Applying existing legal protections at the applications layer to the use of AI:

Given that the applications layer determines how AI is used in different sectors, it is here that people’s rights and safety will be most affected. Hence, there is a need to govern the application of output from AI models at this layer. The paper observes that existing laws related to privacy, telecom, data, and technology can be applied and enforced to protect people’s rights in the AI sector too. Additionally, identifying the real-world impact of AI on individuals and societies will need a multi-stakeholder approach, including efforts by the government as well as private entities, such as:

  • Tech companies can begin by assisting customers in the application of best practices to deploy AI lawfully and responsibly.
  • Regulatory agencies will need to add new AI expertise and capabilities.
  • As Microsoft suggests, companies can also support initiatives to make information about AI technologies and responsible AI practices available and accessible to legislators, judges, and lawyers.

b. Developing new laws and regulations for highly capable AI foundation models

While the application of AI models can be governed by existing laws, the paper highlights the need for new approaches at the two remaining layers: regulation of powerful pre-trained models, and licensing mechanisms for the deployment of these models in advanced datacentres. Along with other leading AI developers, Microsoft offers to assist governments with the following:

Knowledge-sharing about advanced AI models: This will help governments define the regulatory threshold. The paper notes that one of the initial challenges will be to define which AI models should be subject to this level of regulation. Microsoft talks about regulating “highly-capable models”, which is a small subset of AI models with advanced capabilities.

Supporting governments to define licensing requirements: These would apply to the development or deployment of a highly capable AI model. A licensing regime must fulfill safety and security objectives, establish a framework for coordination and information flows between licensees and the regulator, and lastly, provide a “footing for international cooperation between countries” with shared safety and security goals.

Imposing licensing requirements on datacentre operators: The paper notes that AI datacentres are “critical enablers” of highly capable AI models and hence “an effective control point in a comprehensive regulatory regime.” Microsoft states that this can be done by imposing licensing requirements on the operators of AI datacentres that are used for the testing or deployment of AI models. “To obtain a license, an AI datacenter operator would need to satisfy certain technical capabilities around cybersecurity, physical security, safety architecture, and potentially export control compliance,” the paper adds.


4. Non-profit and academic access to AI

The paper notes that the relationship between security and transparency is one of the most critical points of discussion in framing AI-related policies. For instance, confidentiality of the technical design of a particular AI model may be important for security, while transparency of that same information may be essential for developing best safety practices. This tension may exist in some cases and not in others.

In the report, Microsoft has committed to the following for promoting transparency:

  1. Releasing an annual transparency report to inform the public about its policies, systems, progress, and performance in managing AI responsibly and safely.
  2. Supporting the development of a “national registry of high-risk AI systems” which will be open for inspection so that people can know where and how those systems are in use.
  3. Ensuring that the company’s AI systems are designed to inform the public when they are interacting with an AI system, and that the system’s capabilities and limitations are communicated clearly.
  4. Requiring AI-generated content to be labeled in important scenarios so that the public “knows the content” it is receiving.

Additionally, to encourage public-oriented research that advances AI accountability and studies the new AI models adopted by industry actors, Microsoft is looking to support the establishment of the National AI Research Resource (NAIRR) in the US, which would provide computing resources for academic research. The NAIRR services would also be extended to India through the National Data Governance Framework Policy. The company also plans to increase investment in academic research programs and create “free and low-cost” AI resources for the non-profit community.

5. Encouraging public-private partnerships to use AI 

The paper urges the public and private sectors to collaborate in order to explore the possibilities of AI technology as well as address its impact on society.

“Important work is needed now to use AI to strengthen democracy and fundamental rights, provide broad access to the AI skills that will promote inclusive growth, and use the power of AI to advance the planet’s sustainability needs. In each area, the key to success will be to develop concrete initiatives and bring governments, industry, and NGOs together to advance them. Microsoft will do its part in each area,” the paper adds.

Where is AI being used in India?

The report lists initiatives and applications across sectors in India, where Microsoft’s AI services are being used. Here are a few important use cases:

  1. MyGov Saathi: According to the report, MyGov Saathi is a chatbot created in 2020 to communicate key healthcare information to Indian citizens during the COVID-19 pandemic. Built using Microsoft’s Power Virtual Agents, the chatbot uses AI to answer questions and provide key healthcare resources to its users. Its scope has since been expanded to disseminate information about governance-related services.
  2. AI4Bharat: Microsoft is collaborating with India’s AI4Bharat, a research lab sponsored by Nandan Nilekani, to train AI models to “recognize, interpret, and transcribe the world’s sign languages”.
  3. Ashoka Trust for Research in Ecology and Environment (ATREE): As per the report, in order to assist conservation efforts, ATREE is using Azure AI tools to “map and document” the ecology of the northeastern regions of India.
  4. Jugalbandi: The Indian government collaborated with Microsoft to develop Jugalbandi, a generative AI-powered chatbot, to give Indians access to information on 171 government programs in 10 of the 22 official Indian languages.
  5. International Crops Research Institute for the Semi-Arid Tropics (ICRISAT) Sowing App: ICRISAT partnered with Microsoft to develop the AI Sowing App, which is powered by “Microsoft Cortana Intelligence Suite including Machine Learning and Power BI”. The app advises farmers on the best time to begin sowing seeds, without requiring them to invest in “advanced sensors” to predict environmental and groundwater conditions.



MediaNama is the premier source of information and analysis on Technology Policy in India. More about MediaNama, and contact information, here.

© 2008-2021 Mixed Bag Media Pvt. Ltd. Developed By PixelVJ
