The Department of Telecommunications’ newly released framework for the Indian AI Stack admitted that the AI stack will suffer from algorithmic bias if “contaminated” data is ingested into it. Incidentally, the paper suggested that having “open” AI algorithms and “centrally controlling” data are among the ways to prevent algorithmic bias. The proposed Indian AI stack is essentially a six-layered stack, with each layer handling different functions, including consent gathering, storage, and AI/ML analytics.

The paper said that once the stack is fully developed, it will be structured across all sectors and will cover data protection, data minimisation, open algorithm frameworks, defined data structures, trustworthiness and digital rights, and data federation (a single database source for front-end applications), among other things. Comments on the draft paper can be emailed to aigroup-dot@gov.in or diradmnap-dot@gov.in until October 3.

The DoT’s AI Standardisation Committee, which released this draft, had in October last year invited papers on Artificial Intelligence addressing different aspects of AI, such as functional network architecture, AI architecture, and the data structures required, among other things. At the time, the DoT had said that as the proliferation of AI increases, there is a need to develop an Indian AI stack to bring interoperability, among other things. The committee is headed by A. Robert J. Ravi, deputy director general of the Andhra Pradesh Licensed Service Area under the DoT, and has 11 members, all from the government; there are no private individuals on the committee. At the moment, it is unclear which entities submitted papers and how much of that input was used in DoT’s draft paper on the AI Stack.

Why we need an AI Stack, according to the paper: “In the near future, AI will have huge implications on the country’s security, its economic activities and the society. The risks are unpredictable and unprecedented. Therefore, it is imperative for all countries including India to develop a stack that fits into a standard model, which protects customers; users; business establishments and the government,” the paper said. It said that going forward, AI will have an impact on several industries, such as manufacturing, healthcare, and banking, among others. For governments, it can help in rectifying cybersecurity attacks within hours rather than months, and national spending patterns can be monitored in real time to instantly gauge inflation levels while collecting indirect taxes.

Dealing with algorithmic bias

In AI, the thrust is on how efficiently data is used, the paper said, noting that if the data is “garbage” then the output will also be so. “There is a need for evolving ethical standards, trustworthiness, and consent framework to get data validation from users,” the paper suggested. The paper said that by opening existing AI through open-sourcing code and placing related intellectual property into the public domain, “we can accelerate the diffusion and application of current techniques”.

It also suggested that data should be properly stored, because contaminated data is a significant contributor to bias in AI systems. Apart from that, it said that there is a need to change the “culture” so that coders and developers themselves recognise the “harmful and consequential” implications of biases.
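The paper stops at these recommendations and does not prescribe how such data checks would be implemented. As a purely illustrative sketch (not anything from the DoT draft), the Python snippet below shows one simple way an ingestion pipeline could flag skewed, or “contaminated”, training data before it reaches a model; the field name "region" and the 20% representation threshold are hypothetical.

```python
"""Hypothetical illustration only: the field names ("region", "consent") and the
20% representation threshold are assumptions, not taken from the DoT paper."""

from collections import Counter


def flag_contaminated(records: list[dict], group_field: str = "region",
                      min_share: float = 0.20) -> list[str]:
    """Return warnings for groups that are badly under-represented in the data,
    one common way 'contaminated' or skewed data feeds algorithmic bias."""
    total = len(records)
    if total == 0:
        return ["dataset is empty"]

    warnings = []
    counts = Counter(r.get(group_field, "unknown") for r in records)
    for group, count in counts.items():
        share = count / total
        if share < min_share:
            warnings.append(f"group '{group}' makes up only {share:.0%} of the data")
    return warnings


if __name__ == "__main__":
    # 9 records from one group and 1 from another: the second gets flagged.
    sample = [{"region": "north"}] * 9 + [{"region": "south"}]
    for warning in flag_contaminated(sample):
        print("WARNING:", warning)
```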

MEITY, NITI Aayog also working on AI

DoT’s draft comes after the NITI Aayog released a draft working document on responsible AI in June. Before that, the government had formed a committee to resolve differences over the AI mission between MEITY and NITI Aayog. It has been tasked with removing duplication of work between various government arms and resolving the overlap between MEITY’s and NITI Aayog’s plans for AI. It will also specify the roles of different agencies to fast-track the implementation of the AI missions.

MEITY had, in 2018, formed four committees on AI, covering citizen-centric data services; skilling, reskilling and R&D; and legal, regulatory and cybersecurity issues. Each of these committees had released a draft paper, which can be found here. The Ministry of Commerce and Industry had also formed a task force on AI, which was headed by IIT Madras’ V. Kamakoti.

The proposed AI stack

  1. Infrastructure layer
    • Ensures the setting up of a common data controller, including in multi-cloud scenarios (private and public)
    • Ensures federation, encryption and minimisation at the cloud end
    • Ensures monitoring and privacy of the data stored.
  2. Storage layer
    • Ensures that the data is properly archived and stored in a fashion that allows easy access when queried
    • The paper called this the most important layer in the stack, regardless of the size and type of data, since value can only be derived from data once it is processed, and data can only be processed efficiently when it is stored properly.
  3. Compute layer
    • Ensures proper AI & ML analytics
    • Provides a template for data access and processing to ensure an open algorithm framework is in place
    • Supports Natural Language Processing and decision trees
    • Includes deep learning and neural networks, and predictive and cognitive models
  4. Application layer
    • This layer ensures that the backend services are properly and legitimately programmed
    • Develops a proper service framework
    • Ensures proper transaction movement, and that proper logging and management are put in place for auditing, if required at any point in time.
  5. Data / information exchange layer
    • Provides the end-customer interface
    • Houses the consent framework for obtaining data consent from customers
    • Provides various services through secured gateway services
    • Ensures that digital rights are protected and ethical standards are maintained
    • Provides open API access to the data, along with access for chatbots and various AI/ML apps.
  6. Security and governance layer (vertical layer)
    • This is a cross-cutting layer across all the above layers that ensures AI services are safe, secure, privacy-protected, trusted, and assured.
    • There will be an “overwhelming flow” through the stack, which is why there is a need to ensure encryption at different levels, the paper said. (A rough, purely illustrative sketch of how these layers could fit together follows below.)
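The draft describes these layers only at the level of functions, not implementation. As a minimal, hypothetical Python sketch, the snippet below shows how a record might pass through a few of the layers (consent collection at the exchange layer, the cross-cutting security layer, storage, and compute); every class, method and field name here is an assumption made for illustration, not part of the DoT specification.

```python
"""Illustrative sketch only: the layer names follow the DoT draft, but every
class, method and field below is a hypothetical stand-in, not an official spec."""

import hashlib
import json
from typing import Optional


class SecurityGovernanceLayer:
    """Vertical layer: cross-cutting integrity/audit checks applied at every level."""

    @staticmethod
    def fingerprint(payload: dict) -> str:
        # Stand-in for real encryption/signing: hash the serialised payload.
        return hashlib.sha256(json.dumps(payload, sort_keys=True).encode()).hexdigest()


class DataExchangeLayer:
    """Customer-facing layer: gathers consent before anything is ingested."""

    @staticmethod
    def collect(record: dict) -> Optional[dict]:
        # No consent, no ingestion.
        return record if record.get("consent_given") else None


class StorageLayer:
    """Archives records so they can be queried later by the compute layer."""

    def __init__(self) -> None:
        self._archive: list[dict] = []

    def store(self, record: dict) -> None:
        self._archive.append(record)

    def query(self) -> list[dict]:
        return list(self._archive)


class ComputeLayer:
    """Runs the AI/ML analytics over whatever the storage layer returns."""

    @staticmethod
    def analyse(records: list[dict]) -> float:
        amounts = [r["amount"] for r in records]
        return sum(amounts) / len(amounts) if amounts else 0.0


if __name__ == "__main__":
    exchange, storage, compute = DataExchangeLayer(), StorageLayer(), ComputeLayer()

    incoming = [
        {"user": "a", "amount": 120.0, "consent_given": True},
        {"user": "b", "amount": 80.0, "consent_given": False},  # dropped: no consent
    ]
    for raw in incoming:
        accepted = exchange.collect(raw)
        if accepted is not None:
            accepted["fingerprint"] = SecurityGovernanceLayer.fingerprint(accepted)
            storage.store(accepted)

    print("Mean amount over consented records:", compute.analyse(storage.query()))
```

The point of the sketch is only the layering: consent is checked at the exchange layer, the security and governance layer stamps every record before it is stored, and the compute layer sees nothing the customer did not agree to share.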

What the proposed AI stack looks like
