Summary: Framework for Fairness Assessment of Artificial Intelligence/ Machine Learning Systems

Stakeholders have been asked to weigh in on black-box testing, fairness ratings, and procedures for handling code, training datasets, and more.

The Telecommunication Engineering Centre (TEC) is conducting a fairness assessment study of Artificial Intelligence/ Machine Learning systems, according to a document released by the agency working under the Department of Telecommunications (DoT).

TEC said that AI/ML systems are now present in all aspects of our lives and that there is a need to build public trust in them. Researchers worldwide have found unintended biases in these systems which go against the expectations of users, it added.

“Bias in AI/ML Systems raises various ethical, social and legal issues. When AI/ML Systems are used for e-governance or by the judiciary, checking for their fairness would become a legal requirement. Therefore, one important requirement of responsible AI is that the AI/ML systems should be unbiased or fair.” — TEC

The agency wrote that its assessment will act as a benchmark for fairness, which will help researchers, college students, and government organisations. It added that start-ups, MSMEs, and large enterprises will also benefit from this exercise, as their products will be more credible and acceptable if they are assessed and certified by a neutral government agency.

AI/ML applications have become ubiquitous in recent years as the technology gets adopted across domains such as healthcare, smart homes, finance, and defence, among others. A benchmark will go a long way in helping people make informed decisions. It will also bring parity to an industry that remains unregulated in India.

TEC is inviting suggestions from stakeholders for framing procedures to assess the fairness of different types of AI/ML systems. Feedback can be sent to avinash.70@gov.in and adic1.tec@gov.in until March 8, 2022.


What is the proposed framework for fairness assessment of AI/ML systems?

Types of AI/ML Systems: TEC wrote that there can be different ways to classify AI/ML systems based on the ML algorithms used by companies. The body is asking stakeholders whether a classification based on the machine learning algorithms used is suitable for the assessment. Such a classification could include:

  • supervised
  • semi-supervised
  • unsupervised
  • reinforcement learning systems

Supervised learning systems: "Supervised learning systems learn from labelled datasets and are used to classify data or predict outcomes of unforeseen data accurately," the document read. They are deployed for purposes such as prediction and forecasting, face detection, and handwriting/signature recognition. They can also be applied to identifying fraudulent benefits claims, CCTV surveillance, social media sentiment analysis, spam detection, weather forecasting, and stock price prediction. TEC wants to ascertain whether a single assessment procedure is sufficient for assessing the fairness of various supervised learning systems.
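To make the category concrete, here is a minimal, hypothetical sketch of a supervised learning system: a classifier trained on labelled examples using scikit-learn. The dataset, model choice, and parameters are illustrative assumptions and do not come from the TEC document.

```python
# Minimal, illustrative supervised-learning example (assumed setup, not from the TEC document).
# A classifier learns from labelled examples and then predicts labels for unseen data.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

# Synthetic labelled dataset: each row is a feature vector, y holds the known labels.
X, y = make_classification(n_samples=1000, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)              # learn from labelled data
predictions = model.predict(X_test)      # predict outcomes for unseen data
print("Accuracy:", accuracy_score(y_test, predictions))
```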

  • Looking for possible biases: The government body listed some possible biases, including selection bias, measurement bias, recall bias, observer bias, and exclusion bias. “The biases could be due to datasets used for training, algorithms and/ or usage of the system,” read the document. It has asked stakeholders to further identify types of unintended biases that should be taken into account while assessing fairness; a rough sketch of how one such bias could be checked follows below.
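
As a rough illustration of how one of these biases might surface, the sketch below checks for a simple form of selection bias by comparing the share of a sensitive group in a training dataset against an assumed population share. The column name, data, tolerance, and population figures are hypothetical, not taken from the TEC document.

```python
import pandas as pd

# Hypothetical training dataset with a sensitive attribute column named "gender";
# both the data and the assumed population shares are illustrative.
train = pd.DataFrame({
    "gender": ["F", "M", "M", "M", "F", "M", "M", "M", "F", "M"],
    "label":  [1, 0, 1, 0, 1, 1, 0, 1, 0, 1],
})

population_share = {"F": 0.50, "M": 0.50}        # assumed population proportions
sample_share = train["gender"].value_counts(normalize=True)

for group, expected in population_share.items():
    observed = sample_share.get(group, 0.0)
    if abs(observed - expected) > 0.10:          # arbitrary tolerance for the sketch
        print(f"Possible selection bias: {group} is {observed:.0%} of training data "
              f"vs. {expected:.0%} of the population")
```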

Establishing fairness parameters: TEC wrote that fairness metrics provide a mathematical definition of fairness. Some of the metrics detailed are:

  • demographic parity
  • equal opportunity
  • equal mis-opportunity
  • average odds

Stakeholders have been asked to provide parameters appropriate for measuring the bias of supervised learning AI/ML systems.
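
The TEC document does not spell out formulas for these metrics, so the sketch below uses the commonly cited definitions: demographic parity compares positive-prediction rates across groups, and equal opportunity compares true-positive rates. The predictions and the sensitive attribute are made-up illustrative data.

```python
import numpy as np

def demographic_parity_difference(y_pred, group):
    """Difference in positive-prediction rates between groups (commonly used definition)."""
    rates = [y_pred[group == g].mean() for g in np.unique(group)]
    return max(rates) - min(rates)

def equal_opportunity_difference(y_true, y_pred, group):
    """Difference in true-positive rates between groups (commonly used definition)."""
    tprs = []
    for g in np.unique(group):
        mask = (group == g) & (y_true == 1)
        tprs.append(y_pred[mask].mean())
    return max(tprs) - min(tprs)

# Hypothetical predictions and sensitive attribute, for illustration only.
y_true = np.array([1, 0, 1, 1, 0, 1, 0, 1])
y_pred = np.array([1, 0, 1, 0, 0, 1, 1, 1])
group  = np.array(["A", "A", "A", "A", "B", "B", "B", "B"])

print("Demographic parity difference:", demographic_parity_difference(y_pred, group))
print("Equal opportunity difference:", equal_opportunity_difference(y_true, y_pred, group))
```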

Standardised procedure for fairness assessment: TEC was of the opinion that there was a need to standardise the assessment process. “The standardised process could be used by the developers for self-assessment as well as by third-party auditors for assessment and certification,” the document read. TEC wants to know what should be included in the procedure for a fairness rating.

White-box vs Black-box testing: White-box testing entails the assessing agency accessing the training datasets and the implementation details of the AI/ML system. Black-box testing covers situations where the training datasets and implementation details are not available to the assessing agency. Stakeholders have been asked to indicate whether a separate procedure is required for black-box testing.
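
A minimal sketch of what black-box probing could look like, assuming the assessor can only call the system's prediction interface: synthetic inputs are submitted and positive-prediction rates are compared across an assumed sensitive attribute. The predict() stand-in, probe data, and attribute are all hypothetical.

```python
import numpy as np

# In black-box testing, the assessor can only query the system for predictions;
# training data and internals are not available. predict() below is an assumed
# stand-in for the opaque system under assessment.
def predict(inputs):
    return (inputs[:, 0] > 0.5).astype(int)

rng = np.random.default_rng(0)
probe = rng.random((1000, 5))                 # synthetic probe inputs
group = rng.choice(["A", "B"], size=1000)     # assumed sensitive attribute of each probe

outputs = predict(probe)
rate_a = outputs[group == "A"].mean()
rate_b = outputs[group == "B"].mean()
print("Positive rate (A):", rate_a, "Positive rate (B):", rate_b)
print("Demographic parity gap estimated from probes:", abs(rate_a - rate_b))
```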

Procedures for handling code by the assessing agency: “It is important that standard procedures are in place for safe handling of sensitive material such as code, datasets, etc. when the assessment is to be carried out by a third-party,” the document stated. It has called for best practices for handling code, training datasets, etc. from stakeholders.

What about semi-supervised, unsupervised, and reinforcement learning systems?

Unsupervised learning systems use unlabelled data to identify patterns, whereas semi-supervised learning systems are a mix of the supervised and unsupervised approaches.
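
For contrast with the supervised example earlier, here is a minimal, assumed sketch of an unsupervised learner: a clustering algorithm grouping unlabelled points on its own. The data and the number of clusters are illustrative, not from the TEC document.

```python
import numpy as np
from sklearn.cluster import KMeans

# Illustrative unsupervised-learning example: no labels are given; the algorithm
# groups similar points by itself (data and cluster count are assumptions).
X = np.vstack([np.random.randn(50, 2), np.random.randn(50, 2) + 5])
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print("Cluster sizes:", np.bincount(clusters))
```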

“Reinforcement learning generally learns from new situations using a trial-and-error method,” the document explained.

Stakeholders' opinion has been sought on whether the fairness assessment procedure framed for supervised learning systems will work for assessing other types of AI/ML systems, or whether different procedures are required.
