“One of the questions we often ask ourselves, how do we know whether the AI is trustworthy or not?” said one research scientist while discussing the ethical and societal implications of AI.
Bias in artificial intelligence (AI) algorithms is a reality. It was first highlighted by research from the Massachusetts Institute of Technology (MIT), which examined facial analysis software and found that the algorithms produced far higher error rates for dark-skinned women than for light-skinned men.
However, despite the cautionary nature of this study and the many studies published later, the usage of AI has increased exponentially across the world. In India, the spike in deployment of AI and facial recognition surveillance systems is going unchecked due to the absence of data protection laws. MediaNama recently highlighted how facial recognition and AI-based video analytics will be a key aspect of detecting “suspects” as part of the Lucknow Safe City Project.
Authorities in India finally seem to be waking up to the necessity of having ‘ethics’ in the usage of AI. Recently, the National Telecommunications Institute for Policy Research, Innovation and Training (NTIPRIT) in the Department of Telecommunications (under the Ministry of Communications) organised a webinar on “AI and Ethical Issues”. While the usage of AI is becoming more extensive across government verticals like agriculture, industry, education, and so on, questions arise as to why the DoT is hosting a webinar on AI and ethics, and whether the topic wouldn’t be better suited to the Ministry of Electronics and Information Technology (MeitY), which is responsible for IT-related developments in the country.
In the webinar, Jesmin Jahan Tithi, Research Scientist (AI/ML) at Intel, proposed the usage of algorithm auditing mechanisms to identify bias and malpractices in AI algorithms. She, along with a group of researchers, developed “Z-Inspection”, an inspection process for AI which can be applied to domains such as business, healthcare, the public sector, etc.
“There is a growing concern on the ethical and societal implications of AI. One of the questions we often ask ourselves, how do we know whether the AI is trustworthy or not? The more basic question is, what is the basis of trust. Although the definition of trust can vary based on domain, we can use the definition given by the European Union High Level Expert Group, and according to them, an AI is trustworthy if it is lawful, respecting all applicable laws and regulations, if it is robust (both from a technical and social perspective) and if it is ethical (respecting all ethical values).” — Jesmin Jahan Tithi
“You might ask, why do we need an inspection or auditing process. Although self-assessment is always welcome, the problem with it is that there often may be conflicts of interest. The individual or group that is building the AI might not see the negative impact of the innovation and thus may need a third party to do the inspection. Also, evaluation of ethical impact is challenging because evaluating such impact involves gauging the impact of the AI not only on the person, but also on those who are affected indirectly such as their friends, families and so on,” she added while justifying the development of the AI auditing process Z-Inspection.
How does it work?
First, a protocol log of the process is created in the software. It contains information such as —
- Information on the team of experts
- Actions performed as part of each investigation
- Steps done in data preparation and analyses
- Steps to perform use case evaluation with tools
Tithi said that the protocol can be shared with relevant stakeholders at any time to ensure transparency of the process.
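The protocol log described above can be sketched as a simple record structure. This is a hypothetical illustration only; Z-Inspection does not prescribe a particular schema, and all field names here are assumptions:

```python
from dataclasses import dataclass, field
from typing import List


@dataclass
class ProtocolLog:
    """Hypothetical record of a Z-Inspection run, mirroring the items listed above."""
    experts: List[str] = field(default_factory=list)           # the team of experts
    actions: List[str] = field(default_factory=list)           # actions per investigation
    data_prep_steps: List[str] = field(default_factory=list)   # data preparation and analyses
    evaluation_steps: List[str] = field(default_factory=list)  # use-case evaluation with tools

    def share(self) -> dict:
        """Export the log so it can be shared with stakeholders at any time."""
        return {
            "experts": self.experts,
            "actions": self.actions,
            "data_preparation": self.data_prep_steps,
            "use_case_evaluation": self.evaluation_steps,
        }
```

Keeping the log as a plain, exportable record is what makes the transparency claim practical: any stakeholder can be handed the same view of the process at any point.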
Second, Tithi and her team define a catalogue of questions to clarify expectations between the stakeholders. These include —
- Who requested the inspection?
- For whom is the inspection relevant?
- Is it recommended or required?
- What are the conditions that need to be analysed?
- How will the results of the inspection be used?
Third, the time frame of the assessment is decided. Tithi and other researchers have come up with three different time-scales based on —
- Present challenges: The risks which exist today
- Near future challenges: Risks in the near future with existing technology
- Long-run challenges: Risk and challenges far in the future as technology becomes more advanced
Identifying ethical issues and tensions
Following that, socio-technical scenarios are created and analysed by the team in order to capture —
- The aim of the AI systems
- The actors involved in the AI, their expectations, and interactions
- The processes where the AI systems are used
“The basic idea is to analyze the AI system by using socio-technical scenarios with relevant stakeholders, including designers (when available), domain, technical, legal, and ethics experts. In Z-Inspection, the scenarios are used by a team of inspectors, to identify a list of potential ethical, technical, and legal issues that need to be further deliberated,” Tithi said.
Next, the stakeholders identify the ethical issues, tensions, and flags. Tithi described an ethical issue or tension as “tensions between the pursuit of different values in technological applications rather than an abstract tension between the values themselves”. A flag is an issue that needs further assessment.
“This is done by a selected number of members of the inspection team, with interdisciplinary skills, e.g., experts in ethics, philosophy, policy, law, domain experts, and ML. Such a variety of backgrounds is necessary to identify all aspects of the ethical implications of the use of the AI,” a paper by Z-Inspection said.
Ethical tensions are classified as —
- True dilemma: A conflict between two or more duties, obligations, or values, each of which an agent would ordinarily have reason to pursue but cannot pursue at once.
- Dilemma in practice: Tension exists not inherently but due to current technological capabilities and constraints, including the time and resources available for finding a solution.
- False dilemmas: Situations where there exists a third set of options beyond having to choose between two important values.
Tithi added two more requirements to this, namely —
- Avoiding concentration of power
- Assessing if ecosystems respect values of Western European democracy
Case study: Usage of AI for predicting cardiovascular risk
During the webinar, the Intel researcher said that Z-Inspection was used to evaluate a non-invasive AI medical device designed to assist doctors in the diagnosis of cardiovascular diseases (CVD).
“CVDs are the number one cause of death globally, taking an estimated 17.9 million lives each year. Over the past decade, several ML techniques have been used for CVD diagnosis and prediction. The potential of AI in cardiovascular medicine is high; however, ignorance of the challenges may overshadow its potential clinical impact,” a paper on Z-Inspection said.
Tithi said that the product they assessed was a non-invasive Class 1 AI medical device that uses machine learning to analyse sensor data (i.e., electrical signals of the heart) to predict the risk of cardiovascular heart disease. The company uses a traditional ML pipeline approach, which transforms raw data into features that better represent the predictive task, she said.
The machine gives an output visualised as “Red” for predicted risk of a CVD and “Green” for predicted absence of a CVD. Later, based on customer feedback, the company added “Yellow” to indicate a generic, non-specified CVD issue.
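As a rough sketch, the traffic-light output described above can be thought of as thresholding the model's risk prediction. The thresholds and function name here are illustrative assumptions, not the company's actual logic:

```python
def traffic_light(risk_score: float, low: float = 0.3, high: float = 0.7) -> str:
    """Map a CVD risk probability in [0, 1] to the device's colour output.

    Thresholds are hypothetical; "Yellow" is the intermediate band the
    company added after customer feedback.
    """
    if not 0.0 <= risk_score <= 1.0:
        raise ValueError("risk score must be a probability in [0, 1]")
    if risk_score < low:
        return "Green"   # predicted absence of a CVD
    if risk_score < high:
        return "Yellow"  # generic, non-specified CVD issue
    return "Red"         # predicted risk of a CVD
```

Collapsing a continuous risk estimate into two or three colours is exactly where the ambiguities flagged by Z-Inspection arise: the thresholds decide what a patient is told, yet they are invisible in the output.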
The problems that Z-Inspection identified: “When a patient is faced by only a green or red result, ambiguities surge. What level of risk does red actually imply? Why? When it comes to living, is it sometimes better to not know? Does the addition of a yellow, intermediary score — which the company did add — resolve any of these questions and dilemmas, or just make them worse? Because of the geography, it seemed possible that some races may have been over- or under-represented in the training data. Should patients from the overrepresented race(s) wait to use the technology until other races have been verified as fully included in the training data?” she asked.
Z-Inspection’s recommendations for the device included —
- Consider continuously evaluated metrics with automated alerts.
- Consider a formal clinical trial design to assess patient outcomes.
- Consider periodically collected feedback from clinicians and patients.
- Consider establishing an evaluation protocol that is clearly explained to users.
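The first recommendation — continuously evaluated metrics with automated alerts — can be sketched minimally as a sliding-window accuracy check. The field names and threshold are assumptions for illustration, not part of Z-Inspection:

```python
def check_metrics(window: list, alert_threshold: float = 0.85):
    """Evaluate accuracy over a window of recent predictions and flag drift.

    `window` holds dicts with hypothetical keys 'predicted' and 'actual'
    (e.g. the device's colour output vs. a later clinical diagnosis).
    Returns None when there is nothing to evaluate yet.
    """
    if not window:
        return None
    correct = sum(1 for record in window if record["predicted"] == record["actual"])
    accuracy = correct / len(window)
    # An automated alert fires when live accuracy drops below the threshold.
    return {"accuracy": accuracy, "alert": accuracy < alert_threshold}
```

In a deployed system this check would run on a schedule against fresh outcome data, with the alert routed to clinicians or the vendor — which is what turns a one-off audit into ongoing monitoring.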
Others in the webinar included UK Shrivastava, Sr DDG and head of NTIPRIT; C Srinivas, DDG (ICT), NTIPRIT; and Jayaram Beladekere, from the technical product management function at Ericsson’s Global AI Accelerator in India. While Beladekere talked about how Ericsson was using AI in its various processes including network connectivity, the NTIPRIT speakers talked about AI and ethics in general.
- Event Report: Impact of Data Policies on Artificial Intelligence
- Hyderabad Deploys Artificial Intelligence tools across 2,000 CCTVs to identify mask violators