
UN Internet Governance Forum: Risks and regulation of AI in healthcare

“We [as a society] clearly want to have more AI applications to solve serious societal problems, such as the pandemic, but we are not ready to respond to the risks that will come with these applications,” said Katarzyna Szymielewicz, a lawyer and activist on human rights and technology, at the Internet Governance Forum 2020. She is also the co-founder and president of the Panoptykon Foundation in Poland.

We already have data protection safeguards such as the GDPR (General Data Protection Regulation); we could rely on Article 22 of the GDPR, which gives individuals the right to have automated decisions that affect them in a significant way explained, she said. The GDPR can thus be a source of safeguards for individuals affected by an AI-based healthcare application. However, this standard does not solve other problems that will come with the use of AI in high-risk sectors such as health. It does not deal with uses of personal data that may not have a significant impact on an individual but can affect society as a whole, she said.

“If we think about errors, if we think about simply waste of public money or getting predictions wrong or getting public policy wrong, these types of results are extremely problematic even though they might not affect a specific individual or entail the use of personal data. This is why we represent this position that the EU needs to create a new legal framework for AI that goes beyond the use of personal data and goes beyond individual protection,” Szymielewicz said. While this is already one of the priorities of the current European Commission, she added, the caveat is that ethics is not enough: we have to arrive at strict rules.

In her opinion, while there is a need to introduce obligatory human rights impact assessments for both public and private systems, the higher standard should apply to public applications: AI systems should not be deployed without a thorough, detailed, public, evidence-based human rights impact assessment. This in itself increases transparency and explainability, and prevents certain risks, such as the use of inadequate training data, Szymielewicz added.

However, even the best human rights impact assessments will not minimize the risk of simply applying AI to contexts where the risks are too high. We will need to arrive at some redlines. We could consider a redline on whether an AI system should be allowed if the human rights assessment is unsatisfactory; and even if it is satisfactory, whether the system should be disallowed if its functioning cannot be explained to a level that allows for independent auditing, especially in high-risk areas. Another redline to consider could be that the goals of the AI system cannot violate the essence of fundamental rights and human dignity. — Katarzyna Szymielewicz, lawyer and activist on human rights and technology [emphasis added]

Issues arising from the use of AI in healthcare

According to Martha Stickings from the European Union Agency for Fundamental Rights, the main issues that arise in the use of AI in healthcare are:

  • Lack of clarity about the definition of AI, since what it means can vary with context. This poses a challenge for any regulation, in respect of the principles of legal clarity and foreseeability.
  • Confusion about how existing law, such as standards on data protection and discrimination, applies to the use of AI. Tied to this is the question of whether regulation should take a horizontal or a sectoral approach.
  • Concerns around data quality, such as what data is being used to train AI systems, where it comes from, how representative it is of a particular population, and how it is input into a tool. These issues are particularly prominent in healthcare.
  • Effectiveness of tools to mitigate fundamental rights violations and enforce existing rules, and awareness of fundamental rights issues amongst the full range of interested parties whether that be developers, users, or others.

Data colonialism and challenges in low- and middle-income countries

Though AI is proving to be quite a challenge for regulatory agencies in rich countries, there are concerns that regulatory agencies in low- and middle-income countries may not have the capacity or expertise to adequately assess novel AI technologies and to ensure that potential systematic errors do not affect diagnosis, surveillance, and treatment, according to Andreas Reis of the World Health Organisation.

  • AI deployed without data protection law, impact assessment, or ethical principles: AI technologies are piloted in low- and middle-income countries before going to market in more regulated environments. Such technologies could be introduced into countries without up-to-date data protection and confidentiality laws, especially for health data. Concerns have been raised that data from low- and middle-income countries could be used for commercial purposes without due regard for ethical principles and human rights norms such as consent, privacy, and autonomy.
  • Companies based in countries with very strict and well-developed regulatory frameworks and data protection laws could expand data collection to low- and middle-income countries without providing products and services back to these underserved communities. AI technologies may also be introduced without an adequate human rights impact assessment prior to deployment, and technologies that are not adapted to the local context, such as the diverse languages and scripts within these countries, may not operate correctly or at all.
  • Further, challenges around data and data security are more pronounced in low- and middle-income countries. The absence of data and poor data quality can distort an algorithm’s performance, and poor datasets need significant investment to even make them usable. For instance, a tool was developed to diagnose skin cancer and melanoma; the data fed into the algorithm was exclusively from white people, with the result that the tool was not able to detect melanoma in black people, said Reis. Something like this restricts access to such a tool in Africa, for instance (a rough sketch of the kind of per-group check this implies follows this list).
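The melanoma example points to the kind of per-group performance check a deployment audit or impact assessment would require. The sketch below is purely illustrative: the data, column names, and numbers are invented and have nothing to do with the specific tool Reis mentioned.

```python
# Minimal sketch: auditing a diagnostic classifier's performance per demographic group.
# All data and column names are hypothetical; this is not the tool Reis described.
import pandas as pd
from sklearn.metrics import recall_score

# Hypothetical evaluation set: model predictions alongside ground truth and a skin-tone label
results = pd.DataFrame({
    "skin_tone":    ["light", "light", "light", "dark", "dark", "dark"],
    "has_melanoma": [1, 0, 1, 1, 1, 0],   # ground truth
    "predicted":    [1, 0, 1, 0, 0, 0],   # model output
})

# Sensitivity (recall) per group: how often actual melanoma cases are caught
for group, subset in results.groupby("skin_tone"):
    sensitivity = recall_score(subset["has_melanoma"], subset["predicted"])
    print(f"{group}: sensitivity = {sensitivity:.2f} (n = {len(subset)})")

# A large gap between groups (here every melanoma case in the 'dark' group is missed)
# is exactly the kind of systematic error an impact assessment should surface
# before deployment.
```

A real audit would use a properly sized, representative evaluation set and more metrics than sensitivity alone; the point is only that the gap Reis describes is measurable before a tool is shipped.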

Proxy data in healthcare

“One of the challenges with proxies is that even if you don’t measure [proxy data], you can learn it and then the system could learn it,” said Olivier Smith from Kao Health. Proxy data in the context of AI is stand-in or historical data — rather than new data — that is used to train the algorithm.

Smith spoke of an analysis of mood and stress his team had done, in which the AI system learned that there were two different groups in the training data: men and women. While that by itself is okay, he said, if that dataset is applied to something else, you need to be aware that the system could recommend an activity more associated with women rather than one more associated with men. You might say that the men can go and play football and the women can go and do something else, just pick some awfully sexist activity, he said.
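Smith's warning, that an attribute the system never measures can still be learned, can be made concrete with a toy sketch. Everything below is synthetic and hypothetical (the feature names and data are invented; this is not Kao Health's system): a simple classifier recovers gender well above chance from mood and activity features alone.

```python
# Toy illustration (not Kao Health's system): a sensitive attribute that is never
# fed to the model can still be recovered from correlated "neutral" features.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000
gender = rng.integers(0, 2, n)  # 0/1 label, never given to the model as an input feature

# Hypothetical mood/activity features that happen to correlate with gender
stress_score   = rng.normal(5 + 0.8 * gender, 1.0, n)
activity_hours = rng.normal(3 - 0.6 * gender, 1.0, n)
X = np.column_stack([stress_score, activity_hours])

X_train, X_test, y_train, y_test = train_test_split(X, gender, random_state=0)
clf = LogisticRegression().fit(X_train, y_train)

# Accuracy well above the 50% chance level means the features act as a proxy for
# gender, so any recommendation built on them can split users along gender lines.
print("proxy accuracy:", clf.score(X_test, y_test))
```

The same check, trying to predict a sensitive attribute from the features a system actually uses, is one practical way to find out whether that attribute has been learned even though it was never collected.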

Proxy data is a complex political problem, as well as a legal problem that needs to be solved. “If we want to protect the use of proxy data, and simply make sure that people are not discriminated based on proxy data, we probably need to force system designers to reveal correlations that are not always tracked by the system itself,” according to Katarzyna Szymielewicz.

In order to interrogate which factors had an impact on the individual decision in question, and to what extent the data are sensitive, we probably need to collect even more data, and we need to force the AI system in question to identify correlations that otherwise would not even be identified. That leads to more data processing and more exposure, but if we want to be sure that high-risk applications of AI, such as in health, are not discriminatory, there is probably no other way to do it, Szymielewicz said.
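One reading of that proposal in practice: the operator would need to hold the sensitive attribute itself, at least for auditing, in order to measure and disclose which inputs, and which outputs, correlate with it. The sketch below is a rough illustration under invented assumptions (hypothetical feature names, synthetic data, a simple Pearson correlation as the disclosure metric); nothing in it comes from the systems discussed at the forum.

```python
# Minimal sketch of a "correlation disclosure": measure how strongly each input
# feature, and the system's own score, track a protected attribute.
# Feature names and data are hypothetical.
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
n = 1000
protected = rng.integers(0, 2, n)  # protected attribute, collected only for auditing

audit = pd.DataFrame({
    "protected_attribute": protected,
    "sleep_hours":   rng.normal(7 - 0.5 * protected, 1.0, n),     # correlated feature
    "steps_per_day": rng.normal(6000, 1500, n),                   # unrelated feature
    "model_score":   rng.normal(0.4 + 0.2 * protected, 0.1, n),   # the system's output
})

# Correlation of each column with the protected attribute
report = audit.drop(columns="protected_attribute").corrwith(audit["protected_attribute"])
print(report.round(2))

# A strong correlation for 'model_score' shows the system's decisions track the
# protected attribute even though it was never an explicit input; that is the
# correlation Szymielewicz argues designers should be forced to reveal.
```

This is the trade-off she describes: producing such a report requires processing more data, including the protected attribute itself, in exchange for being able to show whether a high-risk system discriminates.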
