“We [as a society] clearly want to have more AI applications to solve serious societal problems, such as the pandemic, but we are not ready to respond to the risks that will come with these applications,” Katarzyna Szymielewicz, a lawyer and activist working on human rights and technology, said at the Internet Governance Forum 2020. She is also the cofounder and president of the Panoptykon Foundation in Poland.
We already have data protection safeguards such as the GDPR (General Data Protection Regulation); we could rely on Article 22 of the GDPR, which gives individuals the right to have automated decisions that affect them in a significant way explained, she said. The GDPR can thus be a source of safeguards for individuals affected by an AI-based healthcare application. However, this standard does not solve other problems that will come with the use of AI in high-risk sectors such as health. It does not deal with uses of personal data that may not have a significant individual impact but can affect society as a whole, she said.
“If we think about errors, if we think about simply waste of public money or getting predictions wrong or getting public policy wrong, these types of results are extremely problematic even though they might not affect a specific individual or entail the use of personal data. This is why we represent this position that the EU needs to create a new legal framework for AI that goes beyond the use of personal data and goes beyond individual protection,” Szymielewicz said. While this is already one of the priorities of the current European Commission, she added a caveat: ethics is not enough; we have to arrive at strict rules.
In her opinion, while there is a need to introduce obligatory human rights impact assessments for both public and private systems, a higher standard should apply to public applications: AI systems should not be deployed without a thorough, detailed, public, evidence-based human rights impact assessment. This, in itself, increases transparency and explainability, and prevents certain risks, such as the use of inadequate training data, Szymielewicz added.
However, even the best human rights impact assessments will not minimize the risk of applying AI in contexts where the risks are simply too high. We will need to arrive at some red lines. We could consider a red line on whether an AI system should be allowed at all if the human rights assessment is unsatisfactory. And even if it is satisfactory, whether the system should be disallowed if its functioning cannot be explained to a level that allows for independent auditing, especially in high-risk areas. Another red line to consider could be that the goals of the AI system cannot violate the essence of fundamental rights and human dignity. — Katarzyna Szymielewicz, lawyer and activist on human rights and technology [emphasis added]
Issues arising from the use of AI in healthcare
According to Martha Stickings from the European Union Agency for Fundamental Rights, the main issues that arise in the use of AI in healthcare are:
- Lack of clarity about the definition of AI, since what it means can vary with context. This poses a challenge for any regulation in respect of the principles of legal clarity and foreseeability.
- Confusion about how existing law, such as standards on data protection and discrimination, applies to the use of AI. Tied to this is the question of whether regulation should take a horizontal or a sectoral approach.
- Concerns around data quality, such as what data is being used to train AI systems, where it comes from, how representative it is of a particular population, and how it is input into a tool. These issues are emerging particularly in healthcare.
- Effectiveness of tools to mitigate fundamental rights violations and enforce existing rules, and awareness of fundamental rights issues amongst the full range of interested parties, whether developers, users, or others.
Data colonialism and challenges in low- and middle-income countries
Though AI is proving to be quite a challenge for regulatory agencies in rich countries, there are concerns that regulatory agencies in low- and middle-income countries may not have the capacity or expertise to assess novel AI technologies adequately and to ensure that potential systematic errors do not affect diagnosis, surveillance, and treatment, according to Andreas Reis of the World Health Organisation.
- AI deployed without data protection law, impact assessments, or ethical principles: AI technologies are sometimes piloted in low- and middle-income countries before going to market in more regulated environments. Such technologies could be introduced into countries without up-to-date data protection and confidentiality laws, especially for health data. Concerns have been raised that data from low- and middle-income countries could be used for commercial purposes without due regard for ethical principles and human rights norms, such as consent, privacy, and autonomy.
- Companies based in countries with very strict, well-developed regulatory frameworks and data protection laws could expand data collection to low- and middle-income countries without providing products and services back to these underserved communities. AI technologies may be introduced without an adequate human rights impact assessment prior to deployment, and technologies that are not adapted to the local context, such as the diverse languages and scripts within those countries, may not operate correctly or at all.
- Further, challenges around data and data security are greater in low- and middle-income countries. Absent or poor-quality data can distort an algorithm's performance, and poor datasets need significant investment just to make them usable. For instance, a tool was developed to diagnose skin cancer and melanoma, but the data fed into the algorithm came exclusively from white people; as a result, the tool was not able to detect melanoma in Black people, said Reis. Something like this restricts the usefulness of such a tool in Africa, for instance.
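The failure mode Reis describes can be illustrated with a toy sketch. This is entirely hypothetical and is not the actual melanoma tool: a deliberately naive detector learns a single absolute-brightness cutoff from light-skin images only, and then performs near chance on darker skin, where every pixel intensity falls below the learned cutoff.

```python
import random

random.seed(0)

def make_samples(skin_tone, n=500):
    """Simulate lesion images as single intensity values. Malignant lesions
    are darker relative to the surrounding skin than benign ones, so the
    informative signal is the *contrast*, not the absolute intensity."""
    samples = []
    for _ in range(n):
        malignant = random.random() < 0.5
        offset = 60 if malignant else 20          # lesion darkness vs. skin
        intensity = skin_tone - offset + random.gauss(0, 5)
        samples.append((intensity, malignant))
    return samples

def fit_threshold(samples):
    """Pick the absolute-intensity cutoff that best separates the classes,
    a naive model that ignores the surrounding skin tone entirely."""
    best_t, best_acc = 0, 0.0
    for t in range(256):
        acc = sum((i < t) == m for i, m in samples) / len(samples)
        if acc > best_acc:
            best_t, best_acc = t, acc
    return best_t

def accuracy(samples, t):
    return sum((i < t) == m for i, m in samples) / len(samples)

light = make_samples(skin_tone=220)   # training data: light skin only
dark = make_samples(skin_tone=120)    # deployment data: darker skin

t = fit_threshold(light)
print(f"accuracy on light-skin data: {accuracy(light, t):.2f}")
print(f"accuracy on dark-skin data:  {accuracy(dark, t):.2f}")
```

On the light-skin data the cutoff separates the classes almost perfectly; on darker skin every intensity sits below the cutoff, so the model labels everything malignant and accuracy collapses to roughly the base rate. The representativeness of the training data, not the model, is the point of failure.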
Proxy data in healthcare
“One of the challenges with proxies is that even if you don’t measure [proxy data], the system could learn it,” said Olivier Smith from Kao Health. In the context of AI, proxy data is stand-in or historical data, rather than newly collected data, that is used to train the algorithm.
Smith spoke of an analysis of mood and stress his team had done, in which the AI system learned that there were two different groups in the training data: men and women. While that by itself is fine, he said, if that dataset is applied to something else, you need to be aware that the system could recommend an activity more associated with women, or one more associated with men. You might say that the men can go and play football and the women can go and do something else, just pick some awfully sexist activity, he said.
Proxy data is a complex political problem, as well as a legal problem that needs to be solved. “If we want to protect the use of proxy data, and simply make sure that people are not discriminated based on proxy data, we probably need to force system designers to reveal correlations that are not always tracked by the system itself,” according to Katarzyna Szymielewicz.
To interrogate which factors had an impact on the individual decision in question, and to what extent the data are sensitive, we would probably need to collect even more data and force the AI system in question to identify correlations that otherwise would not even be identified. That leads to more data processing and more exposure, but if we want to be sure that high-risk applications of AI, such as in health, are not discriminatory, there is probably no other way to do it, Szymielewicz said.
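Szymielewicz's point about forcing designers to reveal correlations can be sketched with a hypothetical toy dataset (all features and numbers are invented for illustration): even after the sensitive attribute is stripped from a model's inputs, a single correlated proxy feature is enough to reconstruct it, and surfacing that correlation is exactly the extra analysis an audit would demand.

```python
import random

random.seed(1)

# Hypothetical population in which an innocuous-looking feature, an
# activity-preference score, happens to correlate with gender. The
# deployed system never stores gender, but the proxy carries it anyway.
def make_record():
    gender = random.choice(["m", "f"])
    proxy = random.gauss(0.7 if gender == "f" else 0.3, 0.15)
    return {"proxy": proxy, "gender": gender}

records = [make_record() for _ in range(2000)]

# A trivial reconstruction rule using the proxy feature alone:
def infer_gender(record):
    return "f" if record["proxy"] > 0.5 else "m"

recovered = sum(infer_gender(r) == r["gender"] for r in records) / len(records)
print(f"sensitive attribute recovered from proxy alone: {recovered:.0%}")
```

The reconstruction succeeds for roughly nine records in ten, despite gender never being an input. This is the trade-off described above: detecting such leakage requires collecting and correlating the sensitive attribute during an audit, which itself means more data processing.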