“We [as a society] clearly want to have more AI applications to solve serious societal problems, such as the pandemic, but we are not ready to respond to the risks that will come with these applications,” said Katarzyna Szymielewicz, a lawyer and activist working on human rights and technology, at the Internet Governance Forum 2020. Szymielewicz is also the cofounder and president of the Panoptykon Foundation in Poland.

We already have data protection safeguards such as the GDPR (General Data Protection Regulation), she said; individuals could rely on Article 22 of the GDPR, which gives them the right to an explanation of automated decisions that affect them in a significant way. GDPR can thus be a source of safeguards for individuals affected by an AI-based healthcare application. However, this standard does not solve other problems that will come with the use of AI in high-risk sectors such as health: it does not address uses of personal data that may have no significant impact on any one individual but can affect society as a whole, she said.

“If we think about errors, if we think about simply waste of public money or getting predictions wrong or getting public policy wrong, these types of results are extremely problematic even though they might not affect a specific individual or entail the use of personal data. This is why we represent this position that the EU needs to create a new legal framework for AI that goes beyond the use of personal data and…
