On May 11, European Parliament committees adopted a draft negotiating mandate on rules for the production and deployment of high-risk AI, and on limits to the use of surveillance systems. These rules are part of the AI Act, which was proposed by the European Commission in April 2021. The AI Act takes a risk-based regulatory approach that applies to all key stakeholders in the AI space. The current amendments focus on prohibiting the use of biometric data scraped from social media or CCTV footage to train facial recognition AI, as well as other AI systems used for criminal profiling by law enforcement authorities. The draft was adopted with 84 votes in its favor.
This draft negotiating mandate needs to be endorsed by the whole Parliament before negotiations on the final form of the law can begin. The plenary vote is expected during the 12-15 June session.
Why it matters:
Given the recent explosion of interest in generative AI and the risks associated with it, the AI Act may be a necessary first step in defining the rules around ethical AI development. While the EU’s data protection rules have historically been a source of inspiration for other countries, these amendments are particularly important because they address discrimination and mass surveillance, and the risks both pose to the rights to dignity and privacy.
But just because these amendments attempt to offer stringent protection of human rights doesn’t mean there aren’t ways to bypass the AI Act. In its present form, the AI Act does not forbid the creation of high-risk AI systems meant solely for export. This, according to Amnesty International, could make the EU complicit in human rights abuses outside its borders.
Other important amendments:
- AI systems that categorize natural persons based on characteristics like gender identity, race, or ethnic origin will be prohibited. The EU asserts that such categorizations are intrusive, violate human dignity, and carry a great risk of discrimination. It has also prohibited the use of AI systems to make predictions, profiles, or risk assessments based on personality traits and characteristics, including a person’s location or past criminal behavior.
- AI systems aiming to detect emotions or physical and physiological features have also been prohibited. The EU states that such systems lack reliability, and that expressions of emotion and the perception of those expressions vary considerably across cultures and situations, making the systems susceptible to abuse.
- The amendments have expanded the definition of high-risk AI (under Article 6 of the Act). Earlier, AI systems that posed a significant risk of harm to the health, safety, or fundamental rights of people in the EU were classified as high-risk. Now, AI systems that pose a significant risk to the environment will also be considered high-risk. The amendment allows AI system providers who do not believe their system should be classified as high-risk to submit a reasoned notification to the National Supervisory Authority stating that they are not subject to the requirements.
- The EU also prohibits the indiscriminate and untargeted scraping of biometric data from social media or CCTV footage for the purpose of creating or expanding facial recognition databases. It notes that such practices add to the feeling of mass surveillance and can lead to a violation of the right to privacy.
- The amendments emphasize that downstream AI providers lack control over how foundation models are developed. As such, the EU believes that foundation models should be subject to more specific requirements under the AI Act. It says that foundation models should assess and mitigate possible risks and harms through appropriate design, testing, and analysis. They should also implement data governance measures, including an assessment of biases.
Statements from key stakeholders:
Digital rights organizations Access Now and European Digital Rights (EDRi) released statements on the amendments. While both groups welcome many of the amendments, they urge the EU to remove the additional layer of provisions for AI system providers added to Article 6. They believe this gives AI systems a loophole to escape the high-risk category, creates legal uncertainty, and risks undermining the AI Act. Both organizations also hold that the EU could do more to protect the interests of migrants, and suggest that it ban automated risk assessment and predictive analytics systems in migration procedures to prevent the illegal pushback of migrants.
- Why The EU Wants To Regulate Artificial Intelligence Through A ‘Risk-Based’ Approach
- European Union To Introduce New Copyright Rules For Generative AI Tools In Its AI Act
