
Amendments to the draft European Union’s AI Act prohibit mass surveillance and criminal profiling

The amendments focus on the prohibition of the use of biometric data from social media or CCTV footage to train AI systems used by law enforcement authorities

On May 11, the European Parliament adopted a draft negotiating mandate on rules for the production and deployment of high-risk AI and on limits to the use of surveillance systems. These rules are part of the AI Act, which was proposed by the European Commission in April 2021. The AI Act takes a risk-based regulatory approach that applies to all key stakeholders in the AI space. The current amendments to it focus on prohibiting the use of biometric data from social media or CCTV footage to train facial recognition AI and other AI systems used in criminal profiling by law enforcement authorities. The draft received 84 votes in its favor.

This draft negotiating mandate needs to be endorsed by the whole Parliament before negotiations on the final form of the law can begin. The vote is expected to take place during the 12-15 June session.


Why it matters: 

Given the recent explosion of interest in generative AI and the risks associated with it, the AI Act might be a necessary first step in defining the rules and regulations around ethical AI development. The EU’s data protection rules have historically been a source of inspiration for other countries, and these amendments are particularly important because they focus on discrimination and mass surveillance and the risks they pose to the rights to dignity and privacy.

But just because these amendments attempt to offer stringent protection of human rights doesn’t mean there aren’t ways to bypass the AI Act. In its present form, the Act does not forbid the creation of high-risk AI systems that are meant solely for export. This, according to Amnesty International, could make the EU complicit in human rights abuses outside its borders.


Other important amendments:

  1. AI systems that categorize natural persons based on characteristics like gender identity, race, and ethnic origin will be prohibited. The EU asserts that such categorizations are intrusive, violate human dignity, and carry a great risk of discrimination. The Parliament has also prohibited the use of AI systems to make predictions, profiles, or risk assessments based on personality traits and characteristics, including a person’s location or past criminal behavior.
  2. AI systems that aim to detect emotions and physical or physiological features have been prohibited. The EU states that such systems lack reliability and that expressions of emotions, and the perception of those emotions, vary considerably across cultures and situations. This makes the systems susceptible to abuse.
  3. The amendments expand the definition of high-risk AI (under Article 6 of the Act). Earlier, AI systems that pose a significant risk of harm to the safety, health, or fundamental rights of people in the EU were classified as high-risk. Now, AI systems that pose a significant risk to the environment will also be considered high-risk. The amendments also allow AI system providers who do not think their system should be classified as high-risk to submit a reasoned notification to the National Supervisory Authority stating that they are not subject to the requirements.
  4. The EU also prohibits indiscriminate and untargeted scraping of biometric data from social media or CCTV footage for the purpose of creating or expanding facial recognition databases. It notes that such practices add to the feeling of mass surveillance and can lead to violations of the right to privacy.
  5. The amendments emphasize that AI providers building on foundation models lack control over how those models are developed. As such, the EU believes that foundation models should be subject to more specific requirements under the AI Act. It says that foundation models should assess and mitigate possible risks and harms through appropriate design, testing, and analysis, and should also implement data governance measures, including assessment of biases.

Statements from key stakeholders:

Digital rights organizations Access Now and European Digital Rights (EDRi) released statements regarding the amendments. While both groups welcome many of the amendments, they urge the EU to remove the additional provision for AI system providers added to Article 6. They believe it gives providers a loophole to take their systems out of the high-risk category, leaves room for legal uncertainty, and risks undermining the AI Act. Both organizations also hold the opinion that the EU could do more to protect the interests of migrants, suggesting that it ban automated risk assessment and predictive analytics systems in migration procedures to prevent illegal pushbacks of migrants.


This post is released under a CC-BY-SA 4.0 license. Please feel free to republish on your site, with attribution and a link. Adaptation and rewriting, though allowed, should be true to the original.
