Taming the AI Beast, the EU Way: EU AI Regulations

By Preethika Pilinja 

While lawmakers around the world are juggling various forms of soft law to deal with AI, the European Union has released a first-of-its-kind proposal to regulate AI. These regulations have made more noise outside the EU than inside it, not only for their possible extraterritorial effect but also for their potential to influence governments in other countries to enact similar laws on AI.

The proposal for a regulation laying down harmonised rules on artificial intelligence, the Artificial Intelligence Act, prohibits certain uses of AI and lays down requirements for high-risk and low-risk AI systems. It follows a risk-based approach to legislation, wherein different requirements apply depending on the risk profile of the relevant AI system.

Prohibited AI Practices

The Regulation prohibits AI-based social scoring for general purposes by public authorities. This is in stark contrast with China, where the use of social scoring is notoriously allowed. Further, there is also a prohibition on the use of ‘real-time’ remote biometric identification systems, such as facial recognition systems, in publicly accessible spaces for the purpose of law enforcement. However, this ban has not gone down well with civil rights groups, for multiple reasons:

  • First, the ban is only on the use of real-time biometric identification systems, which means there are no restrictions on access to or use of previously accumulated databases of pictures and videos.
  • Second, the ban on real-time use also has a number of vague exceptions, such as the prevention of a specific, substantial and imminent threat to the life or physical safety of natural persons. These vague exceptions further dilute the rigour of the prohibition.

Above all, both prohibitions leave private companies untouched. Surprisingly, the bans on social scoring and real-time biometric identification systems apply only to public authorities. So, if a private insurance company decides to use a social scoring system to set your insurance premium, that escapes the rigour of the prohibition under these Regulations.

Practices that manipulate persons through subliminal techniques beyond their consciousness, or that exploit the vulnerabilities of specific vulnerable groups such as children, in a manner that is likely to cause them harm are also prohibited. However, what constitutes a practice that exploits vulnerabilities is quite subjective. For example, does this provision prohibit targeted ads? If so, will it prohibit all types of targeted ads, or will there be exceptions? These questions remain unanswered.

Regulation of High-Risk AI Systems

The main focus of the Regulation seems to be the identification and regulation of high-risk AI systems. The proposed law identifies several high-risk AI systems, such as AI systems used in critical infrastructure, in determining access to education, employment and essential private and public services, in law enforcement, in migration, asylum and border control management, and in the administration of justice and democratic processes. This is a fairly well-compiled list covering most of the areas where the deployment of AI needs to be monitored carefully.

Further, where AI is a safety component of a product, or is itself a product, that is subject to one of the listed EU regulations, such as those on machinery, toy safety, or civil aviation, it is also a high-risk AI system. While some European industrial bodies have welcomed the draft and sought more clarification, VDMA (Verband Deutscher Maschinen- und Anlagenbau), one of the largest industrial associations in Europe, representing more than 3,000 small and medium-sized enterprises, has stated that double regulation of machinery using AI might discourage manufacturers from using AI altogether.

High-risk AI systems are permitted in the European market subject to compliance with certain mandatory requirements and an ex-ante conformity assessment. There are requirements for maintaining adequate risk and quality management systems, technical documentation, and logs of operation. There are also specifications regarding the information that must be given to users about the characteristics, capabilities, and human oversight arrangements of high-risk AI systems.

Stipulations are also laid down with respect to the quality of the data sets used for training high-risk AI systems. The requirements given in the Regulations are subjective and often idealistic. For example, one of the requirements states that training, validation and testing data sets shall be relevant, representative, free of errors and complete. While we can all agree that this is a valid, well-intentioned requirement, how such requirements are to be evaluated is not clear. For an industry player, this can be a regulatory nightmare unless the requirements are presented in a compliance-friendly manner. We can only hope that the European Commission will throw more light on these requirements.

Penalties for Violation

Infringements under the Regulations shall be subject to administrative fines of up to 30 million euros or, if the offender is a company, up to 6% of its total worldwide annual turnover for the preceding financial year. This is much higher than the penalties provided under the GDPR, which are capped at 20 million euros or 4% of worldwide turnover, whichever is higher. However, unlike the GDPR, there is no clear redressal mechanism in the AI Regulations. The proposed enforcement mechanism, which includes the European Artificial Intelligence Board and the appointment of national supervisory authorities, lacks clarity on their powers and functions at this stage.
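To illustrate the arithmetic, here is a minimal sketch comparing the two fine caps, assuming the "whichever is higher" reading applies to both regimes and using a hypothetical company with a 2 billion euro annual turnover.

```python
# Hypothetical illustration of the fine caps discussed above: the proposed AI Act
# cap (30 million euros or 6% of worldwide annual turnover) versus the GDPR cap
# (20 million euros or 4%), assuming "whichever is higher" applies in both cases.

def fine_cap(turnover_eur: float, fixed_cap_eur: float, pct_of_turnover: float) -> float:
    """Return the maximum administrative fine for a company."""
    return max(fixed_cap_eur, turnover_eur * pct_of_turnover)

turnover = 2_000_000_000  # hypothetical company with a 2 billion euro turnover
ai_act_cap = fine_cap(turnover, 30_000_000, 0.06)  # 120 million euros
gdpr_cap = fine_cap(turnover, 20_000_000, 0.04)    # 80 million euros
print(f"AI Act cap: {ai_act_cap:,.0f} EUR; GDPR cap: {gdpr_cap:,.0f} EUR")
```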

Conclusion

The EU AI Regulations are well-intentioned legislation aimed at protecting human rights and fundamental freedoms while fostering innovation. We are compelled to agree that this is a tough balance to maintain, especially given the vagaries of AI. Further, being a first-of-its-kind legislation, there are not many examples to fall back on. At the same time, the shortcomings of the draft cannot be brushed under the carpet.

From an implementation perspective, the vagueness of the mandatory requirements for high-risk AI can be a regulatory nightmare for industry players. The EU Commission has called for feedback on its draft, and it can only be hoped that at least some of these concerns, if not all, will be addressed.

In any case, while we do not know when these Regulations will take effect, the EU deserves credit for taking a bold step in trying to tame an otherwise unruly beast. Whether the AI Regulations will remain a stand-alone regulation or, like the GDPR, encourage other lawmakers to enact concrete laws for dealing with AI, only time will tell.

*

Preethika Pilinja is a Bangalore-based legal counsel and a postgraduate in business laws from the National Law School of India University, Bengaluru. She is also Co-Lead, Team India, ForHumanity. Views are personal.
