The European Union is mulling a ban on the use of Artificial Intelligence (AI) for “indiscriminate surveillance”, specifically for building systems that track individuals in physical spaces. The EU is also considering barring AI systems from assigning credit scores to people, according to a leaked EU proposal first reported by Politico.
The EU’s draft proposal, titled the “Regulation on a European Approach for Artificial Intelligence”, admits in no uncertain terms that while AI can be used for a variety of purposes, its harms cannot be ignored. “Such harm might be material or immaterial, insofar as it related to the safety and health of persons, their property or other individual fundamental rights and interests protected by Union law,” the draft reads.
According to the draft, a legal framework is needed to ensure that the development and uptake of AI meet a high level of protection of public interests, in particular health, safety and fundamental rights. “This Regulation aims to improve the functioning of the internal market by creating the conditions for an ecosystem of trust regarding the placing on the market, putting into service and use of artificial intelligence in the Union.”
The draft states that a key tenet of the proposed framework is to focus not on the technology as such but on its use, specifically when AI is used as a component in a product or in standalone form where the output is partially or fully automated. “As a component of a product, an AI system can be physically integrated into the product (embedded) or serve the functionality of the product without being integrated therein (non-embedded).”
Key highlights from the document:
- Ban on indiscriminate surveillance: The use of AI for the purposes of indiscriminate surveillance should be prohibited when applied in a generalised manner to all persons without differentiation, the draft notes. “The methods of surveillance could include monitoring and tracking of natural persons in digital or physical environments, as well as automated aggregation and analysis of personal data from various sources.”
- The caveat: These AI practices, however, will be allowed when carried out by public authorities to safeguard public security. They would be subject to “appropriate safeguards for the rights and freedoms of third parties”.
- No algorithmic social credit scoring: The use of AI for assigning algorithmic social scores to people should not be allowed if carried out in a generalised manner, when the person’s score is based on their behaviour in multiple contexts and on personality characteristics, ultimately causing their detrimental treatment. “Detrimental treatment could occur for instance by taking decisions that can adversely affect and restrict the fundamental rights and freedoms of natural persons, including in the digital environment,” it notes.
- The caveat: The use of such AI tools for assigning social scores is allowed as long as it is done for a “specific legitimate purpose of evaluation and classification”.
- 4% fine on annual turnover: Infringement of the regulations can result in fines amounting to 4% of the annual turnover of a company in the previous financial year.
- Risk-based approach: The regulation will follow a risk-based approach: certain uses of AI will be prohibited outright; other uses will impose obligations on service providers, with compliance verified through ex-ante and ex-post enforcement tools; still other uses will carry only limited transparency obligations.
- Normative standards for high-risk systems: The EU will establish normative standards for all high-risk AI systems, such as those used in self-driving cars, marine equipment, aviation and railways. Such systems will have to pass a conformity assessment before they can be put to use. Regulators will ensure that the systems are explicable to human overseers, and that the data used to train them is of “high quality”.
- Remote biometric systems are high-risk: The draft calls for remote biometric identification systems to be categorised as a standalone high-risk system. The use of such systems in publicly accessible spaces has been a “significant public concern” as they might lead to adverse implications for personal safety, it notes. “These AI systems should be subject to stricter conformity assessment procedures through the involvement of a notified body.” Such systems will also need special authorisation before use.
- In education and vocational training: The use of AI systems that determine access to educational and vocational training, or that evaluate persons on tests, should be considered high-risk, since these systems can determine the educational and professional course of a person’s life and livelihood.
- In workers’ management: The use of AI in recruitment, task allocation or evaluation of workers could impact a worker’s future career prospects and livelihood, and hence such a system should be considered high-risk, the draft notes.
- To determine creditworthiness: AI systems used to evaluate creditworthiness of persons should be classified as high-risk systems as they can determine a person’s access to financial resources, and therefore affect the course of their lives. “AI systems used for this purpose may also perpetuate historical patterns of discrimination in consumer finance, for example against persons of certain ethnic or racial origins or create new forms of discrimination.”
- In social security: The use of AI to identify and evaluate beneficiaries of social welfare schemes can have a significant impact on a person’s livelihood, and may infringe their right to human dignity. Hence, such systems should be considered high-risk, the draft notes.
- In law enforcement: The use of AI by law enforcement and similar public authorities, such as in the dispatch of officers or the evaluation of asylum and visa seekers, should be considered high-risk. Such systems presumably also include those used for predicting crimes.
- In judiciary: Any AI systems used to assist judges in court, unless for ancillary tasks, should be considered high-risk.
- Prohibit manipulative AI: The draft notes that certain AI practices have significant potential to manipulate people and exploit their vulnerabilities. “Manipulative artificial intelligence practices should be prohibited when they cause a person to behave, form an opinion or take a decision to their detriment that they would not have taken otherwise.”
- Notifications to people using AI systems: People must be notified when they are interacting with a high-risk AI system, “unless this is obvious from the circumstances and the context of use”.
- Applicable to all companies serving EU citizens: The draft notes that the regulation will apply to providers of AI systems irrespective of whether they are established within the EU, as long as the systems affect EU citizens.