The AI Act Chronicles: Red Lines (part 2)
In our previous blog posts, we explored the foundations and scope of the AI Act and, most recently, the first set of prohibited practices. In this part, we take a closer look at the remaining banned practices, including predictive policing, the untargeted scraping of facial images, the inference of emotions and the use of biometrics.
Regarding predictive policing, the AIA defines the prohibited systems as follows:
- “AI systems for making risk assessments of natural persons in order to assess or predict the risk of a natural person committing a criminal offence, based solely on the profiling of a natural person or on assessing their personality traits and characteristics.”
This ban echoes familiar science-fiction scenarios that we may recognise from books or movies such as Minority Report. As EU law is founded on the presumption of innocence, individuals should never be evaluated based solely on AI-predicted behaviour derived from their profiling, personality traits, or characteristics such as nationality, birthplace, residence, number of children, debt level, or even the car they drive. This prohibition does not apply to AI systems that assist in human evaluations of an individual’s involvement in criminal activity, provided these assessments are already grounded in objective, verifiable facts directly related to the crime.
Inferring emotions with an AI system is also prohibited; the AIA bans:
- “AI systems to infer emotions of a natural person in the areas of workplace and education institutions.”
Emotion recognition systems are the subject of fierce debate around their regulation and prohibition. Concerns surrounding this technology mainly revolve around risks to human dignity and discrimination. The term emotion recognition is legally defined in the AI Act as “identifying or inferring emotions or intentions of natural persons on the basis of their biometric data.” The literature generally defines such systems as various technologies that aim to deduce a person’s emotional state from data gathered about them. This can involve interpreting emotions based on facial expressions or configurations, voice patterns, detailed data from wearable devices, or even neurological information from brain-computer interfaces. The legal definition is thus narrower than the notion as understood in practice. It is important to highlight that the prohibition applies only in the areas of the workplace and education. It does not apply where the use of the AI system is intended for medical or safety reasons, e.g. for therapeutic use.
Two final prohibitions and one restriction relate to biometrics. Firstly, the AI Act introduces a blanket ban on the untargeted scraping of facial images:
- “AI systems that create or expand facial recognition databases through the untargeted scraping of facial images from the internet or CCTV footage.”
The reasoning behind this prohibition rests on concerns over mass surveillance and potential infringements of fundamental rights and freedoms, particularly the right to privacy. It is a direct response to the Clearview AI scandal.
An additional ban concerns biometric categorisation:
- “biometric categorisation systems that categorise individually natural persons based on their biometric data to deduce or infer their race, political opinions, trade union membership, religious or philosophical beliefs, sex life or sexual orientation”
Biometric categorisation systems are legally defined as AI systems for the purpose of assigning natural persons to specific categories on the basis of their biometric data, unless this is ancillary to another commercial service and strictly necessary for objective technical reasons. A ban on such systems was also advocated by the European Data Protection Supervisor and the European Data Protection Board because of their potential for discrimination. According to the AI Act, the prohibition does not cover the labelling or filtering of lawfully acquired biometric datasets, such as images, based on biometric data, or the categorising of biometric data in the area of law enforcement.
Finally, the restriction deals with the use of ‘real-time’ remote biometric identification systems in publicly accessible spaces for the purposes of law enforcement. This is labelled a restriction rather than a prohibition because the AI Act sets out precise requirements that must be met to permit such use. A remote biometric identification system is defined as an “AI system for the purpose of identifying natural persons, without their active involvement, typically at a distance through the comparison of a person’s biometric data with the biometric data contained in a reference database.” A real-time remote biometric identification system is one in which the capturing of the biometric data, the comparison and the identification all occur without a significant delay; this comprises not only instant identification but also limited short delays designed to avoid circumvention. A publicly accessible space is legally defined as “a physical space accessible to an undetermined number of natural persons regardless of capacity or conditions for access.” Law enforcement agencies may use such systems only in narrowly defined cases:
- the targeted search for specific victims of specific crimes;
- the prevention of a specific, substantial and imminent threat to the life or physical safety of natural persons, or a genuine and present or genuine and foreseeable threat of a terrorist attack;
- the localisation or identification of a person suspected of having committed one of the criminal offences listed in the AI Act, for the purpose of conducting a criminal investigation or prosecution or executing a criminal penalty.
Any such use must be proportionate and reflect the specific circumstances and consequences of the deployment. These systems must also undergo a fundamental rights impact assessment and are subject to prior review by a judicial authority or other independent body. EU member states may further specify conditions for the use of such systems in their national legislation.
In summary, the AI Act’s restrictions on certain practices underscore the EU’s commitment to protecting individuals’ rights and democratic values. It’s clear that any use of AI which invasively infringes upon human dignity, autonomy, or fundamental freedoms is strictly off-limits. In our next post, we’ll take a closer look at high-risk AI applications and how to identify them.