The AI Act Chronicles: Red Lines (part 1)
In our previous blogs we delineated the general background and philosophy of the EU’s Artificial Intelligence Act (AI Act, AIA) and its scope. As discussed, the AI Act is based on an analysis of the risks to health, safety and fundamental rights and freedoms stemming from the deployment and use of AI systems. At the top of the regulatory pyramid are AI practices prohibited in the EU – the banned practices. This blog delves into what is banned and why.
When deciding what to ban, the legislator has two options. It can provide criteria for assessing when an AI system should be banned, e.g. AI systems that have detrimental effects on fundamental rights and freedoms or involve subliminal manipulation of individuals. Alternatively, the legislator may provide a list of AI systems (or their applications) that are banned. The latter is exactly the approach taken in the AI Act.
The list can be found in Article 5 and contains seven explicitly listed banned practices and one restricted use – the use of ‘real-time’ remote biometric identification systems in publicly accessible spaces for the purposes of law enforcement. The Impact Assessment accompanying the original proposal of the AIA states that, had these AI systems not been banned, their use would go against the EU values of democracy, freedom and human dignity, and violate fundamental rights, including privacy and consumer protection.
The first two prohibited cases deal with AI systems deploying subliminal or manipulative techniques, or exploiting the vulnerabilities of individuals or a group of persons.
The AIA prohibits:
- “AI systems that deploy subliminal techniques beyond a person’s consciousness or purposefully manipulative or deceptive techniques, with the objective, or the effect of materially distorting the behaviour of a person or a group of persons by appreciably impairing their ability to make an informed decision, thereby causing them to take a decision that they would not have otherwise taken in a manner that causes or is reasonably likely to cause that person, another person or group of persons significant harm.”
The key to triggering this ban is what constitutes a subliminal technique beyond a person’s consciousness, or a purposefully manipulative or deceptive technique. These notions are, unsurprisingly, not defined in the AI Act. According to recommendations from the Council of Europe in this context, particular attention should be paid to the capacity of AI systems to use personal and non-personal data to sort and micro-target people, to identify individual vulnerabilities and exploit accurate predictive knowledge, and to reconfigure social environments in order to meet specific goals and vested interests. According to the Impact Assessment, this prohibition is justified by the increasing power of algorithms to subliminally influence human choices and important decisions, interfering with human agency and the principle of personal autonomy.
According to the wording of the ban, it is sufficient to establish that there is “the effect of materially distorting the behaviour of a person or a group of persons” – manipulative intent is not required. For an AI system to be classified as banned, it would need to impair the ability of an individual to make an informed decision and, as a consequence, cause (or be reasonably likely to cause) significant harm. Again, what exactly constitutes significant harm is not legally defined, leaving space for different interpretations. As evidence in support of this prohibition, the legislator cites cases of manipulative digital assistants. The prohibition does not apply in the context of medical treatment, such as psychological treatment of a mental illness or physical rehabilitation, provided these are carried out in accordance with applicable law and medical standards, for example with the explicit consent of the individuals concerned or their legal representatives. Common legitimate commercial practices, such as those in the field of advertising, should not fall under this ban.
Additionally, in this category the AIA prohibits:
- “AI systems that exploit any of the vulnerabilities of a natural person or a specific group of persons due to their age, disability or a specific social or economic situation, with the objective, or the effect, of materially distorting the behaviour of that person or a person belonging to that group in a manner that causes or is reasonably likely to cause that person or another person significant harm.”
The second prohibition considers vulnerable characteristics of certain individuals, including children, the elderly and people with disabilities, as well as specific social or economic situations that are likely to make people more vulnerable to exploitation (for example, people living in extreme poverty, or ethnic or religious minorities). Although the scope of this prohibition may, in some cases, overlap with the previous one, it specifically targets practices that do not necessarily use subliminal or similar techniques, but instead exploit the diminished decision-making autonomy of certain individuals. This prohibition is in line with several international recommendations and covers situations such as the use of AI-powered applications containing sensitive private information or locking users into filter bubbles.
The next prohibition concerns social scoring. The AI Act prohibits:
- “AI systems for evaluation or classification of natural persons or groups of persons over a certain period of time based on their social behaviour or known, inferred or predicted personal or personality characteristics, with the social score leading to either or both of the following:
- detrimental or unfavourable treatment of certain natural persons or groups of persons in social contexts that are unrelated to the contexts in which the data was originally generated or collected;
- detrimental or unfavourable treatment of certain natural persons or groups of persons that is unjustified or disproportionate to their social behaviour or its gravity.”
This ban applies to both public and private actors and resembles the infamous social credit scoring system piloted in some Chinese regions. The prohibition is warranted because large-scale citizen scoring would unjustly limit individuals’ fundamental rights and contradict the values of democracy and freedom, the principle of equal treatment under the law, and respect for human dignity. It is essential to note that the prohibition also applies to “inferred” characteristics: these are not directly observed but deduced from available data through analysis or modeling, which raises distinct issues of privacy, fairness and accuracy.
In the next part we will take a closer look at the remaining banned practices, including predictive policing, the untargeted scraping of facial images, inferring emotions and the use of biometrics.