The AI Act Chronicles: Make fundamental rights and freedoms great again

In our previous blogs, we have explained the general background and philosophy of the EU’s Artificial Intelligence Act (AI Act, AIA) and its scope. In the two most recent instalments, we explored the so-called ‘red lines’ (part 1 and part 2). Today, we are going to look into one of the cornerstones of the AI Act – fundamental rights and freedoms and the related requirements for assessing the impacts of AI systems on these rights.

EU Fundamental Rights Framework 

The Charter of Fundamental Rights of the European Union outlines the essential rights and freedoms enjoyed by individuals within the EU. This pivotal document, which encompasses personal, civic, political, economic, and social rights, was adopted in 2000 and became legally binding in 2009. It is part of primary EU law, meaning that fundamental rights are positioned as core values of the EU. The Charter covers a wide range of rights and freedoms, including the right to life and dignity of every person, personal liberty and security, freedom of thought, religion, expression, and assembly, and the principle of equality, which guarantees non-discrimination. As a cornerstone of the EU’s identity, the Charter embodies democratic values, the rule of law, and respect for human dignity, influencing key policies in areas such as migration, security, social justice and now also AI.

The role of fundamental rights within the EU is significant and multifaceted. The Charter provides a binding legal framework requiring EU institutions and member states to uphold these rights when applying EU law, thereby fostering justice and equality. It also empowers individuals to challenge violations of their rights by EU institutions or by member states implementing Union law. Last but not least, fundamental rights are also vital for safeguarding individuals in the context of corporate responsibility and of products placed on the EU market, including AI systems.

Why do we need to consider fundamental rights during the AI lifecycle?

There have been several documented cases in which the deployment of AI systems led to violations of fundamental rights and freedoms. A notable example is the deployment of facial recognition technology by law enforcement agencies. In several EU countries, including France and the Netherlands, police have utilized AI-based facial recognition tools to identify individuals in public areas. This practice has sparked significant concerns regarding privacy and freedom of expression, as it can result in intrusive surveillance and deter public dissent. Evidence suggests that such systems have been employed to monitor protests and public gatherings, potentially discouraging individuals from exercising their right to peaceful assembly.

Obligations within the AI Act

Respect for fundamental rights is explicitly recognized as one of the purposes of the AI Act in Article 1. Simultaneously, fundamental rights violations caused by the deployment and use of inappropriate AI systems are one of the main reasons why the AI Act was adopted in the first place. There is, however, more than just this one provision. The AI Act includes specific obligations that need to be met when it comes to the protection of fundamental rights and freedoms. 

These obligations take the form of a requirement to conduct a fundamental rights impact assessment (FRIA). The FRIA is an assessment process to identify the specific risks to the rights of individuals or groups of individuals likely to be affected by an AI system and to identify measures to be taken in case these risks materialize. Article 27 further specifies that, apart from identifying risks, rightsholders and countermeasures, the outcome of the assessment should also include a description of the deployer’s processes, the period of time in which the high-risk AI system is intended to be used and the frequency of its use, and a description of the human oversight measures to be implemented according to the instructions for use.
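
To make these requirements more tangible, here is a minimal sketch of how such an assessment could be captured as structured data. The dataclass and its field names are purely illustrative assumptions on our part; the AI Act prescribes the content of the FRIA, not its format.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class FriaRecord:
    """Illustrative container for the elements Article 27 expects a FRIA to cover.
    The structure and field names are our own assumptions, not an official template."""
    deployer_processes: str              # description of the deployer's processes using the system
    period_of_use: str                   # period of time the high-risk AI system is intended to be used
    frequency_of_use: str                # how often the system is intended to be used
    affected_groups: List[str]           # individuals or groups likely to be affected
    identified_risks: List[str]          # specific risks to the rights of those individuals or groups
    human_oversight_measures: List[str]  # oversight measures per the instructions for use
    countermeasures: List[str]           # measures to be taken in case the risks materialize

# Hypothetical example entry for an imaginary CV-screening deployment
fria = FriaRecord(
    deployer_processes="AI-assisted ranking of incoming job applications",
    period_of_use="12 months, reviewed quarterly",
    frequency_of_use="every incoming application",
    affected_groups=["job applicants"],
    identified_risks=["indirect discrimination against protected groups"],
    human_oversight_measures=["a recruiter reviews every automated ranking"],
    countermeasures=["suspend automated ranking and notify the provider"],
)
```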

FRIA is obligatory for several categories of actors in different situations (a simplified self-check sketch follows the list below):

For providers of high-risk AI systems / AIA Art. 9(2)(a):

  • As a part of broader risk assessment

For deployers of high-risk AI systems / AIA Art. 27(1): 

  • If the deployer is a body governed by public law or a private entity providing public services (regardless of the sector of use)
  • If the system is used for assessing the creditworthiness of a person or assessing risk for life/health insurance

For law enforcement authorities / AIA Art. 5(2):

  • If the system is used for a ‘real-time’ remote biometric identification (e.g. facial recognition) in a publicly accessible space

For providers of general-purpose AI models and systems:

  • When GPAI systems are used as high-risk AI systems by themselves or are components of other high-risk AI systems / AIA Recitals 85 and 97
  • If the GPAI models pose a systemic risk / AIA Art. 55(1)(b) in the context of Art. 3(65)
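
As a quick orientation, the situations above can be read as a set of triggering conditions. The sketch below encodes them in Python purely for illustration; the parameter names and the function itself are our own simplification of the listed provisions and are not a legal test.

```python
from typing import List

def fria_grounds(
    provider_of_high_risk_system: bool = False,
    deployer_is_public_body_or_provides_public_services: bool = False,
    system_assesses_creditworthiness_or_insurance_risk: bool = False,
    law_enforcement_realtime_biometric_id_in_public_space: bool = False,
    gpai_used_as_or_within_high_risk_system: bool = False,
    gpai_model_with_systemic_risk: bool = False,
) -> List[str]:
    """Return the provisions from the list above that appear to apply.
    A simplified illustration only, not a legal test."""
    grounds = []
    if provider_of_high_risk_system:
        grounds.append("Art. 9(2)(a) - as part of the provider's broader risk assessment")
    if deployer_is_public_body_or_provides_public_services:
        grounds.append("Art. 27(1) - deployer governed by public law or providing public services")
    if system_assesses_creditworthiness_or_insurance_risk:
        grounds.append("Art. 27(1) - creditworthiness or life/health insurance risk assessment")
    if law_enforcement_realtime_biometric_id_in_public_space:
        grounds.append("Art. 5(2) - real-time remote biometric identification in public spaces")
    if gpai_used_as_or_within_high_risk_system:
        grounds.append("Recitals 85 and 97 - GPAI used as or within a high-risk AI system")
    if gpai_model_with_systemic_risk:
        grounds.append("Art. 55(1)(b) - GPAI model posing a systemic risk")
    return grounds

# Example: a bank deploying an AI credit-scoring system
print(fria_grounds(system_assesses_creditworthiness_or_insurance_risk=True))
```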

Although FRIA under the AI Act is not mandatory beyond the scope discussed above, both providers and deployers of AI systems may still face legal action for breaches of fundamental rights. Member States can also be held accountable for failing to uphold human rights standards within their jurisdiction. This obligation therefore extends to AI applications that are not classified as high-risk under the AI Act but nonetheless affect fundamental rights, as these rights are safeguarded regardless of the system’s risk classification. As a result, we recommend that all providers and deployers who suspect their AI system may affect any of the fundamental rights consider conducting FRIA as part of wider risk management.

KInIT can help – our FRIA methodology

There is currently no official methodology or template for conducting FRIA as intended by the AI Act, but plenty of methodologies have been made available by researchers and public institutions. Some of them are the length of a short novel, some are hardly applicable to AI systems, and most require the assistance of a team of facilitators from different backgrounds (law, ethics, sociology, etc.) or at least some knowledge of the fundamental rights domain, including case law.

That is why, after surveying the available methodologies, we decided to create our own – tailored to assessing AI systems and aligned with all the requirements of the AI Act. KInIT experts will guide you through the whole process, which consists of a couple of workshops and their professional assessment, and you will receive a report with everything necessary to fulfil the obligations in Article 27. At the end of the assessment, you will have an overview of the impacts your AI system has on fundamental rights, the related risks, and countermeasures ready for implementation. In case of any questions, don’t hesitate to contact us.

In our next blog, we’ll take a closer look at high-risk AI applications and how to identify them.