The AI Act Chronicles: What does adoption of the AIA mean for you?

In an era where artificial intelligence (AI) permeates many aspects of our lives, the European Union (EU) has taken a significant step forward with the enactment of the EU Artificial Intelligence Act (AI Act, the Act). It was proposed by the EU Commission in April 2021 and, after months of uneasy negotiations, it was finally approved by the EU Parliament on 13 March 2024. This piece of legislation aims to provide a framework that not only ensures consistency for providers and deployers but also prioritizes safeguarding the security, health and rights of EU citizens.

The preamble of the Act aptly acknowledges the dual nature of AI, recognizing its potential benefits alongside the necessity to address associated risks to fundamental rights, the environment, and democratic processes. Various instances of biased systems infringing on the rights of citizens, or of AI-generated deepfakes influencing elections, serve as sobering reminders of the importance of regulatory measures.

Central to the AI Act is its risk-based approach, which categorizes AI systems into different risk groups, each subject to obligations whose stringency generally increases with the level of risk. These groups include banned practices, high-risk, limited risk, and low risk, with additional obligations for providers of general-purpose AI models.

Banned practices under the AI Act include, among others, emotion recognition in workplaces and educational settings, social scoring for public and private purposes, and biometric categorization of individuals.

The focal point of this regulation is high-risk AI systems. High-risk AI applications span a wide spectrum, ranging from biometrics to critical infrastructure, education and vocational training, employment and workers' management, essential private and public services (e.g. healthcare, banking and insurance), certain systems in law enforcement, migration and border management, and justice and democratic processes. Obligations for these systems include assessing and reducing risks, maintaining use logs, meeting certain transparency requirements and ensuring human oversight. Extensive obligations relate to appropriate data governance and practices, including the use of sufficiently representative training, testing and validation datasets. Before placing high-risk AI systems on the EU market, providers must also complete a conformity assessment. Other specific requirements (including conducting a fundamental rights impact assessment in certain cases) are set forth for deployers.

Following its approval by the European Parliament on 13 March 2024, the AI Act will come into effect twenty days after its publication in the Official Journal of the EU. Its full applicability will be realized twenty-four months thereafter, under a phased implementation approach. Within six months, prohibited systems must be decommissioned, followed by the application of obligations for general-purpose AI after twelve months. High-risk systems explicitly defined within the regulation become subject to all rules of the AI Act within twenty-four months, with additional obligations for high-risk systems that also fall under other EU harmonization rules applying after thirty-six months.

Effective oversight is paramount to the implementation of any regulation. In the case of the AI Act, Member States are tasked with designating national competent authorities to supervise its application and implementation, along with conducting market surveillance. Additionally, each Member State must appoint a national supervisory authority as an official point of contact with the public, representing the country in the European Artificial Intelligence Board. This Board will be supported by an advisory forum comprising a diverse array of stakeholders, including industry, start-ups, SMEs, civil society, and academia. Furthermore, the European AI Office has been established within the Commission. Its responsibilities include supervising general-purpose AI models and cooperating with the European Artificial Intelligence Board, and it will be supported by a scientific panel of independent experts.

In summary, the EU AI Act represents a significant milestone in ensuring responsible development of trustworthy AI while simultaneously upholding fundamental rights and democratic values. By adopting a risk-based approach and establishing comprehensive oversight mechanisms, the EU aims to strike a balance between fostering innovation and safeguarding the interests of its citizens.

Consider this short blog an introduction to our series, which will delve deeper into the specifics of the AI Act. In this post, we have briefly explained the background of the AI Act, its aims and its basic philosophy. In our next entries we will elaborate on the scope of the AI Act, banned practices, high-risk AI systems, transparency obligations, requirements for general-purpose AI models, fundamental rights impact assessments, regulatory sandboxes, and the Act's application to small and medium enterprises.