The AI Act Chronicles: Scope watch
In our previous blog, we dove into the basics of the EU’s Artificial Intelligence Act (AI Act, AIA). We explored the thinking behind it, the implementation timeline and who will be responsible for oversight. Now, let’s get down to the specifics!
This blog tackles a crucial aspect of the AI Act: its scope. We’ll break down what kind of AI systems fall under the Act’s regulations (the positive scope) and which ones are exempt (the negative scope). We’ll also look at the definitions of some of the key actors in the AI landscape who will be held accountable under the Act’s provisions.
By the end of this blog, you’ll hopefully have a clearer understanding of which AI systems need to comply with the Act’s requirements and which actors share the responsibility for ensuring that compliance.
First and foremost, when talking about the scope of the AI Act it is important to look at the definition of an AI system introduced in Article 3. It was adapted almost word for word from the definition featured in the OECD Recommendation of the Council on Artificial Intelligence and reads as follows: “AI system means a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments”. While there currently isn’t any further explanation from the regulator on what exactly terms such as “autonomy” or “explicit” and “implicit” objectives mean, the OECD has published an Explanatory memorandum with thorough guidelines on how to interpret them. This definition forms part of the material scope of the Act.
The scope of the Act itself can be found in Article 2. It lists the persons who bear obligations later in the text or are otherwise affected by the Act, constituting the personal scope of the AIA. Among these are:
- providers and deployers of AI systems established within the EU (including EU institutions, bodies, offices and agencies) as well as outside the EU,
- importers and distributors of AI systems,
- product manufacturers placing on the market or putting into service an AI system together with their product,
- authorised representatives of providers that are not established in the EU, and
- affected persons located in the EU.
Most of the obligations in the Act, however, fall on providers and deployers, so their definitions deserve a closer look. A provider is a person or public authority that develops or has developed an AI system or a GPAI model and places it on the market under its own name or trademark, whether for payment or free of charge. A deployer, on the other hand, is a person or public body using an AI system under its authority, except where the AI system is used in the course of a personal, non-professional activity. In other words, a deployer did not develop and market the AI system itself but uses a system already developed and marketed by someone else for professional purposes.
The territorial scope of the Act covers AI systems and GPAI models placed on the EU market by actors established both within and outside the EU, as well as AI systems whose providers or deployers are established in a third country but whose outputs are used in the EU. To better understand the latter, imagine a situation where a hospital in Berlin contracts an Indian medtech company to analyze MRI scans using its AI system, which processes the images and provides diagnostic reports that are sent back to the hospital. In this scenario, the Indian AI system processes EU data and provides results to an EU entity but is not placed on the EU market or put into service within the EU. The AI Act, however, still applies in this situation, as foreseen by Recital 22.
In a nutshell, it doesn’t matter whether the provider or deployer is based in the EU, the US or even Australia: as long as the AI system is placed on the market within the EU or its outputs affect persons located within the EU, the AI Act will apply.
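To make this rule of thumb concrete, here is a toy Python sketch encoding the territorial logic described above. It is purely illustrative and not a legal test: the field names and the `aia_applies_territorially` function are our own shorthand, not anything defined by the Act.

```python
from dataclasses import dataclass

@dataclass
class AISystemScenario:
    """Simplified, illustrative model of the facts relevant to territorial scope."""
    placed_on_eu_market: bool  # placed on the market or put into service in the EU
    output_used_in_eu: bool    # the system's outputs are used in the EU (Recital 22)
    provider_location: str     # e.g. "EU", "US", "IN" -- deliberately unused below

def aia_applies_territorially(s: AISystemScenario) -> bool:
    # The provider's or deployer's place of establishment is irrelevant:
    # what matters is whether the system reaches the EU market or
    # whether its outputs are used within the EU.
    return s.placed_on_eu_market or s.output_used_in_eu

# The Berlin hospital / Indian medtech example from above:
mri_analysis = AISystemScenario(
    placed_on_eu_market=False,  # not marketed or put into service in the EU
    output_used_in_eu=True,     # diagnostic reports are used by the Berlin hospital
    provider_location="IN",
)
assert aia_applies_territorially(mri_analysis)  # the AI Act still applies
```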
As already mentioned, the AI Act also outlines certain exceptions to its application. Here’s a quick breakdown of what is exempt:
- Military and Defense (Recital 24): AI systems developed and used solely for military, defense or national security purposes are exempt, regardless of the entity using them. This exemption is justified both by the fact that national security remains the sole responsibility of Member States and by the specific nature and operational needs of national security activities, as well as the specific national rules applicable to those activities.
- International Cooperation (Recital 22): The Act doesn’t apply to public authorities of third countries or international organizations acting within a framework for law enforcement and judicial cooperation with the EU or its Member States, provided they offer adequate safeguards for protecting fundamental rights and freedoms and the cooperation has been established through bilateral agreements.
- Scientific Research (Recital 25): In order to support innovation and respect the freedom of science guaranteed by the EU, AI developed solely for scientific research and development is also exempt from the scope. However, any other AI system that may be used for the conduct of research and development activities should remain subject to the provisions of the AI Act.
- Personal Use: AI systems used by natural persons for purely personal, non-professional activities are also exempt.
- Pre-Market Development (Recital 25): Provisions of the Act (excluding those regarding sandboxes and real-world testing) also don’t apply to product-oriented research, testing and development activities before the AI system or model is placed on the market.
- Free Open-Source Licenses (Recitals 89, 102 and 103): AI systems and models released under a free and open-source license are exempt, since they can contribute to research and innovation in the market and can provide significant growth opportunities for the Union economy (transparency obligations still apply). The exemption doesn’t apply to AI systems that fall within banned practices or are considered high-risk.
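The negative scope above can likewise be read as a checklist. Extending the earlier toy sketch, the function below flags the exemptions discussed in this section. Again, the parameter names are hypothetical simplifications of the actual legal tests, and the open-source carve-out is shown without its transparency-obligation nuances.

```python
def exempt_from_aia(
    military_or_national_security: bool = False,  # Recital 24
    third_country_cooperation: bool = False,      # Recital 22, with adequate safeguards
    solely_scientific_research: bool = False,     # Recital 25
    purely_personal_use: bool = False,
    pre_market_development: bool = False,         # Recital 25; sandbox and real-world testing rules still apply
    free_open_source: bool = False,               # Recitals 89, 102 and 103
    banned_or_high_risk: bool = False,
) -> bool:
    """Illustrative-only checklist of the exemptions discussed above."""
    if free_open_source:
        # The open-source exemption does not cover banned practices or high-risk systems.
        return not banned_or_high_risk
    return (
        military_or_national_security
        or third_country_cooperation
        or solely_scientific_research
        or purely_personal_use
        or pre_market_development
    )

# A natural person using an AI system for a purely personal, non-professional activity:
assert exempt_from_aia(purely_personal_use=True)
# An open-source model that would qualify as high-risk is not exempt:
assert not exempt_from_aia(free_open_source=True, banned_or_high_risk=True)
```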