ESSAI 2025 – Ethics and Law in Trustworthy AI: Foundations and Applications

Trustworthy AI, which integrates legal and ethical considerations, is a critical focus in responsible AI development and use. Highlighted by key European Union initiatives like the Ethics Guidelines for Trustworthy AI (EGTAI), the Assessment List for Trustworthy AI (ALTAI), and the Artificial Intelligence Act (AIA), it underscores the importance of ethical principles in AI governance. Our course for the 3rd European Summer School on Artificial Intelligence introduces the concept of trustworthy AI, its foundational ethical principles, and its alignment with EU legal frameworks, including the AIA and other relevant regulatory initiatives in the digital space. 

The course will cover several areas of trustworthy AI from the perspective of EU law and AI ethics, with a focus on the AI Act (AIA) and the Ethics Guidelines for Trustworthy AI (EGTAI). Our aim is not only to introduce you to the basic concepts of hard (law) and soft (ethics) AI regulation but also to raise your moral sensitivity in AI research and development practices. Drawing on our experience in conducting AI ethics-based assessments with diverse organisations and teams, we provide various examples of how key requirements on trustworthy AI may be operationalised in practice, together with some of the most pressing ethical and societal risks.

Lecture 1 – Trustworthy AI and EU regulatory frameworks

In the first lecture, we will introduce the different modalities of regulating social behaviour and explain the key differences between regulation by law and regulation by ethics. Building on these modalities, we will examine whether technologies, including AI, are value-neutral, and discuss the concept of the trustworthiness of AI. We will then introduce the digital regulation in the European Union that affects the research and use of AI, focusing on how these regulations work, what kinds of assessments of AI systems they impose on AI providers, and how these obligations serve to identify and mitigate potential risks.

Lecture 2 – The role of ethics in AI 

In the second lecture, we proceed to the softer form of AI governance: AI ethics. We introduce a brief history of AI ethics, the problem of principlism and other current challenges. We will introduce the concept of the “ethification” of ICT regulation and show how ethics and law work together via ethics-based assessments. We will identify the red lines in the use of AI, which AI systems are banned under the AI Act, and how such red lines can be defined. We then delve into the analysis of AI stakeholders and affected groups, explaining the different types of stakeholders and the strategies for engaging them during the development and deployment of AI systems.

Lecture 3 – Human agency and data governance

We will apply the Assessment List for Trustworthy AI (ALTAI) to address concerns about human autonomy and control. We will delve into questions about the possibility of manipulating human behaviour and the consequences of overreliance on machine decisions. We introduce the regulatory requirements on human oversight and discuss the role of human control in algorithmic decision-making, with a specific focus on the role of humans in the context of the AI Act and the GDPR provisions on automated decision-making. Regulatory requirements for training AI models on personal data and the related legal provisions will be elaborated, and we explain how to identify fair data sources from the perspective of data ethics as an accountable practice. This step is essential from a legal point of view, as it provides students with an overview of the data regulation relevant to AI systems.

Lecture 4 – Transparency and Fairness in AI

In this lecture, we introduce other key areas of trustworthy AI from the ALTAI: transparency and fairness. We will present the various levels of AI transparency from legal and ethical perspectives and discuss specific legal provisions regarding awareness, traceability and a potential right to explanation. We will examine the distinction between formal explainability requirements and situations in which stakeholders can genuinely understand a system’s decisions. We reflect on the issue of fairness and non-discrimination against the background of different definitions of fairness, introduce the problem of algorithmic bias and how to flag it, and point out the importance of stakeholder feedback and the engagement of various affected groups in the design process.

Lecture 5 – AI accountability and management of ethical risks

The final lecture will be dedicated to the problem of legal and ethical accountability. We introduce the concept of responsibility gaps and explain various concepts of accountability in law and ethics, including liability, moral responsibility and forward-looking perspectives on accountability. The lecture will briefly outline issues connected to AI liability. We introduce a definition of ethical risk and a methodology for assessing such risks based on the likelihood, severity and exposure of impacts, considering the most affected stakeholders. We discuss methods to mitigate the most imminent risks and suggest the roles and responsibilities that need to be assigned to implement effective countermeasures, also in light of the moral sensitivity of AI practitioners.
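As a purely illustrative sketch of how a likelihood–severity–exposure assessment might be operationalised, the snippet below combines three ratings into a single score. The 1–5 scales, the multiplicative model and the priority thresholds are assumptions chosen for illustration, not the methodology taught in the course or prescribed by any framework.

```python
# Illustrative ethical-risk scoring: risk = likelihood * severity * exposure.
# The 1-5 scales, multiplicative model and thresholds are assumptions
# for illustration only, not a prescribed assessment methodology.

def risk_score(likelihood: int, severity: int, exposure: int) -> int:
    """Combine three 1-5 ratings into a single score (1-125)."""
    for name, value in (("likelihood", likelihood),
                        ("severity", severity),
                        ("exposure", exposure)):
        if not 1 <= value <= 5:
            raise ValueError(f"{name} must be between 1 and 5, got {value}")
    return likelihood * severity * exposure

def risk_level(score: int) -> str:
    """Map a score to a coarse priority band (thresholds are illustrative)."""
    if score >= 60:
        return "high"
    if score >= 20:
        return "medium"
    return "low"

# Example: a moderately likely, severe risk affecting many stakeholders.
score = risk_score(likelihood=3, severity=4, exposure=5)
print(score, risk_level(score))  # 60 high
```

In practice such ratings would be elicited from the affected stakeholders identified in the assessment, and the highest-scoring risks would be assigned to the responsible roles discussed above.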

Explore More: Further Reading and Resources