TECHNE: Philosophical and methodological challenges of intelligent technologies

Together with the Institute of Philosophy of the Slovak Academy of Sciences, we will analyze contemporary philosophical and methodological issues raised by AI systems and intelligent technologies in general.

Intelligent technologies, and AI systems in particular, such as recommender systems, autonomous vehicles, or AI-powered biometrics, give rise to various philosophical and methodological challenges. The most pressing of these issues arise in moral and legal philosophy. For example, how should an intelligent system decide in a situation of moral uncertainty? Can a machine have moral standing? Can it ever have moral or legal rights and duties? Who, if anyone, is morally culpable or legally liable for wrongs seemingly done by the machine?

The impacts of intelligent technologies on society seem to give rise to gaps in moral culpability and legal liability. In the TECHNE project, our core question is whether this is mere appearance or whether such gaps really exist. If they do, our legal and moral systems are not equipped to settle all moral and legal problems arising from the use of intelligent technologies, and we must ask whether these gaps can be bridged, and if so, how. This will be the core aim of our exploration.

The project will set out to resolve these timely problems and consider notions that venture beyond single individuals and their backward-looking responsibility: collective culpability, vicarious liability, and forward-looking responsibility will be proposed as tools to overcome the above problems in moral and legal philosophy. The project will also methodologically assess the debate on intelligent technologies in moral and legal philosophy and propose an exhaustive and mutually exclusive classification of techno-responsibility gaps.

KInIT will be primarily responsible for assessing the existing regulatory (ethical and legal) frameworks of intelligent technologies, with a specific focus on the European context and on forward-looking concepts of responsibility.

Traditional notions of backward-looking responsibility have proved inadequate in situations where reversing negative effects is considerably more difficult than preventing them in the first place. At the same time, responsible research and development of intelligent technologies faces the Collingridge dilemma: while a technology is still easy to control, its negative effects are hard to predict, and by the time those effects become apparent, the technology is entrenched and difficult to control. We will therefore compare the applicability of forward-looking and backward-looking perspectives, and their combination, in the context of ethics-based assessments of intelligent technologies.

This has practical relevance in the context of high-risk AI systems and potentially prohibited AI practices as defined in the proposal for the EU Artificial Intelligence Act. It also points to another aim that KInIT will pursue in the TECHNE project: the problem of prohibited practices, or red lines, in AI in the context of forward-looking responsibility gaps. The lack of general conditions for assessing whether an intelligent technology should be prohibited leads to moral and legal uncertainty. We need to clarify who will be accountable for deciding which features of an AI system pose imminent risks to society, who will be responsible for mitigating these risks, and who will be responsible for identifying risks that cannot be offset by any countermeasures.

KInIT will also reflect on forthcoming regulation, mainly the recently proposed directive on AI liability rules. We will take procedural aspects into account, including the disclosure of evidence and the rebuttable presumption of causality. The proposal is highly relevant to the discussions above, as it reflects the intended regulatory answer to the aforementioned issues. In addition, the EU has proposed a revision of the current product liability framework to cover digital manufacturing files and software, with the aim of modernizing the law for the digital era.


"Questions of moral responsibility, accountability and liability are at the core of the concept of trustworthy AI. They should be answered for every AI system that can be placed on the market, especially for AI systems that pose high risks to our health, safety or fundamental rights."

Juraj Podroužek, Ethics and Human Values in Technology Team Lead

Kempelen Institute of Intelligent Technologies

Partners:

Institute of Philosophy of the Slovak Academy of Sciences

Project team

Juraj Podroužek
Lead and Researcher
Matúš Mesarčík
Ethics and Law Specialist
Štefan Oreško
Researcher