KInIT presents its stance on the EC’s Artificial Intelligence Act (AIA)

The long-awaited and first-ever attempt to regulate AI, the Proposal for a Regulation on Artificial Intelligence, known as the Artificial Intelligence Act (AIA), was presented by the European Commission in April 2021. Since then, the proposal has been open to public debate to improve and sharpen its current form. The Kempelen Institute of Intelligent Technologies (KInIT) has taken this opportunity and compiled its own Stance to address ten crucial concerns about the future regulation of AI.

Our stance is unique in that it brings together insights and expertise from KInIT's AI researchers while placing both ethics and the pragmatic implementation of AI systems in real life at its core.

We highlight ten areas that require further discussion, and for each we propose concrete suggestions to help improve the AIA.

Our concerns

In particular, we offer an adjusted, more specific definition of AI to minimize the diverse interpretations that result from vagueness. We also contribute to the debate on banned and high-risk systems, with emphasis on procedural and substantive aspects to be added to Annex III.

We understand that the EC's Proposal for a Regulation of AI builds on the work on the ethical implications of AI by the High-Level Expert Group on AI, appointed by the EC itself. Our stance therefore underlines the importance of including these ethical implications in the binding part of the amended Proposal, placing conformity with ethics at its core.

In other parts of the Stance, we argue for a more balanced regulatory approach to systems that are about to be introduced and those that are already on the market, pinpointing the potential dangers of a double-track effect if the proposed regulation stays in its current form.

We challenge the EC's proposed ban on certain biometric systems and argue that any complete ban of this technology could result in many missed opportunities and hinder useful innovation. On the other hand, we propose categorising deepfakes as high-risk AI systems, and we criticise the AIA's lack of a definition of inappropriate use of deepfakes.

We also support the need for EU supranational bodies to strengthen EU-level oversight. We urge the new Proposal to consider subjecting public authorities to administrative penalties as well, so that citizens and public authorities are held equally accountable to the same rules.

Towards trustworthy AI

Overall, KInIT's stance on the AIA reflects our joint effort to improve the final wording of this legislation and offers a critique aimed at sharpening the endeavour to regulate artificial intelligence effectively. Ultimately, we hope that the proposed legislation will succeed in achieving its goals, so that we can all benefit from the development and deployment of trustworthy artificial intelligence systems in our lives.