Stance on the regulation of Generative Artificial Intelligence
Mesarcik, M., Slosiarova, N., Podrouzek, J., Bielikova, M.
This document is the KInIT Stance on the regulation of Generative Artificial Intelligence.
In this stance we present our positions on selected aspects of the regulation of general-purpose AI, foundation models and generative AI systems, as proposed in the positions of the European Parliament and the Council on the Artificial Intelligence Act.
Our concerns primarily revolve around the correct definition of general-purpose AI systems, foundation models and generative AI. We further suggest focusing on short-term risks, transparency, privacy and data governance, ex-ante auditability and regulatory oversight. We also propose know-your-customer checks.
In our view, the definition proposed by the Council of the EU is very broad and includes applications not specific to general-purpose AI, e.g. translation.
In our opinion, the distinction between foundation models and general-purpose AI shall be explained more thoroughly. Furthermore, vague notions such as "broad scale" or "wide range of applications" may be interpreted restrictively by providers, allowing them to escape the scope of the requirements.
We should focus on the risks already posed by AI, and by generative AI in particular, including transparency, bias, privacy, human oversight and sustainability.
From a regulatory point of view, the limits of the deployment and use of generative AI systems must be clearly established, as well as the range of persons who may come into contact with them. We are of the opinion that users of generative AI systems should be informed that they are not interacting with a human, or that they are receiving machine-generated output. Providers or deployers should present such information where the interaction with the system takes place, along with a warning that the generated output may contain information that is untrue or unverified. This warning shall remain visible during the interaction with the generative AI.
We understand the limited obligations regarding generated content for the purposes of freedom of speech. However, any such exceptions, if introduced, shall be carefully balanced against the risks of generative AI. We are of the opinion that providers of foundation models and generative AI systems shall document the provenance of the data sources used to train their models, together with the information required by EU data protection law.
In our opinion, the requirements for foundation models and generative AI shall be auditable with the aid of a third party if the model or system is intended to be used in a high-risk area.
We believe that the European Commission is best suited to provide uniform and effective oversight over foundation models and generative AI.
We believe that providers of general-purpose AI systems, generative AI systems and foundation models shall conduct "Know-Your-Customer" checks to ensure that their models are used according to the provided instructions, thus mitigating the risk of aiding any human rights abuse.
Cite: Mesarcik, M., Slosiarova, N., Podrouzek, J., Bielikova, M. Stance on the Regulation of Generative Artificial Intelligence. Kempelen Institute of Intelligent Technologies. October 2023.
This work was supported by the Slovak Research and Development Agency under the Contract no. APVV-22-0323 (Philosophical and methodological challenges of intelligent technologies).