With the advent of generative artificial intelligence systems deployed for public use, the fast-paced development of the technology inevitably disrupts current regulatory frameworks. Many individuals, as well as businesses, seize opportunities to generate artificial content with these systems, sparking questions that are not sufficiently answered by legislation or judicial interpretation.
These questions relate to two broad areas: the legality of the data used to train generative AI systems based on large language models (LLMs), and the lawful use of artificially generated content. Training data may contain large volumes of personal and non-personal data, as well as potentially copyrighted materials. Additionally, questions concerning the ownership and commercial use of generated content are essential for its lawful use. Moreover, generative AI systems carry risks, including the generation of violent content and disinformation, deepfakes, potential bias, and issues related to liability and accountability.
The dissertation shall contribute to a more nuanced understanding of generative AI and its regulatory landscape within the EU. The topic covers a range of research challenges and areas, including:
- Copyright challenges posed by generative AI systems
- Data protection challenges posed by generative AI systems
- Transparency of generative AI systems
- Accountability and liability of providers and users of generative AI systems
- Tackling the spread of disinformation through regulation of generative AI systems
Supervising team
Matúš Mesarčík
Ethics and Law Specialist
Artificial intelligence (AI) is revolutionizing healthcare, offering immense potential for diagnosis, treatment, and personalized medicine. However, the European Union (EU) faces a complex web of legal requirements aimed at ensuring patient safety, privacy, and the ethical use of AI in this sensitive domain. Furthermore, the specific legal requirements for AI systems in general differ from the requirements for medical devices under sector-specific regulation.
The dissertation shall delve into the intricate legal landscape surrounding AI-powered healthcare systems in the EU. It may analyze the interplay between the recently adopted EU Artificial Intelligence Act (AI Act), with its risk-based approach, and sector-specific healthcare legislation addressing medical device safety and ethical considerations. The topic covers a range of research challenges and areas, including:
- Identifying potential conflicts and redundancies between the AI Act and existing regulations
- Ensuring a sufficient and appropriate level of human oversight of AI systems in healthcare
- Development of an integrated approach for alignment with the EU AI Act and medical device regulations
- Ensuring a sufficient and appropriate level of transparency in the use of AI systems in healthcare
- Examining legal measures to ensure fairness and non-discrimination in AI algorithms used for diagnosis, treatment decisions, and resource allocation
- Defining legal frameworks for assigning responsibility for potential harms caused by AI systems in healthcare settings
Supervising team
Matúš Mesarčík
Ethics and Law Specialist