TAILOR: Foundations of Trustworthy AI – integrating, learning, optimisation and reasoning
Artificial Intelligence (AI) has grown at an unprecedented pace in the last decade. It has been applied to many industrial and service sectors, becoming ever-present in our everyday life. Digital technologies improve and simplify our lives, but they also raise new ethical, societal and legal issues.
The TAILOR project is building a network of excellence centres on the foundations of Trustworthy AI. TAILOR’s vision is to make Europe the global role-model for responsible AI. This initiative brings together communities in an academic-industrial network with the aim of developing the scientific foundations for realising the European vision of human-centred Trustworthy AI.
The purpose of the EU Project TAILOR is to build the capacity to provide the scientific foundations for Trustworthy AI in Europe. This should be achieved by developing a network of research excellence centres leveraging and combining learning, optimisation, and reasoning.
Thanks to the project, KInIT researchers, together with researchers from Comenius University, are helping to shape the future of AI in Europe. The extensive network of project partners connects the EU's top researchers, creating unique opportunities and interactions for our researchers.
The TAILOR project has five main research areas:
- Trustworthy AI – developing the foundations for trustworthy AI
- Paradigms and representations – combining and integrating learning, reasoning and optimisation
- Acting – learning and reasoning to plan, act and monitor behaviour
- Social AI – learning and reasoning for multi-agent interactions and human-AI collaboration
- Auto AI – automating the development and deployment of AI systems and democratising access to state-of-the-art technology
More and more often, artificial intelligence systems are used to suggest decisions or to propose actions to human experts. Because these systems might influence our lives and have a significant impact on the way we decide, they need to be trustworthy.
- How can a radiologist trust an AI system analysing medical images?
- How can a financial broker trust an AI system providing stock price predictions?
- How can a passenger trust a self-driving car?
- Can AI decide who will advance to the next round of a job interview?
- Can AI select patients for preventive health examinations?
These are just a few examples of fundamental questions that require deep analysis and fundamental research. AI professionals should be educated in the scientific foundations of Trustworthy AI, so that they are able to design trustworthy AI systems.
Artificial intelligence brings huge opportunities, yet we often do not understand the outcomes or decisions of AI systems. It is therefore important to align the development and deployment of AI systems with ethical principles and regulatory requirements. This requires a sensitive approach and the ability to understand these systems in their complexity and with regard to the broader social and legal context.
Mária Bieliková, Lead and Researcher, KInIT
Kempelen Institute of Intelligent Technologies
Partners
Project team
Mária Bieliková
Lead and Researcher
Michal Kompan
Lead and Researcher
Viera Rozinajová
Lead and Researcher
Martin Tamajka
Research Engineer
Branislav Pecher
PhD Student
This project is funded by the European Union.