The road to future AI is paved with trust
The presence of artificial intelligence (AI) in our everyday lives is increasing, and many researchers believe that what we have seen so far is only the beginning. For that growth to continue, however, AI must be trustworthy in all situations. TAILOR is an EU project that has drawn up a research-based roadmap intended to guide research funding bodies and decision-makers towards the trustworthy AI of the future.
TAILOR is one of six research networks set up by the EU to strengthen research capacity and develop the AI of the future. TAILOR is laying the foundation of trustworthy AI by drawing up a framework, guidelines and a specification of the needs of the AI research community. "TAILOR" is an abbreviation of Foundations of Trustworthy AI – Integrating Learning, Optimisation and Reasoning.
The TAILOR network consists of 55 partners from all around Europe. “We strive for excellent and responsible research of intelligent technologies. Being responsible in AI research means that we should be aware of possible risks and be prepared to face them before they even materialise,” says Mária Bieliková, the lead scientist on the project for slovak.ai, the Slovak TAILOR partner, which involves researchers from the Faculty of Mathematics, Physics and Informatics at Comenius University and from the Kempelen Institute of Intelligent Technologies.
“The development of artificial intelligence is in its infancy. When we look back at what we are doing today in 50 years, we will find it pretty primitive. In other words, most of the field remains to be discovered. That’s why it’s important to lay the foundation of trustworthy AI now,” says Fredrik Heintz, professor of artificial intelligence at Linköping University, and coordinator of the TAILOR project.
The roadmap presented by TAILOR is the first step towards standardisation. It gives decision-makers and research funding bodies insight into what is required to develop trustworthy AI, while making clear that many research problems must still be solved before that goal can be reached.
The researchers have defined three criteria for trustworthy AI:
- it must conform to laws and regulations,
- it must satisfy several ethical principles,
- and its implementation must be robust and safe.
These criteria pose major challenges, in particular the implementation of the ethical principles, which cannot simply be translated into technical requirements.
Many of the legal proposals drafted within the EU and its member states are written by legal specialists who lack expert knowledge of AI, and this can be a problem. Legislation and standards must be grounded in knowledge; this is where researchers can contribute, by describing the current forefront of research so that well-grounded decisions can be made. It is crucial that AI experts have the opportunity to influence questions of a legal nature.
“The concept of trustworthy AI is based on ethical principles and legal frameworks, but we have to keep in mind that trust is something that must be earned. There are many stakeholders participating in the research, development or deployment of AI systems. These people should be held accountable, according to their capabilities, for minimising the harms and maximising the benefits that they can bring to society through AI. They should be committed to developing AI that will bring good to society,” says Mária Bieliková.
People often regard AI as a technology issue, but what really matters is whether society benefits from it. If we are to obtain AI that can be trusted and that functions well in society, we must make sure that it is centred on people.
The project focuses on broad, comprehensive research questions and will attempt to establish standards that everyone who works with AI can adopt. This can only be achieved if basic AI research is made a priority.
The complete roadmap is available at: Strategic Research and Innovation Roadmap of trustworthy AI