TAILOR Handbook: an encyclopedia of terms related to Trustworthy AI
The TAILOR Handbook of Trustworthy AI collects the most important scientific and technological terms associated with Trustworthy Artificial Intelligence. Its main objective is to give non-experts, researchers, and students an understanding of the issues surrounding the development of ethical and reliable AI systems.
The Handbook's creation was coordinated by Umberto Straccia and Francesca Pratesi of the Institute of Information Science and Technologies of the National Research Council of Italy. When interviewed, they noted that trustworthy AI is a term comprising various dimensions. Whereas some of these (such as security and privacy protection) have been studied for a long time, others (e.g., explainability and sustainability) have emerged only in recent years, and there is still a lack of common ground on both terms and definitions.
The idea of the TAILOR Handbook of Trustworthy AI came from the ambition to create a common language in the field of AI. The handbook has an encyclopedia-like structure, which was created by building on already-existing taxonomies and definitions and moving up the conceptual hierarchy of different ideas.
Both experts and newcomers will benefit from this work. With reading suggestions, links, summaries, and examples, it helps readers comprehend AI and deepen their understanding of it.
The concept of trustworthy AI encompasses many aspects, including explainability, safety, fairness, accountability, privacy, and sustainability.
The TAILOR Handbook of Trustworthy AI includes definitions related to:
- Explainable Artificial Intelligence: One of the ethical dimensions studied in the TAILOR project is explainable AI. This chapter presents the main elements that define explanations of AI systems, along with an overview of the various methods for providing multimodal explanations.
- Safety and robustness: As artificial intelligence becomes mainstream, concerns about its risks are growing. This chapter of the handbook covers the main elements that explain the safety and robustness of AI systems.
- Fairness, Equity and Justice by Design: In this chapter, the authors discuss the possible causes of discrimination as well as the explanation of bias and segregation. They also focus on what fair machine learning might be and what metrics can be used to measure (un)fairness.
- Accountability and Reproducibility: This chapter focuses on two interrelated key aspects of Trustworthy AI: accountability concerns liability and the prevention of misuse, while reproducibility is more concerned with the metrics, quality standards, and procedures that govern the development of AI learning methods.
- Respect for Privacy: This chapter provides a quick overview of the key privacy models, their attributes, and the primary privacy methods that can be used to enforce the corresponding privacy properties or to offer substantial guarantees of attack resistance.
- Sustainability: The final chapter addresses some of the newest challenges our society faces. It covers the key components of sustainability in AI systems; some apply to computer systems in general, while others are specific to the sustainability of AI models.
The handbook concludes with an alphabetical index of all entries, each accompanied by a short definition, a map locating the term within the handbook, and a link to further reading.
The whole Handbook is available in the form of a publicly accessible Wiki here: http://tailor.isti.cnr.it/handbookTAI/TAILOR.html
TAILOR is an EU Horizon project that aims to build the capacity to provide the scientific foundations for Trustworthy AI in Europe. It develops a network of research excellence centres focused on responsible AI research. Find out more about our involvement in the project here. You can also read about the TAILOR Roadmap of Trustworthy AI.