Impressions from the Winter School on Fairness in AI: What does it mean to be fair?
Adrián Gavorník is a research intern at KInIT while pursuing a master’s degree in Science, Technology and Society Studies (STS) at the University of Vienna. He is a member of the Ethics and Human Values in Technology team, where he is building expertise in ethics-based assessment of AI technologies, with a special focus on fairness and bias. Earlier this year, Adrián applied to attend the 1st Greek ACM-W Chapter Winter School on Fairness in AI and was among the successful candidates. Read about his experience and the most interesting things he learned.
The 1st Greek ACM-W Chapter Winter School on Fairness in AI took place in February. Over two days of intensive lectures and workshops, around 70 students from 20 countries had the chance to network and discuss issues related to fairness in AI. In this short article, I would like to share my experience as an attendee. Coming from a social sciences background, I was a bit concerned that the event would be aimed at more experienced, predominantly computer science students. After being accepted, I was pleasantly surprised to find myself in the company of students with broad interdisciplinary backgrounds.
As a member of the Ethics and Human Values in Technology team here at KInIT, I had the opportunity to join this event to broaden my understanding of fairness in AI from multiple perspectives. This is important, as fairness is one of the core values we often bring up and discuss during ethical assessments that we carry out with both our research teams and industry partners.
The diversity of attendees at the Winter School highlights that understanding and working towards fairness in AI does not rest solely on the shoulders of computer scientists, but requires inclusion and interdisciplinarity. This was also evident in the majority of the lectures. Let me introduce some of the speakers and their contributions, and highlight some of my takeaways. These will mainly consist of interesting case studies, as well as various initiatives and tools that aim to address and measure fairness in AI.
One of the very first lectures was given by Toon Calders from the University of Antwerp. His opening lecture introduced the topic of fairness in machine learning. What I found crucial was that he contextualized the notion of fairness historically, reflecting on the (since debunked) assumption that machine learning would be free from bias and “gut feeling”. Today, it is obvious that this is unfortunately not the case.
Another major takeaway, which we also bring up often during our AI ethics assessment workshops, relates to the legal perspectives on fairness. Namely, values such as fairness often require us to think beyond mere legal compliance. In fact, as pointed out by Toon Calders and evident in the Assessment List for Trustworthy AI (ALTAI), it is crucial to reflect on how fairness is defined, approached, and quantified. The questions of how to quantify fairness and which definition of fairness to adopt were then applied to specific and rather well-known cases: racial bias in health care algorithms, Amazon’s recruitment tool that favored men for technical jobs, and the criminal re-offense prediction algorithm COMPAS.
“What does it mean to be fair?” was the core question of the remaining lectures, which offered several ways to tackle it and to quantify fairness.
First, the distinction between group and individual fairness should be established and reflected upon. Each comes with its own criteria and mitigation strategies, e.g. statistical parity for group fairness, which requires that all groups have equal access to the benefit (equal rates of positive decisions). At the same time, in some cases it might be hard to fully identify and explain existing differences between groups. In contrast with statistical parity, and incompatible with it, is “equalized odds”, which accepts that errors will occur but requires the error rates to be similar across all sensitive groups. A good example where this incompatibility becomes apparent is car insurance, where males are statistically at higher risk than females. In this case, we might want to assign expensive premiums to high-risk and cheap premiums to low-risk clients, but at the same time we want to be fair with regard to gender.
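To make the two criteria concrete, here is a minimal sketch of how they are typically measured. It was not part of the lecture materials: the insurance-style data, the group labels and the error rates are all invented for illustration.

```python
# Toy illustration of statistical parity vs. equalized odds on made-up
# car-insurance style data, where group "M" has a higher true base risk.
import numpy as np

rng = np.random.default_rng(0)

group = np.array(["M"] * 500 + ["F"] * 500)
risk = np.where(group == "M",
                rng.random(1000) < 0.4,   # 40% of M are truly high-risk
                rng.random(1000) < 0.2)   # 20% of F are truly high-risk

# A model that predicts "high premium" from risk, with some errors.
pred = np.where(rng.random(1000) < 0.85, risk, ~risk)

for g in ("M", "F"):
    sel = group == g
    positive_rate = pred[sel].mean()        # compared under statistical parity
    fpr = pred[sel & ~risk].mean()          # false positive rate
    fnr = (~pred[sel & risk]).mean()        # false negative rate
    print(f"{g}: positive rate={positive_rate:.2f}  FPR={fpr:.2f}  FNR={fnr:.2f}")

# Statistical parity asks the positive rates to be (nearly) equal across groups;
# equalized odds asks the FPR and FNR to be (nearly) equal. When the true base
# rates differ between groups, a non-trivial model cannot in general satisfy both.
```

Running the sketch shows the tension: the positive rates differ because the base rates differ, even though the error rates are roughly balanced.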
These issues were further investigated during a panel discussion on fairness in recommender systems. Some theoretical approaches to fairness in recommender systems were introduced, mainly through the lens of the stage at which they are applied: pre-processing, in-processing, and post-processing.
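To give a flavour of what one of these stages can look like in practice, here is a small sketch of a post-processing step for a recommender: re-ranking the output so that items from an under-represented provider group keep a minimum share of the top positions. This is my own simplified illustration, not something shown at the panel, and the items, scores and group labels are invented.

```python
# Hypothetical post-processing sketch: greedily re-rank recommendations so that
# a protected provider group keeps at least `min_share` of the top-k slots.
def fairness_aware_rerank(items, scores, groups, top_k=5, min_share=0.4):
    ranked = sorted(zip(items, scores, groups), key=lambda x: -x[1])
    result, protected_count = [], 0
    while ranked and len(result) < top_k:
        slots_left = top_k - len(result)
        needed = max(0, int(min_share * top_k) - protected_count)
        if needed >= slots_left:
            # Remaining slots must go to protected items to meet the quota.
            idx = next((i for i, (_, _, g) in enumerate(ranked)
                        if g == "protected"), 0)
        else:
            idx = 0  # otherwise take the highest-scoring remaining item
        item, _, g = ranked.pop(idx)
        result.append(item)
        protected_count += (g == "protected")
    return result

items  = ["a", "b", "c", "d", "e", "f"]
scores = [0.9, 0.8, 0.7, 0.6, 0.5, 0.4]
groups = ["majority", "majority", "majority", "protected", "majority", "protected"]
print(fairness_aware_rerank(items, scores, groups))  # ['a', 'b', 'c', 'd', 'f']
```

Pre-processing approaches would instead adjust the training data (e.g. reweighing interactions), and in-processing approaches would add a fairness term to the model’s objective.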
Additionally, some industry initiatives like Microsoft Responsible AI and relevant conferences such as the ACM Conference on Fairness, Accountability, and Transparency (ACM FAccT) were introduced.
To round off my experience from the ACM Winter School, I would like to highlight some of the existing initiatives and practical tools that were introduced, which helped me make sense of the theoretical concepts discussed earlier.
Firstly, the Aequitas audit toolkit was introduced to help developers and researchers make sense of the various possible definitions of fairness and of how these influence their models. Here, I found the “Fairness Tree” particularly useful and relevant for developers navigating the complex area of fairness in AI.
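To give a flavour of the kind of guidance the Fairness Tree provides, here is a loose paraphrase of part of its decision logic in code. The branching questions follow the spirit of the tree, but the function and its labels are my own illustration and are not part of the Aequitas toolkit itself.

```python
# A simplified, hypothetical paraphrase of the Aequitas "Fairness Tree" logic:
# which group fairness metric to prioritise depends on what the intervention does.
def suggest_fairness_metric(punitive: bool, intervention_on_everyone: bool) -> str:
    if intervention_on_everyone:
        # If everyone flagged by the model receives the intervention, the main
        # concern is equal representation across groups.
        return "demographic / statistical parity"
    if punitive:
        # Punitive interventions (extra scrutiny, denial of a benefit): the harm
        # comes from being wrongly flagged, so focus on false positives.
        return "false positive rate parity"
    # Assistive interventions (offering support, outreach): the harm comes from
    # being wrongly missed, so focus on false negatives.
    return "false negative rate parity"

print(suggest_fairness_metric(punitive=True, intervention_on_everyone=False))
print(suggest_fairness_metric(punitive=False, intervention_on_everyone=False))
```

The real tree is more fine-grained, but even this skeleton shows why there is no single “correct” fairness metric: the right choice depends on who bears the cost of the model’s errors.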
More than just a toolbox, the AI and Equality platform offers a great list of resources such as journal articles, books, and surveys, as well as a proposed workshop outline that aims to bring questions related to the ethics and fairness of AI to the forefront.
A human rights perspective on AI is applied in an interactive and practical workshop format that allows participants to analyze various datasets for biases, or to test how different definitions of fairness compare to each other.
I consider this experience to be very important and beneficial. Topics of fairness and biases are something that we often deal with in the Ethics and Human Values team here at KInIT, especially when conducting ethical assessments of AI systems with our research teams or industry partners.