Societal Biases in Slovak AI: Gender Biases
AI systems, if not used properly, can exhibit discriminatory or intolerant behavior towards certain people. We know almost nothing about biases in AI models working with the Slovak language. KInIT secured a U.S. Embassy grant to study the phenomenon of bias in AI, raise awareness of this issue, and provide insights and recommendations to various AI stakeholders in Slovakia, while liaising with experts from the USA.
AI is a ubiquitous technology that has become an integral part of our daily lives: we rely on it when using smartphones, accessing the Web, searching for information, working with photos, shopping online, etc. Although AI often provides useful functionality that we would not be able to get otherwise, the issue of societal biases in AI has recently come to light.
We understand societal bias as a phenomenon in which AI behaves in a way that could be deemed discriminatory, intolerant, or otherwise problematic with respect to various demographic groups (e.g., gender, ethnicity, age). These AI biases are often harmful to people from marginalized communities.
AI systems are built using data collected from the real world and can therefore inherit our cultural and societal biases. Questions of algorithmic fairness and non-discrimination are central to recent discussions about the trustworthiness of AI systems in open and democratic societies. These issues are also subject to regulation, especially from the perspective of anti-discrimination law.
Gender bias can materialize in different types of biased behavior (a minimal sketch quantifying the first two follows the list):
- outcome disparity, when the AI makes different predictions for different groups of people (e.g., a model that predicts which candidate to hire might give lower scores to female candidates),
- error disparity, when the AI makes more errors when processing data from certain groups of people (e.g., a speech recognition system that makes more errors when transcribing female speech),
- generation of problematic content, when an AI used to generate text or images produces stereotypical, offensive, or otherwise harmful outputs (e.g., a text generator that associates certain professions with only one gender).
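To make the first two notions concrete, here is a minimal sketch in Python, with toy data and illustrative variable names (not the project's actual methodology or datasets), of how the two disparities could be measured for a hypothetical binary classifier:

```python
# A minimal sketch of how outcome disparity and error disparity could be
# quantified for a binary classifier. The data below is a toy example,
# not taken from the project.
import numpy as np

y_true = np.array([1, 0, 1, 1, 0, 1, 0, 0])                # ground-truth labels
y_pred = np.array([1, 0, 0, 1, 0, 0, 1, 0])                # model predictions
group = np.array(["F", "F", "F", "F", "M", "M", "M", "M"])  # demographic group

female, male = group == "F", group == "M"

# Outcome disparity: difference in positive-prediction rates between groups.
outcome_disparity = y_pred[female].mean() - y_pred[male].mean()

# Error disparity: difference in error rates between groups.
error_disparity = (
    (y_true[female] != y_pred[female]).mean()
    - (y_true[male] != y_pred[male]).mean()
)

print(f"outcome disparity: {outcome_disparity:+.2f}")
print(f"error disparity:   {error_disparity:+.2f}")
```

In this framing, a value close to zero for either quantity means the model treats the two groups similarly on that particular measure.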
We need to be aware of such biases in the existing systems that the general population uses, and attempt to fix these systems before they cause harm.
Our first goal in this project is to study gender bias in AI systems relevant to Slovakia and the Slovak language. We will gather data from human participants, methodically evaluate and audit selected AI systems, and publish a comprehensive report on our findings.
The data alone will be a valuable contribution of this project. Other researchers and AI practitioners could use it in the future to measure how biased their AI systems are, or even to attempt to “debias” them.
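As a purely hypothetical illustration of what such measurement could look like, the sketch below uses the Hugging Face transformers library to probe a masked language model for gendered occupation associations; the multilingual model and English placeholder templates stand in for the curated Slovak probes the project aims to collect:

```python
# Hypothetical illustration: probing a masked language model for gendered
# occupation associations. The model and templates are placeholders, not the
# project's actual data or methodology.
from transformers import pipeline

fill = pipeline("fill-mask", model="bert-base-multilingual-cased")

# Toy occupation templates; real probes would come from curated bias datasets.
templates = [
    "[MASK] works as a nurse.",
    "[MASK] works as an engineer.",
]

for template in templates:
    # Restrict predictions to the two pronouns and compare their scores.
    predictions = fill(template, targets=["He", "She"])
    scores = {p["token_str"]: round(p["score"], 4) for p in predictions}
    print(template, scores)
```

A large, systematic gap between the pronoun scores across many such templates would be one indication of the gendered associations this project sets out to document.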
Our second goal is to raise public awareness of this issue, since many people in Slovakia are not aware of how often they use AI and what the inherent risks of using it are. We will communicate with both the general public and expert communities. By raising awareness and disseminating knowledge to various experts, we will lay the foundations for further development of AI systems that do not exhibit problematic behavior. This can improve people's quality of life and strengthen democratic and civil society in general.
We believe that AI systems are socio-technical in their essence and cannot be understood as mere technologies without considering their embeddedness in society.
Štefan Oreško, Researcher
Kempelen Institute of Intelligent Technologies
Project team
- Matúš Pikuliak, Research Consultant (10/2022-01/2024)
- Juraj Podroužek, Lead and Researcher
- Marián Šimko, Lead and Researcher
- Štefan Oreško, Researcher (03/2022-10/2024)
- Matúš Mesarčík, Ethics and Law Specialist
- Adrián Gavorník, Research Intern