The term bias can be understood in many ways, both positive and negative. There are many types of bias, such as historical, representation, or societal bias. Societal biases can be understood as stereotypes based on demographic factors or physical characteristics, relating to aspects such as race, ethnicity, gender, sexual orientation, socioeconomic status, or education.

Such stereotypes are often deemed unfair. Unfortunately, the answer to the question of whether AI systems can ever be completely fair is probably no. This is particularly true for algorithms that learn from data, since the data itself may already carry certain biases.
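The point that models inherit bias from their training data can be sketched with a toy example. The "corpus" and the counting model below are invented purely for illustration; real language models are far more complex, but the underlying mechanism is similar:

```python
from collections import Counter

# A hand-made, deliberately skewed toy corpus (an assumption for
# illustration, not real data): occupation words paired with the
# pronouns that appear near them.
corpus = [
    ("doctor", "he"), ("doctor", "he"), ("doctor", "she"),
    ("nurse", "she"), ("nurse", "she"), ("nurse", "he"),
]

# A "model" that simply learns co-occurrence frequencies will
# reproduce whatever imbalance the data contains.
counts = {}
for noun, pronoun in corpus:
    counts.setdefault(noun, Counter())[pronoun] += 1

def most_likely_pronoun(noun):
    # Predict the pronoun most often seen with the given noun.
    return counts[noun].most_common(1)[0][0]

print(most_likely_pronoun("doctor"))  # "he"  -- inherited from the skew
print(most_likely_pronoun("nurse"))   # "she" -- inherited from the skew
```

The model is not "prejudiced" in any human sense; it faithfully reflects the statistics of its input, which is exactly why biased data yields biased predictions.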

However, that does not mean that we should accept the situation as it is. We need to better understand the mechanisms behind AI bias and aim to minimise the potential risks before they cause unwanted harm to people.

Useful knowledge is not the only thing artificial intelligence adopts from us. Sometimes it also acquires our biases and prejudices. Unfortunately, we know almost nothing about biases in AI models working with the Slovak language. But we are about to change that.

This project is funded by the U.S. Embassy Bratislava from the Small Grants Program.