Artificial intelligence and interdisciplinarity

Artificial intelligence (AI) is a term we hear a lot these days. On the one hand, the capabilities of AI systems never cease to amaze us; on the other, concerns about where AI development will lead us and what it will do to society and our world keep growing. Some experts have already voiced concerns about the long-term consequences, which can only be partially predicted. The current issues and challenges concerning AI, however, demand our immediate attention and are far from mere speculation.

The issues associated with artificial intelligence are not always purely technical; they also have a broader societal dimension. Relying solely on technical knowledge to understand social or ethical concerns is not always sufficient: every technology is embedded in a specific sociocultural context that shapes and complicates it in various ways.

At the Kempelen Institute of Intelligent Technologies (KInIT), we acknowledge the increasing need to steer the research and development of AI systems in a socially responsible direction, with the contribution of experts from non-technical fields. In the course of our research and industrial collaborations, we have learned that an interdisciplinary approach makes it easier to properly grasp and address the broader implications of intelligent technologies and the inherently non-technical problems they entail.

An interdisciplinary approach means that people from different scientific fields join forces to address a specific problem. Creative collaboration in exploring and developing artificial intelligence commonly occurs between the technical and natural sciences. In biochemistry, AI assists in mapping the 3D structures of human proteins, significantly contributing to disease research and enabling more efficient drug design. In astronomy, AI is used to discover new objects in the universe. Similarly, in meteorology, AI is harnessed to improve the precision of weather forecasting.

However, intersections between artificial intelligence, the humanities, and the social sciences are less known and relatively uncommon. This is unfortunate, as such collaborations have been shown to create substantial benefits. AI plays a crucial role in assisting researchers in these domains as well. For instance, it helps with the analysis and reconstruction of historical texts or facilitates the processing of records from therapeutic sessions. Disciplines such as psychology, sociology, philosophy, ethics, and others can also provide valuable assistance when addressing issues related to artificial intelligence – particularly those concerning human values, societal functioning, and environmental protection. Thanks to these disciplines, we better understand that artificial intelligence technologies should serve humans, not vice versa.

One of the many examples of interdisciplinary collaboration, where humanities and social sciences actively influence technical research, is an ongoing project at KInIT called “Societal Biases in Slovak AI.” The project focuses on the issue of gender biases in AI systems working with the Slovak language. These systems are, for instance, part of language translators like Google Translate, DeepL, Microsoft Translator, or various predictive writing tools that we commonly encounter on our computers, phones, and tablets.

Why is it important to study gender biases in AI systems? Gender biases in language are statements that assign assumed characteristics, traits, suitable professions, life roles, and more to men and women based on erroneous generalizations. Such biases include statements like “women are hysterical and emotional” or “men are disciplined and rational.”

From our own experience, we know that such broad statements are generally inaccurate. However, when these statements appear frequently in language, they reinforce misleading notions about the supposed characteristics and expectations of men and women. But it doesn’t stop there. Since our beliefs often influence our decisions and actions, gender biases can lead to behaviors shaped by ingrained perceptions of men and women, regardless of their validity. This becomes problematic when these beliefs and actions do not align with reality. Gender biases help shape distorted views of the world, which do not make the lives of men and women better, happier, more truthful, or more authentic.

Typically, language-processing artificial intelligence systems “learn” to perform given tasks from large amounts of data. As we noted above, human-generated data naturally contain various biases. Consequently, these biases are transferred to the AI systems, where they can easily propagate and reinforce prevailing societal biases concerning gender. If we want AI systems to meet our expectations not only in terms of mathematical and statistical precision, but also in terms of socially acknowledged principles of fairness and justice, we cannot overlook the importance of ethics alongside mathematics and statistics.
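To make this mechanism concrete, here is a deliberately oversimplified Python sketch. The tiny “corpus,” the words, and the naive “model” are all invented for illustration; real language models are vastly more complex, but the underlying dynamic is similar: a system that merely learns statistics from skewed data reproduces, and even hardens, the skew in its outputs.

```python
from collections import Counter

# Toy "training corpus" with a built-in gender skew (invented data):
# "nurse" co-occurs mostly with "she", "engineer" mostly with "he".
corpus = [
    ("nurse", "she"), ("nurse", "she"), ("nurse", "she"),
    ("nurse", "he"),
    ("engineer", "he"), ("engineer", "he"), ("engineer", "he"),
    ("engineer", "she"),
]

# "Training": count pronoun co-occurrences per profession.
counts = {}
for profession, pronoun in corpus:
    counts.setdefault(profession, Counter())[pronoun] += 1

def predict_pronoun(profession):
    """A maximally naive model: always output the most frequent pronoun."""
    return counts[profession].most_common(1)[0][0]

# A 3:1 skew in the data becomes a 100% rule in the model's behavior.
print(predict_pronoun("nurse"))     # "she"
print(predict_pronoun("engineer"))  # "he"
```

Note how the model does not merely mirror the imbalance; by always picking the majority option, it amplifies it, which is one reason biases in training data deserve careful attention.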

In order to gain a deeper understanding of gender biases and stereotypes in AI, collaboration with experts specializing in various aspects of this topic has proven highly beneficial. At KInIT, for example, we organized an expert workshop that brought together professionals in natural language processing, ethics, and gender equality (with a focus on gender-sensitive language). In these interdisciplinary discussions, knowledge and experiences are shared in intriguing and unconventional combinations. The outcomes of such collaborations might include a range of interdisciplinary guidelines, providing methodological approaches on how to tackle a given problem or what to focus on, as well as offering inspiring insights.

Through our collaboration with gender equality experts, we have undertaken various initiatives. One notable accomplishment is the establishment of a foundational list of gender stereotypes, which we continuously expand in partnership with translators from the academic community and other data contributors. This list contains thousands of examples of problematic sentences, serving as an instrument to evaluate AI systems suspected of exhibiting stereotypical tendencies in translations. However, it’s not just about identifying stereotypes. A similar list can be used in the future to address gender biases in AI systems working with the Slovak language. It is worth noting that conventional bias mitigation methods developed for widely spoken languages may not adequately address the nuances specific to Slovak.
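As a rough illustration of how such an evaluation list might be used in principle, here is a hedged Python sketch. The translator below is a deliberately biased stub, and the sentences and Slovak word forms are invented examples, not items from the actual KInIT list; a real evaluation would call an actual machine translation system instead.

```python
# Hypothetical stand-in for a real translation system (invented behavior):
# it renders "doctor" as masculine and "teacher" as feminine regardless
# of context, mimicking a stereotypical translator.
def translate_to_slovak(sentence):
    table = {
        "The doctor said": "Lekár povedal",       # masculine form
        "The teacher said": "Učiteľka povedala",  # feminine form
    }
    return table[sentence]

# A tiny evaluation list in the spirit of the stereotype-sentence list:
# gender-neutral English sources plus the masculine/feminine Slovak nouns.
test_cases = [
    {"source": "The doctor said",
     "masculine": "Lekár", "feminine": "Lekárka"},
    {"source": "The teacher said",
     "masculine": "Učiteľ", "feminine": "Učiteľka"},
]

def audit(cases):
    """Record which grammatical gender the system picks per neutral input."""
    choices = {}
    for case in cases:
        output = translate_to_slovak(case["source"])
        if case["feminine"] in output:       # check feminine form first,
            choices[case["source"]] = "feminine"   # as it contains the stem
        elif case["masculine"] in output:
            choices[case["source"]] = "masculine"
    return choices

print(audit(test_cases))
# {'The doctor said': 'masculine', 'The teacher said': 'feminine'}
```

Aggregated over thousands of such sentences, this kind of audit can reveal whether a system systematically assigns professions a stereotypical gender when the source sentence gives no gender cue at all.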

You may perceive these efforts as relatively modest compared to pressing issues such as the impact of artificial intelligence on the job market, the environment, human creativity, and the functioning of democratic systems. As a society, we need to contemplate the various mid-term and long-term consequences of developing and using intelligent technologies. That is why we also dedicate time and energy to researching the consequences of generative AI, specifically the production and spread of disinformation, a domain where interdisciplinarity is of immense significance. Delving deeper into this subject, however, deserves a separate article. Nevertheless, one rule that applies to AI systems is worth stating here: our future with them will be shaped by the measures we take today.