Meet Juraj Podroužek

The Gamechangers series presents the stories of the people who stood at the very beginning of the Kempelen Institute of Intelligent Technologies. Through their vision and determination, they transformed a bold idea into reality – creating a new place for excellent science connected with innovation and talent development in Slovakia. In the interviews, you will learn about the challenges they had to overcome, the values that unite them, and what motivates them to keep pushing the boundaries of research and innovation.

The third episode introduces Juraj Podroužek, who founded and leads the research team for AI ethics and regulation at the Kempelen Institute. Juraj is the co-founder of the E-tika initiative, a member of the Standing Committee for AI Ethics and Regulation, and the author of several projects on the social impact of technology. He is the guarantor of the ethics theme in AI research and development, which today is one of the key research areas at the Kempelen Institute.

We actually started thinking about ethics right from the very beginning. Or rather, we knew that even in the early stages, our research activities and knowledge transfer shouldn’t be directed only toward technical solutions; they should also reflect on broader ethical and societal questions. These questions concerned either the research we do at KInIT or the direct application of research topics in practice, for example, when collaborating with industry partners. It was something we wanted to have in place from the start, and we were considering how best to make it happen.

We realised fairly quickly that the best approach would be to set up a dedicated research team made up of people whose backgrounds were not primarily technical, but rooted in the social sciences and humanities. That was the beginning. The idea was to support our own researchers as well as companies – helping them translate the very abstract questions of ethics and regulation into everyday practice: into what our research teams actually do, and directly into the processes where AI systems are developed.

Our goal, then, was not to be philosophers sitting in the corner criticising everyone else, but to become an integral part of research and development teams working on AI.

It was certainly seen as an important part from the very beginning. At first, however, the topic of ethics was carried largely by me together with Majka Bieliková, so at that stage we were mainly looking for the right people who could gradually take on the topic of regulation as well. In the beginning, then, it was more about ethics and about thinking how our mission of bringing ethics into AI design could eventually be enriched by a regulatory framework. This became possible thanks to people like Matúš Mesarčík, who joined us after some time and today serves as KInIT’s lead on regulation and law.

I remember a few such challenges. One of them is the need to find a common language – one that allows philosophers or lawyers to understand computer scientists, and vice versa.

On the one hand, this means that from our side – the ethics team – we need to grasp at least the basic features of AI solutions, the language that is used, and what matters to researchers or companies. On the other hand, we must also be able to explain what the ethical dimension brings to them and how it affects their work. So first we have to clarify expectations. And this is something we still encounter to this day.

We put a lot of energy into those initial meetings with teams. We talk about how the ethics assessment process will unfold and what the outcome will be. We explain that it’s not about creating codes of conduct, but rather about having in-depth discussions on various ethical, social, and legal risks. In the end, this may result in something like a risk register – similar to how a company develops registers of security or business risks. This way, a list of ethical and social risks can be created, one that teams can continue to work with.

These are, in fact, the biggest challenges – finding a common language and a platform where we can meet and share our know-how in a way that teams will accept and be able to work with. We want it to be more than a conversation filled with terms they don’t understand or interpret differently. And that’s really our greatest challenge: making sure it’s not one-sided. We don’t want to simply talk while they listen and later do something with it – we want a living dialogue between both sides.

We understood that we needed to do this in a form that would be very interactive. At least initially, we wanted to avoid having checklists or lists of questions that partners would fill out, only for us to tell them what they were doing right or wrong. Instead, we needed to present topics and explain why they are important. Then, using concrete examples, we show how these issues might relate to their system or research, and we discuss them together.

The flagship of what we do is ethics assessments. A major part of these involves facilitated workshops. These go very deep and aim to ensure that all participants – from the partner’s side or from our research team – actively engage. The goal is not for them to just sit and listen, but to build arguments themselves, give us feedback, and share what they perceive as most important.

We then get to see their learning process, because behind these workshops there is something that interests us a lot. We call it moral sensitivity: the ability to recognise that they are dealing with a moral dilemma or problem, and to understand why it matters to them. It’s also the ability to realise that it affects not only themselves, but others as well – whether users or other types of stakeholders. Equally important is that they can ask themselves, “OK, what do we do about this now?” – taking a step in the right direction, not just identifying a risk, but also mitigating it.

And this is actually something I consider perhaps the most essential, because I see it as the most enduring. When we approach people in a way that allows them to take what we share as their own, it becomes an integral part of how they think about their work. As a result, they can reflect on these issues and notice things they hadn’t seen before – even later, when we are no longer present.

Definitely. Even the methodology we gradually developed for ethics assessments has evolved over time. We realised that some things work well, while others don’t quite hit the mark. For example, we now put a lot of energy into that initial explanation – why we do it and why it matters – and into setting expectations about what the company or team will receive at the end.

Another aspect that has changed significantly is the balance between the theoretical and practical parts of the workshops. At first, the workshops had a relatively strong theoretical component. We worked a lot with philosophical concepts such as fairness, justice, transparency, human dignity, and so on. But we found that different types of teams respond differently. With some, we can dive deeper into theory, while others can’t fully process it. With them, we need to focus on very concrete discussions, explaining everything through real situations.

It was a challenge for us to learn how to adapt to different partners and teams, to sense how deeply we need to go into theory, and when to move on to practical examples or exercises. As a result, we gradually developed various activities to actively engage participants – interactive boards, brainstorming exercises, and so on. Essentially, we had to find formats that would make the workshops as interactive as possible.

The first workshops with partners were a major experience – that was the first time we saw it all in practice. There’s a big difference between thinking about something or lecturing on it and actually being in direct interaction with a team.

Another powerful moment was when we started receiving feedback. For example, when a company tells us it was meaningful and, based on our workshop, they are now changing a process or the way they approach certain issues. Or they begin paying attention to something we had highlighted.

I remember, for example, one case where a partner told us that our workshops were perhaps the most important event for them that year. They had the chance to talk among themselves, better understand their product, and clarify many things. For these people, the workshop often becomes exactly that kind of opportunity, because we make sure that participants from different backgrounds are involved on both sides.

It’s not just the developers from the core AI team who attend the workshops. There are also people responsible for sales or user interactions, and sometimes even company leadership. Different roles get the chance to compare perspectives and discuss what matters to whom. These were very powerful moments, when we actually saw the “aha moments” – when participants suddenly realised that something really matters, something they hadn’t considered before, and it significantly changed the way they view what they want to do.

At KInIT, we started building not only on the idea of connecting excellent research with practice, but also on the idea of multidisciplinarity. I believe that connecting different fields and perspectives – both technical and non-technical – is the future of KInIT. It should become a modern institute that excels equally in technical questions and in the social sciences and humanities as they relate to artificial intelligence.

So, for me, this is the future of KInIT: being able to look at questions about intelligent technologies comprehensively and offer expertise from multiple areas and perspectives. A melting pot where different ideas can bubble up and influence each other. That’s a strong and meaningful ambition that I carry forward.