Martin Tamajka for Innovatrics Trust Report

“If we are to trust AI in courtrooms, it needs to justify its decisions.”

Martin Tamajka

Artificial intelligence (AI) is reshaping various industries, bringing transformative changes to the way we work. Yet, concerns linger when applying AI in critical domains like medicine and law, where lives and futures are at stake. 

These are some of the key points our partner Innovatrics discussed with Martin Tamajka in an interview for their online magazine, Trust Report.

The concept of explainable AI and how it works in practice is also the main topic of the Trust Report interview. Read the article and discover:

  • What are the concerns that emerge from the application of AI in critical sectors such as medicine and law?
  • What are Martin’s insights regarding the significance of AI not just delivering answers but also providing explanations for its decision-making?
  • How can we ensure algorithm explanations are comprehensible to individuals with varying levels of education, age, and familiarity?
  • What are the core complexities in defining metrics to measure the quality of AI explanations, and how do these metrics influence decision-making?
  • How does KInIT’s AutoXAI approach address the challenge of selecting the most appropriate explainability algorithm for a given problem?

Together with Innovatrics, we also organize the Better AI Meetup. If you want to learn more about Martin’s work, he gave a talk at one of the meetups, where he shared his research on identifying the explainability algorithm that provides the best explanations for a given model, task, and data. You can watch the recording here.
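To illustrate the general idea behind automated explainer selection, here is a minimal sketch. Everything in it is hypothetical (the explainer names, the fidelity metric, and the selection logic are illustrative assumptions, not KInIT's actual AutoXAI implementation): each candidate explainability algorithm is scored by how faithfully its explanations track the model's predictions, and the highest-scoring candidate is selected.

```python
# Illustrative sketch only -- NOT KInIT's AutoXAI implementation.
# Idea: score each candidate explainer with a quality metric (here, a toy
# "fidelity" measure) and pick the best one for the model/task/data at hand.

def fidelity_score(predictions, surrogate_predictions):
    """Toy metric: fraction of model predictions the explainer's
    surrogate reproduces. Real XAI quality metrics are more nuanced."""
    agree = sum(1 for p, s in zip(predictions, surrogate_predictions) if p == s)
    return agree / len(predictions)

def select_explainer(candidates, predictions):
    """Return the name of the candidate explainer with the highest
    fidelity score, together with all scores."""
    scores = {
        name: fidelity_score(predictions, surrogate)
        for name, surrogate in candidates.items()
    }
    best = max(scores, key=scores.get)
    return best, scores

# Toy data: the model's predictions and each (hypothetical) explainer's
# surrogate predictions on the same inputs.
predictions = [1, 0, 1, 1, 0]
candidates = {
    "lime_like": [1, 0, 1, 0, 0],  # agrees on 4/5
    "shap_like": [1, 0, 1, 1, 0],  # agrees on 5/5
    "gradients": [0, 0, 1, 1, 1],  # agrees on 3/5
}

best, scores = select_explainer(candidates, predictions)
print(best)   # shap_like
```

In practice, the hard part (and a theme of the interview) is defining the quality metrics themselves: different metrics can rank the same explainers differently, which directly shapes which explanation a user ends up seeing.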