{"id":21585,"date":"2022-09-23T12:22:58","date_gmt":"2022-09-23T10:22:58","guid":{"rendered":"https:\/\/kinit.sk\/project\/explainable-ai-theory-and-a-method-for-finding-good-explanations-for-not-only-nlp\/"},"modified":"2023-09-27T16:02:04","modified_gmt":"2023-09-27T14:02:04","slug":"explainable-ai-theory-and-a-method-for-finding-good-explanations-for-not-only-nlp","status":"publish","type":"project","link":"https:\/\/kinit.sk\/sk\/projekt\/explainable-ai-theory-and-a-method-for-finding-good-explanations-for-not-only-nlp\/","title":{"rendered":"Explainable AI: theory and a method for finding good explanations for (not only) NLP"},"content":{"rendered":"<div id=\"\" class=\"element core-heading\">\n<h5 class=\"wp-block-heading\">In two phases of this project, we addressed the problem of finding a good post-hoc explainability algorithm for the task at hand. First, we researched the theory behind what\u2019s a good explanation. Then, we proposed the concept of AutoXAI for finding a well performing explanation algorithm for a combination of model, task and data. We conducted a series of experiments on three different tasks with a particular explainability algorithm &#8211; Layer-wise relevance propagation (LRP).<\/h5>\n<\/div>\n\n<div id=\"\" class=\"element core-paragraph\">\n<p>From the perspective of machine learning (ML), we live in happy times. For many tasks we know not one, but many different ML algorithms or models we can select from and achieve at least a decent performance. This wealth of models and their variations introduces a challenge \u2013 we need to find such configuration that fits our task and data.&nbsp;<\/p>\n<\/div>\n\n<div id=\"\" class=\"element core-paragraph\">\n<p>To find the right model, we need to define the criteria that measure how well a particular model and its parameters and hyperparameters fit the problem at hand. 
Then, we usually do some kind of hyperparameter optimization or Automated Machine Learning (AutoML) [1].<\/p>\n<\/div>\n\n<div id=\"\" class=\"element core-paragraph\">\n<p>In recent years, the number of post-hoc XAI methods has become as overwhelming as the number of machine learning methods themselves. To find a post-hoc explainability algorithm that provides good explanations for the task at hand, we can borrow concepts from AutoML. As in AutoML, we have a space of available algorithms and their configurations, and we want to find the one that provides good explanations. <strong>The challenging part of AutoXAI is how to compare different explainability algorithms<\/strong>. In other words \u2013 what makes a good explanation?<\/p>\n<\/div>\n\n<div id=\"\" class=\"element core-paragraph\">\n<p>According to multiple authors, a good explanation should strike a balance between two properties \u2013 <strong>it should faithfully describe a model&#8217;s behavior and be understandable to humans<\/strong>.<\/p>\n<\/div>\n\n<div id=\"\" class=\"element core-image\">\n<figure class=\"wp-block-image is-resized\"><img decoding=\"async\" data-src=\"https:\/\/lh5.googleusercontent.com\/znATobmFh_1xGo0xVM9tp9TNT3aC1gvCwPhHPyGcgQYdEDMuKBEJvTsKWUowtFxT_az76mmoeRzSonCGOKYx2e6N-cRJj9er8AEyLusyfVTdEcXrgFFzP9Mgpm7UGXSGLpAIKZtit_VMG6VrKdsi13vRR3B7mJ6LmFViWsvn5bbZQ8VgiqNxeQY4XA\" alt=\"\" style=\"--smush-placeholder-width: 800px; --smush-placeholder-aspect-ratio: 800\/432;width:800px;height:432px\" width=\"800\" height=\"432\" src=\"data:image\/svg+xml;base64,PHN2ZyB3aWR0aD0iMSIgaGVpZ2h0PSIxIiB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciPjwvc3ZnPg==\" class=\"lazyload\" \/><figcaption class=\"wp-element-caption\"><strong>Figure 1:<\/strong> <em>A good explanation should strike a balance between understandability and fidelity. This picture depicts two explanations in the form of a heatmap generated for the same prediction &#8211; the model classified the image as \u201cparrot\u201d.
In the top picture, the explanation highlights a limited number of well-bounded regions. These regions are, according to the explanation, responsible for the model\u2019s prediction. The explanation looks pleasing, but the prediction was, in fact, significantly influenced by one more region. On the other hand, the explanation below might better describe the behavior of the model, but it is overwhelming.<\/em><\/figcaption><\/figure>\n<\/div>\n\n<div id=\"\" class=\"element core-paragraph\">\n<p>We proposed a definition of AutoXAI as an optimization problem. <strong>Through optimization, we want to find an explainability algorithm that maximizes two sets of criteria &#8211; understandability and fidelity<\/strong>. These criteria measure the quality of explanations with respect to the underlying model and data.&nbsp;<\/p>\n<\/div>\n\n<div id=\"\" class=\"element core-paragraph\">\n<p>The first set of criteria, understandability, measures how similar the explanations generated by the explainability algorithm for the model\u2019s predictions are to explanations that the user considers understandable.&nbsp;<\/p>\n<\/div>\n\n<div id=\"\" class=\"element core-paragraph\">\n<p>The second set of criteria, fidelity, ensures that the explanations truly reflect the decision-making process of the model.<\/p>\n<\/div>\n\n<div id=\"\" class=\"element core-image\">\n<figure class=\"wp-block-image is-resized\"><img decoding=\"async\" data-src=\"https:\/\/lh4.googleusercontent.com\/pUcOFlSKDLsehFw4HrIbhdupgzAkDibybVPaI8vXH7lZTsuZd6GcLnB5TOUT3DTq8MzJjtXJyUXl4NJ9Ro9vz-kJertajjrHLuf6OeAigi6wOKjogSvXXFIQF12l-9bYYC3Z1a87OPA-PcBVyAby5cVS_86iMWVJAH0mViyZv02gprPjipxMOMloIA\" alt=\"\" style=\"--smush-placeholder-width: 803px; --smush-placeholder-aspect-ratio: 803\/360;width:803px;height:360px\" width=\"803\" height=\"360\" src=\"data:image\/svg+xml;base64,PHN2ZyB3aWR0aD0iMSIgaGVpZ2h0PSIxIiB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciPjwvc3ZnPg==\" class=\"lazyload\" \/><figcaption 
class=\"wp-element-caption\"><strong>Figure 2:<\/strong> <em>AutoXAI as an optimization problem. We want to find a configuration of an explanation algorithm that provides both understandable and faithful explanations for the problem at hand.<\/em><\/figcaption><\/figure>\n<\/div>\n\n<div id=\"\" class=\"element core-paragraph\">\n<p><strong>We conducted three experiments on three different classification tasks<\/strong>. In two tasks, we classified images from magnetic resonance as either healthy or not. In the last task, we classified sentiment of short textual reviews. For these we wanted to find a configuration of a particular explainability algorithm \u2013 Layerwise relevance propagation. We proposed three understandability measures that were maximized by using a modified Particle Swarm Optimization in order to obtain understandable explanations.&nbsp;<\/p>\n<\/div>\n\n<div id=\"\" class=\"element core-paragraph\">\n<p>The results of the proposed method and of the project were presented at the <a href=\"https:\/\/sites.google.com\/view\/xai2022\" target=\"_blank\" rel=\"noreferrer noopener\">Workshop on Explainable Artificial Intelligence<\/a> at the <a href=\"https:\/\/ijcai-22.org\/\" target=\"_blank\" rel=\"noreferrer noopener\">International joint conference on artificial intelligence (IJCAI) 2022<\/a> in Vienna and at a <a href=\"https:\/\/drive.google.com\/file\/d\/10_kiK7yrF_jO_AYgYMDiCIHroT50cSTt\/view?usp=sharing\" target=\"_blank\" rel=\"noreferrer noopener\">public seminar organized by KInIT<\/a>. 
Proceedings from the conference workshop can be found <a href=\"https:\/\/drive.google.com\/file\/d\/1TULeerUPQz2bIbKiyPMPtCm02G6lnr7-\/view?usp=sharing\" target=\"_blank\" rel=\"noreferrer noopener\">here<\/a>.<\/p>\n<\/div>\n\n<div id=\"\" class=\"element core-heading\">\n<h3 class=\"wp-block-heading\">AutoXAI: Automated Explainable Artificial Intelligence<\/h3>\n<\/div>\n\n<div id=\"\" class=\"element core-paragraph\">\n<p>Complex models, and especially deep neural networks, have introduced unprecedented performance improvements in many tasks.<\/p>\n<\/div>\n\n<div id=\"\" class=\"element core-paragraph\">\n<p>Yet, due to their complexity, these models and their decisions tend to be difficult to understand and are perceived as black boxes. Increasing the transparency of black-box models is addressed by <a href=\"https:\/\/kinit.sk\/sk\/research\/explainable-artificial-intelligence\/\">Explainable Artificial Intelligence (XAI)<\/a>. While optimizing a model to solve a task requires non-trivial effort, finding the right human-understandable explanation of a prediction adds another level of complexity to the whole process.<\/p>\n<\/div>\n\n<div id=\"\" class=\"element core-paragraph\">\n<p>In recent years, the landscape of post-hoc methods in XAI has expanded rapidly, paralleling the growth of diverse machine learning techniques. To select a post-hoc explainability algorithm that yields meaningful explanations for a given task, benefiting the end user, we can draw inspiration from the principles of automated machine learning (AutoML) [1].<\/p>\n<\/div>\n\n<div id=\"\" class=\"element core-paragraph\">\n<p>Arguably, it is not possible to cover all requirements for XAI with one explainability algorithm &#8211; different audiences, models, tasks and data require different explanations. The explanations must also both faithfully explain the predictions and be understandable to the audience.
In the literature, these two properties are often derived from two components of explainability and explanations &#8211; fidelity (or faithfulness) and understandability (or interpretability).<\/p>\n<\/div>\n\n<div id=\"\" class=\"element core-paragraph\">\n<p>The overall goal of the project is to design a way to find explanations of artificial intelligence decisions that are faithful and, at the same time, beneficial to humans. We will build on our previous research, part of which was published in the <a href=\"https:\/\/sites.google.com\/view\/xai2022\" target=\"_blank\" rel=\"noreferrer noopener\">IJCAI-ECAI 2022 workshop dedicated to XAI<\/a>.<\/p>\n<\/div>\n\n<div id=\"\" class=\"element core-paragraph\">\n<p>In the first step, we will focus on researching current ways of measuring the quality of explanations. This is still an open research problem; one challenge, among others, is that different types of explanations of AI models\u2019 decisions must be measured in different ways. In addition, it will be necessary to take into account not only how accurately the given metrics describe the quality of the explanations in terms of fidelity (how well they describe the behavior of the model itself), but also how and whether they reflect the usefulness of the explanations for humans. In this project, we will focus on relevance attribution methods.&nbsp;<\/p>\n<\/div>\n\n<div id=\"\" class=\"element core-paragraph\">\n<p>In the second step, we propose a method that finds, among a number of different explainability methods and their various settings, the one best suited to the task.
In doing so, we will use the metrics identified in the previous step.<\/p>\n<\/div>\n\n<div id=\"\" class=\"element core-paragraph\">\n<p>To verify the method, we propose a series of experiments in which we will compare the solutions found by the proposed method with how the quality of explanations is judged by their intended recipients &#8211; people.<\/p>\n<\/div>\n\n<div id=\"\" class=\"element core-paragraph\">\n<p>The proposed verification experiments are based on a claim-matching task. We will verify to what extent the explanations provided by the explainability algorithms help end users assess whether a certain disinformation claim was correctly identified in a social media post by a language model. The hypothesis is that the explanations generated by an explainability algorithm optimized through our proposed method should be more helpful to humans in this task than explanations generated by algorithms that achieved lower explanation quality.<\/p>\n<\/div>\n\n<div id=\"\" class=\"element core-heading\">\n<h3 class=\"wp-block-heading\">Popularization of Explainable AI<\/h3>\n<\/div>\n\n<div id=\"\" class=\"element core-paragraph\">\n<p>Judging by the number of papers related to Explainable AI published in recent years, it is clear that this topic has drawn considerable attention in the scientific community. However, popularization and promotion of Explainable AI among industry and the general public are equally important.<\/p>\n<\/div>\n\n<div id=\"\" class=\"element core-paragraph\">\n<p>Based on knowledge acquired through our own research and the study of relevant scientific literature, we prepared a <a href=\"https:\/\/kinit.sk\/sk\/research\/explainable-artificial-intelligence\/\">series of five popularization articles<\/a>.
We covered various topics, from a <a href=\"https:\/\/kinit.sk\/sk\/how-does-artificial-intelligence-think\/\">general description of Explainable AI<\/a> to <a href=\"https:\/\/kinit.sk\/sk\/ako-merat-kvalitu-vysvetlenia-predpovede-umelej-inteligencie\/\">measuring the quality of explanations<\/a> obtained using different methods.&nbsp;<\/p>\n<\/div>\n\n<div id=\"\" class=\"element core-paragraph\">\n<p>Selected aspects of Explainable AI were presented in a technical talk at the <a href=\"https:\/\/betteraimeetup.com\/event\/safer-future-of-ai-cybersecurity-equilibrium-and-explainability-of-deep-learning-models\/\" target=\"_blank\" rel=\"noreferrer noopener\">Better AI Meetup on November 9th, 2022<\/a>. You can watch the recording here:<\/p>\n<\/div>\n\n<div id=\"\" class=\"element core-embed wp-embed-aspect-16-9 wp-has-aspect-ratio\">\n<figure class=\"wp-block-embed is-type-video is-provider-youtube wp-block-embed-youtube wp-embed-aspect-16-9 wp-has-aspect-ratio\"><div class=\"wp-block-embed__wrapper\">\n<iframe title=\"Vol.
5: Safer Future of AI\" width=\"500\" height=\"281\" data-src=\"https:\/\/www.youtube.com\/embed\/-CVE1YUCAbE?feature=oembed\" frameborder=\"0\" allow=\"accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share\" referrerpolicy=\"strict-origin-when-cross-origin\" allowfullscreen src=\"data:image\/svg+xml;base64,PHN2ZyB3aWR0aD0iMSIgaGVpZ2h0PSIxIiB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciPjwvc3ZnPg==\" class=\"lazyload\" data-load-mode=\"1\"><\/iframe>\n<\/div><\/figure>\n<\/div>\n\n<div id=\"\" class=\"element acf-members\">\n<section class=\"members \" >\n\t<div class=\"wrapper-out\">\n\t\t<div class=\"wrapper-in\">\n\t\t\t<div class=\"in cf\">\n\t\t\t\t<div class=\"element-inner\">\n\t\t\t\t\t<div 
class=\"headline\"><h2>Project team<\/h2><\/div>\n<div class=\"members-wrap members-grid members-grid-4\">\n<div class=\"member\"><div class=\"inner\"><div class=\"picture\"><a href=\"https:\/\/kinit.sk\/sk\/clen\/martin-tamajka\/\"><img decoding=\"async\" src=\"https:\/\/kinit.sk\/wp-content\/uploads\/2021\/10\/tamajka-1_51330184409_o-255x341.jpg\" alt=\"Martin Tamajka\" width=\"255\" height=\"341\"><\/a><\/div><h5>Martin Tamajka<\/h5><div class=\"position\">Technology Lead<\/div><\/div><\/div>\n<div class=\"member\"><div class=\"inner\"><div class=\"picture\"><a href=\"https:\/\/kinit.sk\/sk\/clen\/marcel-vesely\/\"><img decoding=\"async\" src=\"https:\/\/kinit.sk\/wp-content\/uploads\/2021\/10\/vesely-web-255x341.png\" alt=\"Marcel Vesel\u00fd\" width=\"255\" height=\"341\"><\/a><\/div><h5>Marcel Vesel\u00fd<\/h5><div class=\"position\">Research Engineer<\/div><\/div><\/div>\n<div class=\"member\"><div class=\"inner\"><div class=\"picture\"><a href=\"https:\/\/kinit.sk\/sk\/clen\/marian-simko\/\"><img decoding=\"async\" src=\"https:\/\/kinit.sk\/wp-content\/uploads\/2024\/11\/simko-marian-2_51330185179_o-3-255x341.jpg\" alt=\"Mari\u00e1n \u0160imko\" width=\"255\" height=\"341\"><\/a><\/div><h5>Mari\u00e1n \u0160imko<\/h5><div class=\"position\">Lead and Researcher<\/div><\/div><\/div>\n<div class=\"member\"><div class=\"inner\"><div class=\"picture\"><a href=\"https:\/\/kinit.sk\/sk\/clen\/ivana-benova\/\"><img decoding=\"async\" src=\"https:\/\/kinit.sk\/wp-content\/uploads\/2021\/10\/benova-web-255x341.png\" alt=\"Ivana Be\u0148ov\u00e1\" width=\"255\" height=\"341\"><\/a><\/div><h5>Ivana Be\u0148ov\u00e1<\/h5><div class=\"position\">AI Specialist<\/div><\/div><\/div>\n<\/div><\/div><\/div><\/div><\/div>\n<\/section>\n<\/div>\n\n<div id=\"\" class=\"element core-columns\">\n<div class=\"wp-block-columns is-layout-flex wp-container-core-columns-is-layout-9d6595d7 wp-block-columns-is-layout-flex\"><div id=\"\" class=\"element core-column\">\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\"><div id=\"\" class=\"element core-paragraph\">\n<p><em>The PricewaterhouseCoopers Endowment Fund at the Pontis Foundation supported this project.<\/em><\/p>\n<\/div><\/div>\n<\/div>\n\n<div id=\"\" class=\"element core-column\">\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\"><div id=\"\" class=\"element core-image\">\n<figure class=\"wp-block-image size-full is-resized\"><img decoding=\"async\" src=\"https:\/\/kinit.sk\/wp-content\/uploads\/2022\/04\/Logo_Pontis_EN_BW1000px.png\" alt=\"The Pontis Foundation logo\" class=\"wp-image-16223\" width=\"100\" height=\"100\"
\/><\/figure>\n<\/div><\/div>\n<\/div><\/div>\n<\/div>\n\n<div id=\"\" class=\"element core-heading\">\n<h4 class=\"wp-block-heading\">References<\/h4>\n<\/div>\n\n<div id=\"\" class=\"element core-paragraph\">\n<p>[1] Frank Hutter, Lars Kotthoff, and Joaquin Vanschoren. Automated Machine Learning: Methods, Systems, Challenges. Springer Nature, 2019.<\/p>\n<\/div>","protected":false},"featured_media":21572,"template":"","meta":{"_acf_changed":false,"footnotes":""},"categories":[349,407],"class_list":["post-21585","project","type-project","status-publish","has-post-thumbnail","hentry","category-2022-sk","category-scientific-project-sk"],"acf":[]}