{"id":33710,"date":"2024-09-05T16:14:33","date_gmt":"2024-09-05T14:14:33","guid":{"rendered":"https:\/\/kinit.sk\/survey-paper-accepted-to-acm-computing-surveys-addressing-sensitivity-of-language-models-to-effects-of-randomness\/"},"modified":"2025-06-27T11:11:46","modified_gmt":"2025-06-27T09:11:46","slug":"survey-paper-accepted-to-acm-computing-surveys-addressing-sensitivity-of-language-models-to-effects-of-randomness","status":"publish","type":"post","link":"https:\/\/kinit.sk\/sk\/survey-paper-accepted-to-acm-computing-surveys-addressing-sensitivity-of-language-models-to-effects-of-randomness\/","title":{"rendered":"Survey paper accepted to ACM Computing Surveys &#8211; Addressing sensitivity of language models to effects of randomness"},"content":{"rendered":"<div id=\"\" class=\"element core-paragraph\">\n<p>Did you ever try to replicate the results of a specific machine learning study, but often found different performance numbers and findings to the ones observed in the official study? Or have you ever tried to determine which model can be considered state-of-the-art for a specific task, but found that many studies report contradictory findings in this regard? Or did you ever try the newest method for which everyone claims it leads to a significantly better performance, expecting it could help you progress on your research problem, but only found that it actually underperforms a simple baseline?<\/p>\n<\/div>\n\n<div id=\"\" class=\"element core-paragraph\">\n<p>A common culprit that significantly contributes to all of these problems is uncontrolled randomness in the training and evaluation process. Especially the approaches for dealing with limited labelled data (but to a certain extent also neural networks in general), such as in-context learning, fine-tuning, parameter-efficient fine-tuning or meta-learning, were identified to be sensitive to the effects of uncontrolled randomness. 
Take in-context learning, for example: something as simple as changing the set of in-context examples, or the order in which they are presented to the model, can determine whether we get state-of-the-art predictions or random guessing. Similarly, repeating the fine-tuning process multiple times can lead to large deviations in performance, to the point where smaller models can sometimes outperform their larger counterparts.<\/p>\n<\/div>\n\n<div id=\"\" class=\"element core-paragraph\">\n<p>If not properly addressed, this uncontrolled randomness leads to negative consequences. In comparisons and benchmarks, changing only the random seed or using a different prompt format may produce completely different model rankings. It can also prevent an objective comparison between models, create an illusory perception of research progress (due to unintentional cherry-picking), or make the research unreproducible. Yet, even though the effects of randomness can have a significant impact, the attention devoted to addressing them remains limited, especially when dealing with a limited number of labels.<\/p>\n<\/div>\n\n<div id=\"\" class=\"element core-paragraph\">\n<p>In our newest paper, entitled <a href=\"https:\/\/kinit.sk\/publication\/survey-on-stability-of-learning-with-limited-labelled-data\/\" target=\"_blank\" rel=\"noreferrer noopener\"><em>A Survey on Stability of Learning with Limited Labelled Data and its Sensitivity to the Effects of Randomness<\/em><\/a>, which was accepted to the prestigious <a href=\"https:\/\/dl.acm.org\/journal\/csur\" target=\"_blank\" rel=\"noreferrer noopener\">ACM Computing Surveys<\/a> journal, we provide a comprehensive survey of 415 papers that 
address the effects of randomness. First, we provide an overview of the possible sources of randomness in training (so-called randomness factors), such as initialisation, data choice, or data order, that may lower the stability of the learned models. Second, we cover the tasks performed when addressing the effects of randomness \u2013 investigating the impact of individual factors across different learning approaches; determining the underlying origin of the randomness, such as the problem of underspecification; and, finally, mitigating the effects, reducing their impact and increasing stability without reducing the overall performance of the models.<\/p>\n<\/div>\n\n<div id=\"\" class=\"element core-paragraph\">\n<p>Overall, we find that most attention focuses on in-context learning with large language models, especially on choosing a set of high-quality in-context examples. However, other areas are receiving more and more attention, including:<\/p>\n<\/div>\n\n<div id=\"\" class=\"element core-list\">\n<ul class=\"wp-block-list\"><div id=\"\" class=\"element core-list-item\">\n<li>The design of the prompt format, as it was identified as the most significant contributor to the variance in results<\/li>\n<\/div>\n\n<div id=\"\" class=\"element core-list-item\">\n<li>The design of general mitigation strategies that extend ensembling and make it more efficient<\/li>\n<\/div>\n\n<div id=\"\" class=\"element core-list-item\">\n<li>Accounting for sensitivity to the effects of randomness in comparisons and benchmarks, as a small change can lead to completely different rankings<\/li>\n<\/div><\/ul>\n<\/div>\n\n<div id=\"\" class=\"element core-paragraph\">\n<p>However, many areas are still left underexplored, such as:<\/p>\n<\/div>\n\n<div id=\"\" class=\"element core-list\">\n<ul class=\"wp-block-list\"><div id=\"\" class=\"element core-list-item\">\n<li>More in-depth analysis of the randomness factors and their 
importance, which would allow for better comparison across different experimental setups<\/li>\n<\/div>\n\n<div id=\"\" class=\"element core-list-item\">\n<li>Exploring how the interactions between randomness factors and systematic choices affect the importance of the factors<\/li>\n<\/div>\n\n<div id=\"\" class=\"element core-list-item\">\n<li>The sensitivity of parameter-efficient fine-tuning methods and its mitigation<\/li>\n<\/div><\/ul>\n<\/div>\n\n<div id=\"\" class=\"element core-paragraph\">\n<p>Finally, we aggregate the findings of our analysis of the surveyed papers, based on which we identify 7 challenges and open problems that point to future directions in this field. The main challenges include the inconsistency of findings, the limited in-depth analysis of the effects of randomness, and suboptimal experimental setups that disregard the effects of systematic choices.<\/p>\n<\/div>\n\n<div id=\"\" class=\"element core-paragraph\">\n<p>The purpose of this survey is to emphasize the importance of this research area, which has so far not received adequate attention. First, it should support both established and incoming researchers in the field. At the same time, it aims to inform researchers and practitioners who use learning with limited labelled data about the consequences of unaddressed randomness and about how to effectively prevent and deal with them. We hope this survey will help researchers better understand the negative effects of randomness and the tasks performed when dealing with them, grasp the core challenges, and focus their attention on addressing the randomness and the open problems so that the field can be advanced. 
Finally, we believe this survey will allow future work to track and compare how the area continues to advance and evolve.<\/p>\n<\/div>\n\n<div id=\"\" class=\"element core-paragraph\">\n<p>For more detailed information and findings, <a href=\"https:\/\/kinit.sk\/publication\/survey-on-stability-of-learning-with-limited-labelled-data\/\">please check out our paper<\/a>.<\/p>\n<\/div>\n\n<div id=\"\" class=\"element core-paragraph\">\n<p>We would like to thank the EU-funded projects <a href=\"https:\/\/tailor-network.eu\/\" target=\"_blank\" rel=\"noreferrer noopener\">TAILOR<\/a>, <a href=\"https:\/\/disai.eu\/\" target=\"_blank\" rel=\"noreferrer noopener\">DisAI<\/a> and <a href=\"https:\/\/www.veraai.eu\/\" target=\"_blank\" rel=\"noreferrer noopener\">vera.ai<\/a> for funding this research!<\/p>\n<\/div>","protected":false},"excerpt":{"rendered":"<p>Did you ever try to replicate the results of a specific machine learning study, but often found different performance numbers and findings to the ones observed in the official study?&#8230;<\/p>\n","protected":false},"author":26,"featured_media":33421,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"_acf_changed":false,"footnotes":""},"categories":[88,442],"tags":[175,435],"class_list":["post-33710","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-pop-science-sk","category-2024-sk","tag-conference-sk","tag-language-model-sk"],"acf":[],"yoast_head":"<!-- This site is optimized with the Yoast SEO plugin v27.4 - https:\/\/yoast.com\/product\/yoast-seo-wordpress\/ -->\n<title>Survey paper accepted to ACM Computing Surveys - Addressing sensitivity of language models to effects of randomness - KInIT<\/title>\n<meta name=\"description\" content=\"Did you ever try to replicate the results of a specific machine learning study, but often found different performance numbers and findings to the ones 
observed in the official study? Or have you ever tried to determine which model can be considered state-of-the-art for a specific task, but found that many studies report contradictory findings in this regard?\" \/>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/kinit.sk\/sk\/survey-paper-accepted-to-acm-computing-surveys-addressing-sensitivity-of-language-models-to-effects-of-randomness\/\" \/>\n<meta property=\"og:locale\" content=\"sk_SK\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"Survey paper accepted to ACM Computing Surveys - Addressing sensitivity of language models to effects of randomness - KInIT\" \/>\n<meta property=\"og:description\" content=\"Did you ever try to replicate the results of a specific machine learning study, but often found different performance numbers and findings to the ones observed in the official study? 
Or have you ever tried to determine which model can be considered state-of-the-art for a specific task, but found that many studies report contradictory findings in this regard?\" \/>\n<meta property=\"og:url\" content=\"https:\/\/kinit.sk\/sk\/survey-paper-accepted-to-acm-computing-surveys-addressing-sensitivity-of-language-models-to-effects-of-randomness\/\" \/>\n<meta property=\"og:site_name\" content=\"KInIT\" \/>\n<meta property=\"article:published_time\" content=\"2024-09-05T14:14:33+00:00\" \/>\n<meta property=\"article:modified_time\" content=\"2025-06-27T09:11:46+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/kinit.sk\/wp-content\/uploads\/2024\/09\/202409_Brano_blog_feature-img.png\" \/>\n\t<meta property=\"og:image:width\" content=\"1200\" \/>\n\t<meta property=\"og:image:height\" content=\"628\" \/>\n\t<meta property=\"og:image:type\" content=\"image\/png\" \/>\n<meta name=\"author\" content=\"Marianna Palkova\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:creator\" content=\"@kinit\" \/>\n<meta name=\"twitter:site\" content=\"@kinit\" \/>\n<meta name=\"twitter:label1\" content=\"Autor\" \/>\n\t<meta name=\"twitter:data1\" content=\"Marianna Palkova\" \/>\n\t<meta name=\"twitter:label2\" content=\"Predpokladan\u00fd \u010das \u010d\u00edtania\" \/>\n\t<meta name=\"twitter:data2\" content=\"4 min\u00faty\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\\\/\\\/schema.org\",\"@graph\":[{\"@type\":\"Article\",\"@id\":\"https:\\\/\\\/kinit.sk\\\/sk\\\/survey-paper-accepted-to-acm-computing-surveys-addressing-sensitivity-of-language-models-to-effects-of-randomness\\\/#article\",\"isPartOf\":{\"@id\":\"https:\\\/\\\/kinit.sk\\\/sk\\\/survey-paper-accepted-to-acm-computing-surveys-addressing-sensitivity-of-language-models-to-effects-of-randomness\\\/\"},\"author\":{\"name\":\"Marianna 
Palkova\",\"@id\":\"https:\\\/\\\/kinit.sk\\\/#\\\/schema\\\/person\\\/8b175aaaf3267b5bbbbb97e4a6db8cea\"},\"headline\":\"Survey paper accepted to ACM Computing Surveys &#8211; Addressing sensitivity of language models to effects of randomness\",\"datePublished\":\"2024-09-05T14:14:33+00:00\",\"dateModified\":\"2025-06-27T09:11:46+00:00\",\"mainEntityOfPage\":{\"@id\":\"https:\\\/\\\/kinit.sk\\\/sk\\\/survey-paper-accepted-to-acm-computing-surveys-addressing-sensitivity-of-language-models-to-effects-of-randomness\\\/\"},\"wordCount\":847,\"image\":{\"@id\":\"https:\\\/\\\/kinit.sk\\\/sk\\\/survey-paper-accepted-to-acm-computing-surveys-addressing-sensitivity-of-language-models-to-effects-of-randomness\\\/#primaryimage\"},\"thumbnailUrl\":\"https:\\\/\\\/kinit.sk\\\/wp-content\\\/uploads\\\/2024\\\/09\\\/202409_Brano_blog_feature-img.png\",\"keywords\":[\"conference\",\"language model\"],\"articleSection\":[\"Pop science\",\"2024\"],\"inLanguage\":\"sk-SK\"},{\"@type\":\"WebPage\",\"@id\":\"https:\\\/\\\/kinit.sk\\\/sk\\\/survey-paper-accepted-to-acm-computing-surveys-addressing-sensitivity-of-language-models-to-effects-of-randomness\\\/\",\"url\":\"https:\\\/\\\/kinit.sk\\\/sk\\\/survey-paper-accepted-to-acm-computing-surveys-addressing-sensitivity-of-language-models-to-effects-of-randomness\\\/\",\"name\":\"Survey paper accepted to ACM Computing Surveys - Addressing sensitivity of language models to effects of randomness - 
KInIT\",\"isPartOf\":{\"@id\":\"https:\\\/\\\/kinit.sk\\\/#website\"},\"primaryImageOfPage\":{\"@id\":\"https:\\\/\\\/kinit.sk\\\/sk\\\/survey-paper-accepted-to-acm-computing-surveys-addressing-sensitivity-of-language-models-to-effects-of-randomness\\\/#primaryimage\"},\"image\":{\"@id\":\"https:\\\/\\\/kinit.sk\\\/sk\\\/survey-paper-accepted-to-acm-computing-surveys-addressing-sensitivity-of-language-models-to-effects-of-randomness\\\/#primaryimage\"},\"thumbnailUrl\":\"https:\\\/\\\/kinit.sk\\\/wp-content\\\/uploads\\\/2024\\\/09\\\/202409_Brano_blog_feature-img.png\",\"datePublished\":\"2024-09-05T14:14:33+00:00\",\"dateModified\":\"2025-06-27T09:11:46+00:00\",\"author\":{\"@id\":\"https:\\\/\\\/kinit.sk\\\/#\\\/schema\\\/person\\\/8b175aaaf3267b5bbbbb97e4a6db8cea\"},\"description\":\"Did you ever try to replicate the results of a specific machine learning study, but often found different performance numbers and findings to the ones observed in the official study? Or have you ever tried to determine which model can be considered state-of-the-art for a specific task, but found that many studies report contradictory findings in this 
regard?\",\"breadcrumb\":{\"@id\":\"https:\\\/\\\/kinit.sk\\\/sk\\\/survey-paper-accepted-to-acm-computing-surveys-addressing-sensitivity-of-language-models-to-effects-of-randomness\\\/#breadcrumb\"},\"inLanguage\":\"sk-SK\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\\\/\\\/kinit.sk\\\/sk\\\/survey-paper-accepted-to-acm-computing-surveys-addressing-sensitivity-of-language-models-to-effects-of-randomness\\\/\"]}]},{\"@type\":\"ImageObject\",\"inLanguage\":\"sk-SK\",\"@id\":\"https:\\\/\\\/kinit.sk\\\/sk\\\/survey-paper-accepted-to-acm-computing-surveys-addressing-sensitivity-of-language-models-to-effects-of-randomness\\\/#primaryimage\",\"url\":\"https:\\\/\\\/kinit.sk\\\/wp-content\\\/uploads\\\/2024\\\/09\\\/202409_Brano_blog_feature-img.png\",\"contentUrl\":\"https:\\\/\\\/kinit.sk\\\/wp-content\\\/uploads\\\/2024\\\/09\\\/202409_Brano_blog_feature-img.png\",\"width\":1200,\"height\":628},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\\\/\\\/kinit.sk\\\/sk\\\/survey-paper-accepted-to-acm-computing-surveys-addressing-sensitivity-of-language-models-to-effects-of-randomness\\\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Home\",\"item\":\"https:\\\/\\\/kinit.sk\\\/sk\\\/uvod\\\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"Pop science\",\"item\":\"https:\\\/\\\/kinit.sk\\\/category\\\/pop-science\\\/\"},{\"@type\":\"ListItem\",\"position\":3,\"name\":\"Survey paper accepted to ACM Computing Surveys &#8211; Addressing sensitivity of language models to effects of randomness\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\\\/\\\/kinit.sk\\\/#website\",\"url\":\"https:\\\/\\\/kinit.sk\\\/\",\"name\":\"KInIT\",\"description\":\"Vyu\u017e\u00edvame v\u00fdskum pre \u013eud\u00ed a 
priemysel\",\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\\\/\\\/kinit.sk\\\/?s={search_term_string}\"},\"query-input\":{\"@type\":\"PropertyValueSpecification\",\"valueRequired\":true,\"valueName\":\"search_term_string\"}}],\"inLanguage\":\"sk-SK\"},{\"@type\":\"Person\",\"@id\":\"https:\\\/\\\/kinit.sk\\\/#\\\/schema\\\/person\\\/8b175aaaf3267b5bbbbb97e4a6db8cea\",\"name\":\"Marianna Palkova\"}]}<\/script>\n<!-- \/ Yoast SEO plugin. -->","yoast_head_json":{"title":"Survey paper accepted to ACM Computing Surveys - Addressing sensitivity of language models to effects of randomness - KInIT","description":"Did you ever try to replicate the results of a specific machine learning study, but often found different performance numbers and findings to the ones observed in the official study? Or have you ever tried to determine which model can be considered state-of-the-art for a specific task, but found that many studies report contradictory findings in this regard?","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/kinit.sk\/sk\/survey-paper-accepted-to-acm-computing-surveys-addressing-sensitivity-of-language-models-to-effects-of-randomness\/","og_locale":"sk_SK","og_type":"article","og_title":"Survey paper accepted to ACM Computing Surveys - Addressing sensitivity of language models to effects of randomness - KInIT","og_description":"Did you ever try to replicate the results of a specific machine learning study, but often found different performance numbers and findings to the ones observed in the official study? 
Or have you ever tried to determine which model can be considered state-of-the-art for a specific task, but found that many studies report contradictory findings in this regard?","og_url":"https:\/\/kinit.sk\/sk\/survey-paper-accepted-to-acm-computing-surveys-addressing-sensitivity-of-language-models-to-effects-of-randomness\/","og_site_name":"KInIT","article_published_time":"2024-09-05T14:14:33+00:00","article_modified_time":"2025-06-27T09:11:46+00:00","og_image":[{"width":1200,"height":628,"url":"https:\/\/kinit.sk\/wp-content\/uploads\/2024\/09\/202409_Brano_blog_feature-img.png","type":"image\/png"}],"author":"Marianna Palkova","twitter_card":"summary_large_image","twitter_creator":"@kinit","twitter_site":"@kinit","twitter_misc":{"Autor":"Marianna Palkova","Predpokladan\u00fd \u010das \u010d\u00edtania":"4 min\u00faty"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"Article","@id":"https:\/\/kinit.sk\/sk\/survey-paper-accepted-to-acm-computing-surveys-addressing-sensitivity-of-language-models-to-effects-of-randomness\/#article","isPartOf":{"@id":"https:\/\/kinit.sk\/sk\/survey-paper-accepted-to-acm-computing-surveys-addressing-sensitivity-of-language-models-to-effects-of-randomness\/"},"author":{"name":"Marianna Palkova","@id":"https:\/\/kinit.sk\/#\/schema\/person\/8b175aaaf3267b5bbbbb97e4a6db8cea"},"headline":"Survey paper accepted to ACM Computing Surveys &#8211; Addressing sensitivity of language models to effects of 
randomness","datePublished":"2024-09-05T14:14:33+00:00","dateModified":"2025-06-27T09:11:46+00:00","mainEntityOfPage":{"@id":"https:\/\/kinit.sk\/sk\/survey-paper-accepted-to-acm-computing-surveys-addressing-sensitivity-of-language-models-to-effects-of-randomness\/"},"wordCount":847,"image":{"@id":"https:\/\/kinit.sk\/sk\/survey-paper-accepted-to-acm-computing-surveys-addressing-sensitivity-of-language-models-to-effects-of-randomness\/#primaryimage"},"thumbnailUrl":"https:\/\/kinit.sk\/wp-content\/uploads\/2024\/09\/202409_Brano_blog_feature-img.png","keywords":["conference","language model"],"articleSection":["Pop science","2024"],"inLanguage":"sk-SK"},{"@type":"WebPage","@id":"https:\/\/kinit.sk\/sk\/survey-paper-accepted-to-acm-computing-surveys-addressing-sensitivity-of-language-models-to-effects-of-randomness\/","url":"https:\/\/kinit.sk\/sk\/survey-paper-accepted-to-acm-computing-surveys-addressing-sensitivity-of-language-models-to-effects-of-randomness\/","name":"Survey paper accepted to ACM Computing Surveys - Addressing sensitivity of language models to effects of randomness - KInIT","isPartOf":{"@id":"https:\/\/kinit.sk\/#website"},"primaryImageOfPage":{"@id":"https:\/\/kinit.sk\/sk\/survey-paper-accepted-to-acm-computing-surveys-addressing-sensitivity-of-language-models-to-effects-of-randomness\/#primaryimage"},"image":{"@id":"https:\/\/kinit.sk\/sk\/survey-paper-accepted-to-acm-computing-surveys-addressing-sensitivity-of-language-models-to-effects-of-randomness\/#primaryimage"},"thumbnailUrl":"https:\/\/kinit.sk\/wp-content\/uploads\/2024\/09\/202409_Brano_blog_feature-img.png","datePublished":"2024-09-05T14:14:33+00:00","dateModified":"2025-06-27T09:11:46+00:00","author":{"@id":"https:\/\/kinit.sk\/#\/schema\/person\/8b175aaaf3267b5bbbbb97e4a6db8cea"},"description":"Did you ever try to replicate the results of a specific machine learning study, but often found different performance numbers and findings to the ones observed in the official study? 
Or have you ever tried to determine which model can be considered state-of-the-art for a specific task, but found that many studies report contradictory findings in this regard?","breadcrumb":{"@id":"https:\/\/kinit.sk\/sk\/survey-paper-accepted-to-acm-computing-surveys-addressing-sensitivity-of-language-models-to-effects-of-randomness\/#breadcrumb"},"inLanguage":"sk-SK","potentialAction":[{"@type":"ReadAction","target":["https:\/\/kinit.sk\/sk\/survey-paper-accepted-to-acm-computing-surveys-addressing-sensitivity-of-language-models-to-effects-of-randomness\/"]}]},{"@type":"ImageObject","inLanguage":"sk-SK","@id":"https:\/\/kinit.sk\/sk\/survey-paper-accepted-to-acm-computing-surveys-addressing-sensitivity-of-language-models-to-effects-of-randomness\/#primaryimage","url":"https:\/\/kinit.sk\/wp-content\/uploads\/2024\/09\/202409_Brano_blog_feature-img.png","contentUrl":"https:\/\/kinit.sk\/wp-content\/uploads\/2024\/09\/202409_Brano_blog_feature-img.png","width":1200,"height":628},{"@type":"BreadcrumbList","@id":"https:\/\/kinit.sk\/sk\/survey-paper-accepted-to-acm-computing-surveys-addressing-sensitivity-of-language-models-to-effects-of-randomness\/#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Home","item":"https:\/\/kinit.sk\/sk\/uvod\/"},{"@type":"ListItem","position":2,"name":"Pop science","item":"https:\/\/kinit.sk\/category\/pop-science\/"},{"@type":"ListItem","position":3,"name":"Survey paper accepted to ACM Computing Surveys &#8211; Addressing sensitivity of language models to effects of randomness"}]},{"@type":"WebSite","@id":"https:\/\/kinit.sk\/#website","url":"https:\/\/kinit.sk\/","name":"KInIT","description":"Vyu\u017e\u00edvame v\u00fdskum pre \u013eud\u00ed a 
priemysel","potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/kinit.sk\/?s={search_term_string}"},"query-input":{"@type":"PropertyValueSpecification","valueRequired":true,"valueName":"search_term_string"}}],"inLanguage":"sk-SK"},{"@type":"Person","@id":"https:\/\/kinit.sk\/#\/schema\/person\/8b175aaaf3267b5bbbbb97e4a6db8cea","name":"Marianna Palkova"}]}},"_links":{"self":[{"href":"https:\/\/kinit.sk\/sk\/wp-json\/wp\/v2\/posts\/33710","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/kinit.sk\/sk\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/kinit.sk\/sk\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/kinit.sk\/sk\/wp-json\/wp\/v2\/users\/26"}],"replies":[{"embeddable":true,"href":"https:\/\/kinit.sk\/sk\/wp-json\/wp\/v2\/comments?post=33710"}],"version-history":[{"count":1,"href":"https:\/\/kinit.sk\/sk\/wp-json\/wp\/v2\/posts\/33710\/revisions"}],"predecessor-version":[{"id":33711,"href":"https:\/\/kinit.sk\/sk\/wp-json\/wp\/v2\/posts\/33710\/revisions\/33711"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/kinit.sk\/sk\/wp-json\/wp\/v2\/media\/33421"}],"wp:attachment":[{"href":"https:\/\/kinit.sk\/sk\/wp-json\/wp\/v2\/media?parent=33710"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/kinit.sk\/sk\/wp-json\/wp\/v2\/categories?post=33710"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/kinit.sk\/sk\/wp-json\/wp\/v2\/tags?post=33710"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}