{"id":33706,"date":"2024-09-23T10:52:15","date_gmt":"2024-09-23T08:52:15","guid":{"rendered":"https:\/\/kinit.sk\/dealing-with-sensitivity-of-large-language-models\/"},"modified":"2024-10-07T12:02:44","modified_gmt":"2024-10-07T10:02:44","slug":"dealing-with-sensitivity-of-large-language-models","status":"publish","type":"post","link":"https:\/\/kinit.sk\/sk\/dealing-with-sensitivity-of-large-language-models\/","title":{"rendered":"Dealing with sensitivity of Large Language Models"},"content":{"rendered":"<div id=\"\" class=\"element core-paragraph\">\n<p>Large language models, such as ChatGPT, have recently become popular and are widely used as assistants for many tasks, by researchers as well as people without extensive AI knowledge. As such, they represent one of the most popular uses of learning with limited labelled data. Large language models allow for more effective work by summarising longer texts, acting as \u201cdiscussion partners\u201d when coming up with new ideas, generating texts (such as emails) based on a few keywords, helping with simple planning, or handling categorisation\/classification tasks such as determining sentiment.<\/p>\n<\/div>\n\n<div id=\"\" class=\"element core-paragraph\">\n<p>Despite their popularity and widespread adoption, not many people are aware of their shortcomings and weaknesses. <strong>One of the most significant shortcomings is their instability and sensitivity to the effects of randomness, which negatively affects their effectiveness and trustworthiness.<\/strong><\/p>\n<\/div>\n\n<div id=\"\" class=\"element core-paragraph\">\n<p>This behaviour is best illustrated by the texts or instructions we give to the language model, in which we describe what we want it to do \u2013 called \u201cprompts\u201d by AI researchers and practitioners. 
<strong>How these prompts are written has a significant impact on whether the model will correctly accomplish the task or fail completely.<\/strong> As many are aware, using a completely different prompt will lead to a completely different answer from the model. However, even with the exact same prompt, replacing a single word with its synonym, or changing a part that carries no semantic meaning (such as punctuation), can lead to the same effect.<\/p>\n<\/div>\n\n<div id=\"\" class=\"element core-paragraph\">\n<p>We can demonstrate this with the task of determining the sentiment of a sentence \u2013 an easy task for people. We have the sentence <em>\u201cThe movie was terrific\u201d<\/em> and two prompts that differ only in one word:<\/p>\n<\/div>\n\n<div id=\"\" class=\"element core-list\">\n<ul class=\"wp-block-list\"><div id=\"\" class=\"element core-list-item\">\n<li><em>Determine sentiment of the <\/em><strong><em>following<\/em><\/strong><em> sentence<\/em><\/li>\n<\/div>\n\n<div id=\"\" class=\"element core-list-item\">\n<li><em>Determine sentiment of the <\/em><strong><em>subsequent <\/em><\/strong><em>sentence<\/em><\/li>\n<\/div><\/ul>\n<\/div>\n\n<div id=\"\" class=\"element core-paragraph\">\n<p>Oftentimes, the answer for the first prompt will be <em>\u201cpositive\u201d<\/em>, but for the second one it will be <em>\u201cnegative\u201d<\/em>. Although ChatGPT can handle this small change for sentiment, a task it was extensively trained on, such changes can cause problems for other tasks.<\/p>\n<\/div>\n\n<div id=\"\" class=\"element core-paragraph\">\n<p><strong>Besides the wording of the prompts, the order in which we give the instructions and the examples can cause different answers.<\/strong> For example, consider the following two prompts, which differ only in their order:<\/p>\n<\/div>\n\n<div id=\"\" class=\"element core-list\">\n<ul class=\"wp-block-list\"><div id=\"\" class=\"element core-list-item\">\n<li><em>Which one 
is larger, 13.11 or 13.8?<\/em><\/li>\n<\/div>\n\n<div id=\"\" class=\"element core-list-item\">\n<li><em>13.11 and 13.8, which one is larger?<\/em><\/li>\n<\/div><\/ul>\n<\/div>\n\n<div id=\"\" class=\"element core-paragraph\">\n<p>For the first prompt, the model will correctly answer that 13.8 is larger. However, if we use the second one (where the parts of the instruction are swapped, but it still makes sense), we will get the incorrect answer that 13.11 is larger \u2013 this is something that even ChatGPT still struggles with.<\/p>\n<\/div>\n\n<div id=\"\" class=\"element core-paragraph\">\n<p>When trying only a few cases, the problem of sensitivity may not seem significant or may even be obscured (as with the sentiment example). However, it can cause significant problems with more extensive use of the models \u2013 taking the sentiment example into consideration, the difference between getting the correct answer in 9 out of 10 cases and in 7 out of 10 becomes significant when applied across hundreds or even thousands of examples.<\/p>\n<\/div>\n\n<div id=\"\" class=\"element core-paragraph\">\n<p><strong>So how can we deal with this problem?<\/strong><\/p>\n<\/div>\n\n<div id=\"\" class=\"element core-paragraph\">\n<p>The good news is that <strong>there are multiple ways to deal with this problem<\/strong> \u2013 although they make the use of the large language models slightly slower and more expensive. 
Here, we will cover three of the most popular ones based on <a href=\"https:\/\/kinit.sk\/publication\/survey-on-stability-of-learning-with-limited-labelled-data\/\" target=\"_blank\" rel=\"noreferrer noopener\">our comprehensive survey of the papers that address this sensitivity<\/a>, which was recently published in a prestigious journal.<\/p>\n<\/div>\n\n<div id=\"\" class=\"element core-paragraph\">\n<p><strong><em>Leave nothing to chance \u2013 write more complete prompts.<\/em><\/strong> The more details are included in the prompt, the higher the probability that the answer will be what we are looking for. As such, the prompt should include as much information and detail as possible. For example, when we ask the large language model to write an email for us, it may not be enough to only provide it with some keywords it should include. Instead, the prompt should also cover things like:<\/p>\n<\/div>\n\n<div id=\"\" class=\"element core-list\">\n<ul class=\"wp-block-list\"><div id=\"\" class=\"element core-list-item\">\n<li>Which keywords\/sentences are more important and should therefore be highlighted<\/li>\n<\/div>\n\n<div id=\"\" class=\"element core-list-item\">\n<li>What language should be used, formal or informal?<\/li>\n<\/div>\n\n<div id=\"\" class=\"element core-list-item\">\n<li>For whom is the email meant? This will mostly affect how it is worded. 
For example, an instruction such as \u201cword it so that it can be read and understood by elementary school children\u201d works rather well<\/li>\n<\/div><\/ul>\n<\/div>\n\n<div id=\"\" class=\"element core-paragraph\">\n<p>When preparing the prompts, do not be afraid to improve them iteratively together with the model \u2013 first use a simple prompt, see what the model outputs, and then either modify the original prompt or ask the model to include additional information (e.g., \u201ccan you also include a paragraph saying that I would like them to answer as soon as possible?\u201d).<\/p>\n<\/div>\n\n<div id=\"\" class=\"element core-paragraph\">\n<p>In most cases, the length of the prompt will not be an issue. However, what can be an issue is the number of back-and-forth exchanges already in the conversation, as models have been shown to struggle when there are too many of them. In case the model either stops including the new instructions or starts to forget the older ones, do not hesitate to create a complete prompt with all the instructions to get it back on track.<\/p>\n<\/div>\n\n<div id=\"\" class=\"element core-paragraph\">\n<p><strong>A well-written, complete prompt will often deal with the sensitivity to small changes.<\/strong><\/p>\n<\/div>\n\n<div id=\"\" class=\"element core-paragraph\">\n<p><strong><em>Use examples wherever possible.<\/em><\/strong> Even though just giving the model instructions works in most cases, showing examples of how it should solve the problem\/task can significantly improve its answers. Taking the sentiment task as an example, showing the model some sentences that you consider positive, neutral or negative can be beneficial. 
However, an important aspect is how to choose the examples to show \u2013 it has been shown that the most informative samples, those that represent the task best, tend to bring the most benefit.<\/p>\n<\/div>\n\n<div id=\"\" class=\"element core-paragraph\">\n<p><strong>Showing the examples as part of the prompt also deals with the sensitivity,<\/strong> as the model does not have to work purely from the instructions, but can also perform <strong>a kind of \u201cimitation\u201d<\/strong>.<\/p>\n<\/div>\n\n<div id=\"\" class=\"element core-paragraph\">\n<p><strong><em>Ask multiple times with different prompts and combine the answers.<\/em><\/strong> Instead of asking the model a single time with a single prompt, it is often beneficial to create multiple prompts, each with slightly different instructions and wording, and get an answer for each of them from the model. <strong>The idea behind this is that the different prompts may lead the model to focus on different parts<\/strong> \u2013 for example, when writing an email, one prompt may lead to a better introduction, while another may better highlight the most important issues. However, when combining them into a single prompt, their benefits may not show. As such, leveraging the strengths of different prompts (with a different order of instructions or different examples) can be achieved through repeated questions.<\/p>\n<\/div>\n\n<div id=\"\" class=\"element core-paragraph\">\n<p>After getting an answer for each of the prompts, the answers can be combined. In the simpler cases, this can be done via majority voting \u2013 for example, in the sentiment task, if we have 10 prompts and in 7 out of 10 cases the model returns positive sentiment, we can say that the sentence indeed has positive sentiment (or neutral when it is 50:50). In the more complicated cases, where a lot of text is generated, there are two solutions. 
<strong>One possibility is to combine the answers manually<\/strong> \u2013 by choosing parts from each answer and combining them. <strong>A better solution is to do it automatically, by utilising the language model itself<\/strong> \u2013 we can take all the answers, give them to another language model (or the same one that generated them), and ask it to combine them into one. Although this may introduce further problems, large language models excel at tasks such as summarisation, so the issues should be minimal \u2013 especially when using well-written and complete prompts.<\/p>\n<\/div>\n\n<div id=\"\" class=\"element core-paragraph\">\n<p><strong>Asking the model multiple times and combining the answers is the most effective way to deal with the sensitivity and to obtain the best possible answers, but also the most expensive one.<\/strong> Its most significant strength is that it deals not only with the sensitivity to how the prompt is worded, but also with other factors such as the order of instructions, the examples we choose, or even the inherent randomness in the model itself.<\/p>\n<\/div>\n\n<div id=\"\" class=\"element core-paragraph\">\n<p>For more detailed information and findings, <a href=\"https:\/\/kinit.sk\/publication\/survey-on-stability-of-learning-with-limited-labelled-data\/\">please check out our scientific paper<\/a>.<\/p>\n<\/div>\n\n<div id=\"\" class=\"element core-paragraph\">\n<p><strong>We would like to thank the PwC Endowment Fund at the Pontis Foundation for funding this research!<\/strong><\/p>\n<\/div>\n\n<div id=\"\" class=\"element core-paragraph\">\n<p class=\"has-text-align-center\"><a href=\"https:\/\/drive.google.com\/file\/d\/1XX5mDZjINo2c9R9jnQ3PKgaVLZx3El14\/view?usp=drive_link\"><img decoding=\"async\" 
data-src=\"https:\/\/lh7-rt.googleusercontent.com\/docsz\/AD_4nXc2ijc1K4LpC9plH8tt8J-yunLuYiAWE_1asEMn4pHAqW6W91fl2c9_G8h1fz3zUFOJ6EqGwkFk86HFOs2QT4k8HZXdTbnE3IUE2erWGBJIt-nekQGDmdo51T046-tfAbniFewJo6Q_BIvYuJP24DH1a-iT?key=EdkoaM-lJKB6KyKaluiZbg\" width=\"89\" height=\"89\" src=\"data:image\/svg+xml;base64,PHN2ZyB3aWR0aD0iMSIgaGVpZ2h0PSIxIiB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciPjwvc3ZnPg==\" class=\"lazyload\" style=\"--smush-placeholder-width: 89px; --smush-placeholder-aspect-ratio: 89\/89;\"><\/a><\/p>\n<\/div>","protected":false},"excerpt":{"rendered":"<p>Large language models, such as ChatGPT, have become popular recently and are widely used as assistants for many tasks by many researchers as well as people without extensive AI knowledge&#8230;.<\/p>\n","protected":false},"author":26,"featured_media":31266,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"_acf_changed":false,"footnotes":""},"categories":[83,88,442],"tags":[187],"class_list":["post-33706","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-news-sk","category-pop-science-sk","category-2024-sk","tag-nlp-sk"],"acf":[],"yoast_head":"<!-- This site is optimized with the Yoast SEO plugin v27.4 - https:\/\/yoast.com\/product\/yoast-seo-wordpress\/ -->\n<title>Dealing with sensitivity of Large Language Models - KInIT<\/title>\n<meta name=\"description\" content=\"Large language models, such as ChatGPT, have become popular recently and are widely used as assistants for many tasks by many researchers as well as people without extensive AI knowledge. 
As such, they represent one of the most popular uses of learning with limited labelled data.\" \/>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/kinit.sk\/sk\/dealing-with-sensitivity-of-large-language-models\/\" \/>\n<meta property=\"og:locale\" content=\"sk_SK\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"Dealing with sensitivity of Large Language Models - KInIT\" \/>\n<meta property=\"og:description\" content=\"Large language models, such as ChatGPT, have become popular recently and are widely used as assistants for many tasks by many researchers as well as people without extensive AI knowledge. As such, they represent one of the most popular uses of learning with limited labelled data.\" \/>\n<meta property=\"og:url\" content=\"https:\/\/kinit.sk\/sk\/dealing-with-sensitivity-of-large-language-models\/\" \/>\n<meta property=\"og:site_name\" content=\"KInIT\" \/>\n<meta property=\"article:published_time\" content=\"2024-09-23T08:52:15+00:00\" \/>\n<meta property=\"article:modified_time\" content=\"2024-10-07T10:02:44+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/kinit.sk\/wp-content\/uploads\/2024\/02\/202401_projects_features_update_o2.png\" \/>\n\t<meta property=\"og:image:width\" content=\"1201\" \/>\n\t<meta property=\"og:image:height\" content=\"629\" \/>\n\t<meta property=\"og:image:type\" content=\"image\/png\" \/>\n<meta name=\"author\" content=\"Marianna Palkova\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:creator\" content=\"@kinit\" \/>\n<meta name=\"twitter:site\" content=\"@kinit\" \/>\n<meta name=\"twitter:label1\" content=\"Autor\" \/>\n\t<meta name=\"twitter:data1\" content=\"Marianna Palkova\" \/>\n\t<meta name=\"twitter:label2\" content=\"Predpokladan\u00fd \u010das \u010d\u00edtania\" \/>\n\t<meta name=\"twitter:data2\" 
content=\"7 min\u00fat\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\\\/\\\/schema.org\",\"@graph\":[{\"@type\":\"Article\",\"@id\":\"https:\\\/\\\/kinit.sk\\\/sk\\\/dealing-with-sensitivity-of-large-language-models\\\/#article\",\"isPartOf\":{\"@id\":\"https:\\\/\\\/kinit.sk\\\/sk\\\/dealing-with-sensitivity-of-large-language-models\\\/\"},\"author\":{\"name\":\"Marianna Palkova\",\"@id\":\"https:\\\/\\\/kinit.sk\\\/#\\\/schema\\\/person\\\/8b175aaaf3267b5bbbbb97e4a6db8cea\"},\"headline\":\"Dealing with sensitivity of Large Language Models\",\"datePublished\":\"2024-09-23T08:52:15+00:00\",\"dateModified\":\"2024-10-07T10:02:44+00:00\",\"mainEntityOfPage\":{\"@id\":\"https:\\\/\\\/kinit.sk\\\/sk\\\/dealing-with-sensitivity-of-large-language-models\\\/\"},\"wordCount\":1378,\"image\":{\"@id\":\"https:\\\/\\\/kinit.sk\\\/sk\\\/dealing-with-sensitivity-of-large-language-models\\\/#primaryimage\"},\"thumbnailUrl\":\"https:\\\/\\\/kinit.sk\\\/wp-content\\\/uploads\\\/2024\\\/02\\\/202401_projects_features_update_o2.png\",\"keywords\":[\"nlp\"],\"articleSection\":[\"News\",\"Pop science\",\"2024\"],\"inLanguage\":\"sk-SK\"},{\"@type\":\"WebPage\",\"@id\":\"https:\\\/\\\/kinit.sk\\\/sk\\\/dealing-with-sensitivity-of-large-language-models\\\/\",\"url\":\"https:\\\/\\\/kinit.sk\\\/sk\\\/dealing-with-sensitivity-of-large-language-models\\\/\",\"name\":\"Dealing with sensitivity of Large Language Models - 
KInIT\",\"isPartOf\":{\"@id\":\"https:\\\/\\\/kinit.sk\\\/#website\"},\"primaryImageOfPage\":{\"@id\":\"https:\\\/\\\/kinit.sk\\\/sk\\\/dealing-with-sensitivity-of-large-language-models\\\/#primaryimage\"},\"image\":{\"@id\":\"https:\\\/\\\/kinit.sk\\\/sk\\\/dealing-with-sensitivity-of-large-language-models\\\/#primaryimage\"},\"thumbnailUrl\":\"https:\\\/\\\/kinit.sk\\\/wp-content\\\/uploads\\\/2024\\\/02\\\/202401_projects_features_update_o2.png\",\"datePublished\":\"2024-09-23T08:52:15+00:00\",\"dateModified\":\"2024-10-07T10:02:44+00:00\",\"author\":{\"@id\":\"https:\\\/\\\/kinit.sk\\\/#\\\/schema\\\/person\\\/8b175aaaf3267b5bbbbb97e4a6db8cea\"},\"description\":\"Large language models, such as ChatGPT, have become popular recently and are widely used as assistants for many tasks by many researchers as well as people without extensive AI knowledge. As such, they represent one of the most popular uses of learning with limited labelled data.\",\"breadcrumb\":{\"@id\":\"https:\\\/\\\/kinit.sk\\\/sk\\\/dealing-with-sensitivity-of-large-language-models\\\/#breadcrumb\"},\"inLanguage\":\"sk-SK\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\\\/\\\/kinit.sk\\\/sk\\\/dealing-with-sensitivity-of-large-language-models\\\/\"]}]},{\"@type\":\"ImageObject\",\"inLanguage\":\"sk-SK\",\"@id\":\"https:\\\/\\\/kinit.sk\\\/sk\\\/dealing-with-sensitivity-of-large-language-models\\\/#primaryimage\",\"url\":\"https:\\\/\\\/kinit.sk\\\/wp-content\\\/uploads\\\/2024\\\/02\\\/202401_projects_features_update_o2.png\",\"contentUrl\":\"https:\\\/\\\/kinit.sk\\\/wp-content\\\/uploads\\\/2024\\\/02\\\/202401_projects_features_update_o2.png\",\"width\":1201,\"height\":629},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\\\/\\\/kinit.sk\\\/sk\\\/dealing-with-sensitivity-of-large-language-models\\\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Home\",\"item\":\"https:\\\/\\\/kinit.sk\\\/sk\\\/uvod\\\/\"},{\"@type\":\"ListItem\",\"positi
on\":2,\"name\":\"News\",\"item\":\"https:\\\/\\\/kinit.sk\\\/category\\\/news\\\/\"},{\"@type\":\"ListItem\",\"position\":3,\"name\":\"Dealing with sensitivity of Large Language Models\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\\\/\\\/kinit.sk\\\/#website\",\"url\":\"https:\\\/\\\/kinit.sk\\\/\",\"name\":\"KInIT\",\"description\":\"Vyu\u017e\u00edvame v\u00fdskum pre \u013eud\u00ed a priemysel\",\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\\\/\\\/kinit.sk\\\/?s={search_term_string}\"},\"query-input\":{\"@type\":\"PropertyValueSpecification\",\"valueRequired\":true,\"valueName\":\"search_term_string\"}}],\"inLanguage\":\"sk-SK\"},{\"@type\":\"Person\",\"@id\":\"https:\\\/\\\/kinit.sk\\\/#\\\/schema\\\/person\\\/8b175aaaf3267b5bbbbb97e4a6db8cea\",\"name\":\"Marianna Palkova\"}]}<\/script>\n<!-- \/ Yoast SEO plugin. -->","yoast_head_json":{"title":"Dealing with sensitivity of Large Language Models - KInIT","description":"Large language models, such as ChatGPT, have become popular recently and are widely used as assistants for many tasks by many researchers as well as people without extensive AI knowledge. As such, they represent one of the most popular uses of learning with limited labelled data.","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/kinit.sk\/sk\/dealing-with-sensitivity-of-large-language-models\/","og_locale":"sk_SK","og_type":"article","og_title":"Dealing with sensitivity of Large Language Models - KInIT","og_description":"Large language models, such as ChatGPT, have become popular recently and are widely used as assistants for many tasks by many researchers as well as people without extensive AI knowledge. 
As such, they represent one of the most popular uses of learning with limited labelled data.","og_url":"https:\/\/kinit.sk\/sk\/dealing-with-sensitivity-of-large-language-models\/","og_site_name":"KInIT","article_published_time":"2024-09-23T08:52:15+00:00","article_modified_time":"2024-10-07T10:02:44+00:00","og_image":[{"width":1201,"height":629,"url":"https:\/\/kinit.sk\/wp-content\/uploads\/2024\/02\/202401_projects_features_update_o2.png","type":"image\/png"}],"author":"Marianna Palkova","twitter_card":"summary_large_image","twitter_creator":"@kinit","twitter_site":"@kinit","twitter_misc":{"Autor":"Marianna Palkova","Predpokladan\u00fd \u010das \u010d\u00edtania":"7 min\u00fat"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"Article","@id":"https:\/\/kinit.sk\/sk\/dealing-with-sensitivity-of-large-language-models\/#article","isPartOf":{"@id":"https:\/\/kinit.sk\/sk\/dealing-with-sensitivity-of-large-language-models\/"},"author":{"name":"Marianna Palkova","@id":"https:\/\/kinit.sk\/#\/schema\/person\/8b175aaaf3267b5bbbbb97e4a6db8cea"},"headline":"Dealing with sensitivity of Large Language Models","datePublished":"2024-09-23T08:52:15+00:00","dateModified":"2024-10-07T10:02:44+00:00","mainEntityOfPage":{"@id":"https:\/\/kinit.sk\/sk\/dealing-with-sensitivity-of-large-language-models\/"},"wordCount":1378,"image":{"@id":"https:\/\/kinit.sk\/sk\/dealing-with-sensitivity-of-large-language-models\/#primaryimage"},"thumbnailUrl":"https:\/\/kinit.sk\/wp-content\/uploads\/2024\/02\/202401_projects_features_update_o2.png","keywords":["nlp"],"articleSection":["News","Pop science","2024"],"inLanguage":"sk-SK"},{"@type":"WebPage","@id":"https:\/\/kinit.sk\/sk\/dealing-with-sensitivity-of-large-language-models\/","url":"https:\/\/kinit.sk\/sk\/dealing-with-sensitivity-of-large-language-models\/","name":"Dealing with sensitivity of Large Language Models - 
KInIT","isPartOf":{"@id":"https:\/\/kinit.sk\/#website"},"primaryImageOfPage":{"@id":"https:\/\/kinit.sk\/sk\/dealing-with-sensitivity-of-large-language-models\/#primaryimage"},"image":{"@id":"https:\/\/kinit.sk\/sk\/dealing-with-sensitivity-of-large-language-models\/#primaryimage"},"thumbnailUrl":"https:\/\/kinit.sk\/wp-content\/uploads\/2024\/02\/202401_projects_features_update_o2.png","datePublished":"2024-09-23T08:52:15+00:00","dateModified":"2024-10-07T10:02:44+00:00","author":{"@id":"https:\/\/kinit.sk\/#\/schema\/person\/8b175aaaf3267b5bbbbb97e4a6db8cea"},"description":"Large language models, such as ChatGPT, have become popular recently and are widely used as assistants for many tasks by many researchers as well as people without extensive AI knowledge. As such, they represent one of the most popular uses of learning with limited labelled data.","breadcrumb":{"@id":"https:\/\/kinit.sk\/sk\/dealing-with-sensitivity-of-large-language-models\/#breadcrumb"},"inLanguage":"sk-SK","potentialAction":[{"@type":"ReadAction","target":["https:\/\/kinit.sk\/sk\/dealing-with-sensitivity-of-large-language-models\/"]}]},{"@type":"ImageObject","inLanguage":"sk-SK","@id":"https:\/\/kinit.sk\/sk\/dealing-with-sensitivity-of-large-language-models\/#primaryimage","url":"https:\/\/kinit.sk\/wp-content\/uploads\/2024\/02\/202401_projects_features_update_o2.png","contentUrl":"https:\/\/kinit.sk\/wp-content\/uploads\/2024\/02\/202401_projects_features_update_o2.png","width":1201,"height":629},{"@type":"BreadcrumbList","@id":"https:\/\/kinit.sk\/sk\/dealing-with-sensitivity-of-large-language-models\/#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Home","item":"https:\/\/kinit.sk\/sk\/uvod\/"},{"@type":"ListItem","position":2,"name":"News","item":"https:\/\/kinit.sk\/category\/news\/"},{"@type":"ListItem","position":3,"name":"Dealing with sensitivity of Large Language 
Models"}]},{"@type":"WebSite","@id":"https:\/\/kinit.sk\/#website","url":"https:\/\/kinit.sk\/","name":"KInIT","description":"Vyu\u017e\u00edvame v\u00fdskum pre \u013eud\u00ed a priemysel","potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/kinit.sk\/?s={search_term_string}"},"query-input":{"@type":"PropertyValueSpecification","valueRequired":true,"valueName":"search_term_string"}}],"inLanguage":"sk-SK"},{"@type":"Person","@id":"https:\/\/kinit.sk\/#\/schema\/person\/8b175aaaf3267b5bbbbb97e4a6db8cea","name":"Marianna Palkova"}]}},"_links":{"self":[{"href":"https:\/\/kinit.sk\/sk\/wp-json\/wp\/v2\/posts\/33706","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/kinit.sk\/sk\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/kinit.sk\/sk\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/kinit.sk\/sk\/wp-json\/wp\/v2\/users\/26"}],"replies":[{"embeddable":true,"href":"https:\/\/kinit.sk\/sk\/wp-json\/wp\/v2\/comments?post=33706"}],"version-history":[{"count":1,"href":"https:\/\/kinit.sk\/sk\/wp-json\/wp\/v2\/posts\/33706\/revisions"}],"predecessor-version":[{"id":33707,"href":"https:\/\/kinit.sk\/sk\/wp-json\/wp\/v2\/posts\/33706\/revisions\/33707"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/kinit.sk\/sk\/wp-json\/wp\/v2\/media\/31266"}],"wp:attachment":[{"href":"https:\/\/kinit.sk\/sk\/wp-json\/wp\/v2\/media?parent=33706"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/kinit.sk\/sk\/wp-json\/wp\/v2\/categories?post=33706"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/kinit.sk\/sk\/wp-json\/wp\/v2\/tags?post=33706"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}
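The third strategy described in the article body, asking with several paraphrased prompts and combining the classification answers by majority vote (falling back to neutral on a 50:50 split), can be sketched as follows. This is a minimal illustration, not code from the survey; `PROMPT_VARIANTS` and `query_model` are hypothetical placeholders for whichever LLM API you use:

```python
from collections import Counter

# Hypothetical paraphrased prompts for the same task; in practice each one
# would be formatted with the input text and sent to the model separately.
PROMPT_VARIANTS = [
    "Determine sentiment of the following sentence: {text}",
    "Determine sentiment of the subsequent sentence: {text}",
    "What is the sentiment (positive/negative/neutral) of: {text}",
]

def query_model(prompt: str) -> str:
    """Hypothetical stand-in for a real LLM API call."""
    raise NotImplementedError("replace with a call to your model of choice")

def majority_vote(answers: list[str]) -> str:
    """Combine per-prompt answers; return 'neutral' when the top labels tie."""
    counts = Counter(a.strip().lower() for a in answers)
    (top_label, top_count), *rest = counts.most_common()
    if rest and rest[0][1] == top_count:  # e.g. a 50:50 split between labels
        return "neutral"
    return top_label

# Example from the article: 7 of 10 answers say "positive".
answers = ["positive"] * 7 + ["negative"] * 3
print(majority_vote(answers))  # -> positive
```

For free-form outputs (such as generated emails), majority voting does not apply; there the answers would instead be concatenated into a new prompt asking the model to merge them, as the article describes.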