{"id":29141,"date":"2023-10-19T14:48:28","date_gmt":"2023-10-19T12:48:28","guid":{"rendered":"https:\/\/kinit.sk\/modern-unsupervised-learning-can-we-bootstrap-our-own-latent\/"},"modified":"2023-10-19T15:48:08","modified_gmt":"2023-10-19T13:48:08","slug":"modern-unsupervised-learning-can-we-bootstrap-our-own-latent","status":"publish","type":"post","link":"https:\/\/kinit.sk\/sk\/modern-unsupervised-learning-can-we-bootstrap-our-own-latent\/","title":{"rendered":"Modern Unsupervised Learning &#8211; Can We Bootstrap Our Own Latent?\u00a0"},"content":{"rendered":"<div id=\"\" class=\"element core-paragraph\">\n<p><em>This blog post is a part of the <\/em><a href=\"https:\/\/kinit.sk\/sk\/letna-skola-strojoveho-ucenia-v-kosiciach-tyzden-obohacujucich-poznatkov-a-networkingu\/\"><em>EEML Summer School 2023<\/em><\/a><em> series. It\u2019s a series of impressions that we, as doctoral students, got from attending the EEML Summer School 2023 in Ko\u0161ice, Slovakia.&nbsp;<\/em><\/p>\n<\/div>\n\n<div id=\"\" class=\"element core-image\">\n<figure class=\"wp-block-image is-resized\"><img decoding=\"async\" data-src=\"https:\/\/lh6.googleusercontent.com\/D5KZnAdcII2-gGNp0JKrLDY8sxp_bdK4VD6mRAnjc1fIh4wP7WtfUPU_FMZCcDipvGM-TtPKYzy9GMmjAQKbwyqIPM0bdX7zd63XmHWZBretj7SpWyb1T1YAbOEab8DqHwCFOFF1DaEDwB5jJBgD1EY\" alt=\"\" style=\"--smush-placeholder-width: 659px; --smush-placeholder-aspect-ratio: 659\/330;width:659px;height:330px\" width=\"659\" height=\"330\" src=\"data:image\/svg+xml;base64,PHN2ZyB3aWR0aD0iMSIgaGVpZ2h0PSIxIiB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciPjwvc3ZnPg==\" class=\"lazyload\" \/><figcaption class=\"wp-element-caption\">Figure 1: The motivation picture for <em>bootstrapping<\/em> comes from the idiom \u201cto pull oneself up by one\u2019s own bootstraps\u201d. 
<a href=\"https:\/\/img.huffingtonpost.com\/asset\/5b6b3f1f2000002d00349e9d.jpeg\" target=\"_blank\" rel=\"noreferrer noopener\">Source<\/a>.&nbsp;<\/figcaption><\/figure>\n<\/div>\n\n<div id=\"\" class=\"element core-paragraph\">\n<p>AI solutions, mainly deep neural networks nowadays, are often used to create useful representations of the problem we have at hand. These can, for example, be representations of images, videos, sounds, text, user behavior, behavior of programs, and much much more. In the past, supervised solutions achieved much better results in terms of good representations of these modalities.&nbsp;<\/p>\n<\/div>\n\n<div id=\"\" class=\"element core-paragraph\">\n<p>However, the situation is slowly changing with the advent of new unsupervised learning solutions. In this post, we will introduce one of them &#8211; Bootstrap Your Own Latent (<a href=\"https:\/\/arxiv.org\/abs\/2006.07733\" target=\"_blank\" rel=\"noreferrer noopener\">BYOL<\/a>). BYOL is inspired by previously published contrastive learning methods, although it cannot be strictly considered a contrastive learning method. It was introduced to us at the Eastern European Machine Learning Summer School 2023 by one of its authors, <a href=\"https:\/\/www.linkedin.com\/in\/michalvalko\/\" target=\"_blank\" rel=\"noreferrer noopener\">Michal Valko<\/a>.&nbsp;<\/p>\n<\/div>\n\n<div id=\"\" class=\"element core-heading\">\n<h3 class=\"wp-block-heading\">Contrastive learning<\/h3>\n<\/div>\n\n<div id=\"\" class=\"element core-paragraph\">\n<p>Typical contrastive learning methods work based on a relatively simple concept. They repulse different images (negative pairs) while attracting the same image\u2019s two views (positive pairs) [1]. Negative pairs are primarily used so that the learned solution (data representation) does not degenerate into a collapsed solution &#8211; i.e. 
where the same vector would represent all images.&nbsp;<\/p>\n<\/div>\n\n<div id=\"\" class=\"element core-paragraph\">\n<p>Typically, one requires many negative samples in each training step. This is because there are, loosely speaking, far fewer ways <em>to be a dog<\/em> than ways <em>to <\/em><strong><em>not <\/em><\/strong><em>be a dog<\/em>. There are other animals, like cats, mice, and birds (which are not dogs), but there are also many other things that are not dogs &#8211; cars, houses, mountains, lakes, etc. To learn a good representation, the method often needs to understand these differences. An example of the pairs that could be used in one round (batch) of training a contrastive learning method is depicted in Figure 2.&nbsp;<\/p>\n<\/div>\n\n<div id=\"\" class=\"element core-image\">\n<figure class=\"wp-block-image is-resized\"><img decoding=\"async\" data-src=\"https:\/\/lh4.googleusercontent.com\/XQ57WdmKHnXWe_FcsF2jzXHGWU81S4a-BqgajzU--E8wOBBENABCLzJG76OaVjpVLWHYlBDVuoWmjbGY62fmyHBL4p46RCPa0I9jFjFiQRes8fXpxcR6CRWBOFTadJ0eMTpvqH9WaT-5j6RTVLA6nyQ\" alt=\"\" style=\"--smush-placeholder-width: 666px; --smush-placeholder-aspect-ratio: 666\/383;width:666px;height:383px\" width=\"666\" height=\"383\" src=\"data:image\/svg+xml;base64,PHN2ZyB3aWR0aD0iMSIgaGVpZ2h0PSIxIiB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciPjwvc3ZnPg==\" class=\"lazyload\" \/><figcaption class=\"wp-element-caption\">Figure 2: Depiction of positive and negative pairs in a contrastive learning (CL) setting. The left shows a self-supervised CL setting, where image labels are not utilized; the right shows supervised CL, where label information is used. 
<a href=\"https:\/\/www.v7labs.com\/blog\/contrastive-learning-guide\" target=\"_blank\" rel=\"noreferrer noopener\">Source<\/a><\/figcaption><\/figure>\n<\/div>\n\n<div id=\"\" class=\"element core-heading\">\n<h3 class=\"wp-block-heading\">Bootstrap Your Own Latent (BYOL)<\/h3>\n<\/div>\n\n<div id=\"\" class=\"element core-paragraph\">\n<p>Bootstrap Your Own Latent (<a href=\"https:\/\/arxiv.org\/abs\/2006.07733\" target=\"_blank\" rel=\"noreferrer noopener\">BYOL<\/a>) is different from typical contrastive learning methods because it does not utilize <em>any<\/em> negative pairs. <strong>This could help researchers working with other data modalities than images because finding the \u201cright\u201d negative examples for speech, sound, text, or other kinds of data could often be challenging<\/strong>. The authors of BYOL worked with images. So to make things easier, when talking about data in this article, we will talk primarily about images.&nbsp;<\/p>\n<\/div>\n\n<div id=\"\" class=\"element core-paragraph\">\n<p>The main idea of BYOL is to use <em>bootstrapping<\/em><strong> <\/strong>(in its idiomatic sense)<strong> <\/strong>&#8211; i.e., to gradually improve upon itself (its representation of images) without some new external guidance \u2013 a <strong>self-learning, continual self-improving process<\/strong>. The hope is that such a bootstrapping approach will help avoid solution collapse. The term bootstrapping originates from the English idiom \u201cto pull oneself by their own bootstraps\u201d \u2013 meaning that a person should be able to improve by themselves without external help. This is also why pulling boots by the bootstraps is the headline image of our blog post (Figure 1).&nbsp;<\/p>\n<\/div>\n\n<div id=\"\" class=\"element core-paragraph\">\n<p>Specifically for BYOL, bootstrapping is done on the projections of image representations. 
This is achieved by utilizing two neural networks &#8211; an <em>online<\/em> network and a <em>target<\/em> network. Figure 3 shows the architecture of BYOL; the BYOL process is also explained in the figure description. The bootstrapping works as follows: the online network tries to predict the projection produced by the target network. <em>Only the online network<\/em> receives gradient updates. The target network\u2019s weights are set to an exponential moving average of the online network\u2019s weights.<\/p>\n<\/div>\n\n<div id=\"\" class=\"element core-image\">\n<figure class=\"wp-block-image size-large is-resized\"><img decoding=\"async\" data-src=\"https:\/\/kinit.sk\/wp-content\/uploads\/2023\/10\/Snimka-obrazovky-2023-10-19-o-14.36.17-1024x351.png\" alt=\"\" class=\"wp-image-29131 lazyload\" style=\"--smush-placeholder-width: 688px; --smush-placeholder-aspect-ratio: 688\/236;width:688px;height:236px\" width=\"688\" height=\"236\" data-srcset=\"https:\/\/kinit.sk\/wp-content\/uploads\/2023\/10\/Snimka-obrazovky-2023-10-19-o-14.36.17-1024x351.png 1024w, https:\/\/kinit.sk\/wp-content\/uploads\/2023\/10\/Snimka-obrazovky-2023-10-19-o-14.36.17-300x103.png 300w, https:\/\/kinit.sk\/wp-content\/uploads\/2023\/10\/Snimka-obrazovky-2023-10-19-o-14.36.17-768x264.png 768w, https:\/\/kinit.sk\/wp-content\/uploads\/2023\/10\/Snimka-obrazovky-2023-10-19-o-14.36.17-1536x527.png 1536w, https:\/\/kinit.sk\/wp-content\/uploads\/2023\/10\/Snimka-obrazovky-2023-10-19-o-14.36.17.png 1906w\" data-sizes=\"(max-width: 688px) 100vw, 688px\" src=\"data:image\/svg+xml;base64,PHN2ZyB3aWR0aD0iMSIgaGVpZ2h0PSIxIiB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciPjwvc3ZnPg==\" \/><figcaption class=\"wp-element-caption\">Figure 3: The architecture of BYOL. One image gets transformed into two views. Afterwards, one view is fed into the online network and the other into the target network. 
Representations and projections are extracted from the respective networks, and the online network then tries to predict the target network\u2019s projection as accurately as possible. The loss is computed and a gradient update of the online network is performed. <a href=\"https:\/\/arxiv.org\/pdf\/2006.07733.pdf\" target=\"_blank\" rel=\"noreferrer noopener\">Source<\/a><\/figcaption><\/figure>\n<\/div>\n\n<div id=\"\" class=\"element core-heading\">\n<h3 class=\"wp-block-heading\">BYOL\u2019s results<\/h3>\n<\/div>\n\n<div id=\"\" class=\"element core-paragraph\">\n<p>BYOL achieves results on par with, if not better than, state-of-the-art (SotA) solutions such as SimCLR (a strong baseline) and MoCo, both of which are unsupervised baselines. On ImageNet with linear evaluation, BYOL achieved the best results of the evaluated unsupervised methods across various parameter counts. The evaluation can be found in Figure 4. Its results come very close to those of the supervised solutions &#8211; a tough baseline to beat, since these solutions have the advantage of utilizing image labels in their learning process, a luxury that BYOL does without.<\/p>\n<\/div>\n\n<div id=\"\" class=\"element core-image\">\n<figure class=\"wp-block-image\"><img decoding=\"async\" data-src=\"https:\/\/lh4.googleusercontent.com\/RfeA0bZW9K-TwMXziQmTsp-HTX_vW_NSleDohwGVafYS0R8bSGYjSRKBIN6BP2rYgmnSIu8C3vc7__QRh11sBDjk4izW_eW2MsmaK1Zc58zbYJkz6VCj_bZqMB4XrdHL9suXfTn7RHt1RIhr3gzQJDg\" alt=\"\" src=\"data:image\/svg+xml;base64,PHN2ZyB3aWR0aD0iMSIgaGVpZ2h0PSIxIiB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciPjwvc3ZnPg==\" class=\"lazyload\" \/><figcaption class=\"wp-element-caption\">Figure 4: The evaluation of BYOL and other baseline SotA methods on the ImageNet dataset with linear evaluation, Top-1 accuracy. 
<a href=\"https:\/\/arxiv.org\/pdf\/2006.07733.pdf\" target=\"_blank\" rel=\"noreferrer noopener\">Source<\/a><\/figcaption><\/figure>\n<\/div>\n\n<div id=\"\" class=\"element core-heading\">\n<h3 class=\"wp-block-heading\">BYOL-Explore: Moving beyond image representations<\/h3>\n<\/div>\n\n<div id=\"\" class=\"element core-paragraph\">\n<p>Enter the world of reinforcement learning (\u201cRL\u201d in short). In such a world, we try to create an autonomous agent that will be able to make decisions on its own based on its environment and the target goal that they have. There are many applications of RL agents in specific environments &#8211;&nbsp; such as AlphaZero for playing chess, AlphaGo for playing Go, or even OpenAI Five for playing Dota 2. For BYOL-Explore, we consider RL agents that act in computer games. For an agent to be effective in such a world, it needs to have some model of the world (i.e. the game) &#8211; a \u201cworld model\u201d. In these complex environments, it is completely infeasible for an agent to explore everything and go to every place. Therefore, the agent needs to learn to intelligently prioritize the areas where it expects to achieve a better reward.&nbsp;<\/p>\n<\/div>\n\n<div id=\"\" class=\"element core-paragraph\">\n<p>BYOL-Explore is one such instantiation of an RL agent. It instantiates its world model by using BYOL. Afterwards, it improves on the standard BYOL model using curiosity-driven exploration of the game. This can also be considered a form of <em>bootstrapping <\/em>&#8211; the term we used to explain the learning process for the simple BYOL model itself. 
In this way, BYOL-Explore gradually trains itself to recognize what is interesting and should be explored &#8211; crucial for avoiding path explosion (the explosion of the possible combinations of environments and actions) and for moving towards the goal.&nbsp;<\/p>\n<\/div>\n\n<div id=\"\" class=\"element core-paragraph\">\n<p>The model is improved in the following way:&nbsp;<\/p>\n<\/div>\n\n<div id=\"\" class=\"element core-list\">\n<ul class=\"wp-block-list\"><div id=\"\" class=\"element core-list-item\">\n<li>The agent asks the model \u201cquestions\u201d about the world &#8211; for example, \u201cwhat can be found behind these doors?\u201d<\/li>\n<\/div>\n\n<div id=\"\" class=\"element core-list-item\">\n<li>The agent notes where the model makes mistakes.<\/li>\n<\/div>\n\n<div id=\"\" class=\"element core-list-item\">\n<li>The agent is rewarded for fooling the model (or rather, for finding where it makes mistakes).<\/li>\n<\/div><\/ul>\n<\/div>\n\n<div id=\"\" class=\"element core-paragraph\">\n<p>By following this training pattern, the agent tries to find places and things in the game that the model does not expect, and it gets rewarded for doing so. This helps the agent build a better world model. If the model is well trained, achieving the agent&#8217;s goal &#8211; e.g., passing to another level or performing tasks that give rewards &#8211; should be much easier.&nbsp;<\/p>\n<\/div>\n\n<div id=\"\" class=\"element core-paragraph\">\n<p>In this <a href=\"https:\/\/drive.google.com\/file\/d\/1yGFSUNIWbVavgfYswFxkQgeifFuCvyEk\/view\" target=\"_blank\" rel=\"noreferrer noopener\">video<\/a>, we can see the BYOL-Explore agent solving a particular in-game task called Throw Across. The model only sees the images (video) shown under the \u201cFirst person\u201d view; the \u201cFollowing\u201d and \u201cTop down\u201d views merely help us better understand what is going on in the game. 
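<\/p>\n<\/div>\n\n<div id=\"\" class=\"element core-paragraph\">\n<p><em>The three steps above boil down to a single idea: the agent\u2019s intrinsic reward is the world model\u2019s prediction error. A hypothetical minimal sketch &#8211; in the real BYOL-Explore, the model predicts latent representations of future observations rather than raw states:<\/em><\/p>\n<\/div>\n\n<div id=\"\" class=\"element core-paragraph\">
\n<pre class=\"wp-block-code\"><code>import numpy as np\n\ndef intrinsic_reward(world_model, state, action, next_state):\n    # The agent is rewarded exactly where the model errs.\n    predicted = world_model(state, action)\n    return float(np.linalg.norm(predicted - next_state))\n\n# Toy world model that predicts that nothing ever changes.\ndef toy_model(state, action):\n    return state\n\nboring = intrinsic_reward(toy_model, np.zeros(3), 0, np.zeros(3))\nsurprising = intrinsic_reward(toy_model, np.zeros(3), 0, np.ones(3))\n# A surprising transition earns a larger exploration bonus than a boring one.<\/code><\/pre>\n<\/div>
\n\n<div id=\"\" class=\"element core-paragraph\">\n<p><em>As the world model is trained on the transitions the agent collects, this bonus shrinks in well-understood places, pushing the agent toward the parts of the game it has not yet figured out.<\/em><\/p>\n<\/div>\n\n<div id=\"\" class=\"element core-paragraph\">\n<p>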
All in all, BYOL-Explore solved tasks that had previously not been solved without \u201chuman help\u201d \u2013 i.e., without path mimicking (human demonstration) \u2013 which is a very nice accomplishment. It solved 5.5 out of 8 tasks on DeepMind\u2019s set of 8 problem tasks, called DM-Hard-8, which require exploration in partially observable environments.\u00a0<\/p>\n<\/div>\n\n<div id=\"\" class=\"element core-heading\">\n<h3 class=\"wp-block-heading\">Can BYOL be successfully applied in the malware domain?<\/h3>\n<\/div>\n\n<div id=\"\" class=\"element core-paragraph\">\n<p>As a bonus, we also decided to briefly discuss whether BYOL can be utilized in our related research. One of the PhD theses here at KInIT is in the field of malware, specifically how to create the best clustering models in this domain. The tools most frequently used for malware clustering have some limitations; if one wants to create a really powerful clustering model, they are insufficient. <strong>Therefore, our mission for the upcoming weeks and months is to improve the state-of-the-art results in malware clustering by utilizing some form of self-supervised learning<\/strong>. More specifically, models like BYOL seem to be ideal candidates, since they allow us to worry less about the negative samples that we would otherwise need.&nbsp;<\/p>\n<\/div>\n\n<div id=\"\" class=\"element core-paragraph\">\n<p>Modern self-supervised learning solutions such as BYOL, however, still require some form of data transformation to work correctly. This is relatively easy in the image domain, where one can utilize rotations, crops, image masks, etc. In the malware domain, finding the right kind of transformations is a much harder problem. 
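<\/p>\n<\/div>\n\n<div id=\"\" class=\"element core-paragraph\">\n<p><em>To make the role of transformations concrete: a self-supervised method typically builds its positive pair with a two-view sampler like the one below (a hypothetical sketch). The augmentation list is exactly the piece that is easy to fill in for images and hard to fill in for malware:<\/em><\/p>\n<\/div>\n\n<div id=\"\" class=\"element core-paragraph\">
\n<pre class=\"wp-block-code\"><code>import random\n\ndef two_views(sample, augmentations, rng=random):\n    # The positive pair: two independently augmented copies of one sample.\n    make_view_a = rng.choice(augmentations)\n    make_view_b = rng.choice(augmentations)\n    return make_view_a(sample), make_view_b(sample)\n\n# For images, the augmentation list is easy to write down:\nimage_augmentations = [\n    lambda pixels: pixels[::-1],               # toy stand-in for a flip\n    lambda pixels: [0.9 * p for p in pixels],  # brightness-like jitter\n]\n\nviews = two_views([0.1, 0.5, 0.9], image_augmentations)<\/code><\/pre>\n<\/div>
\n\n<div id=\"\" class=\"element core-paragraph\">\n<p><em>For malware, the open question is precisely what belongs in that augmentation list.<\/em><\/p>\n<\/div>\n\n<div id=\"\" class=\"element core-paragraph\">\n<p>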
We hypothesize that, given a sufficiently large and rich dataset of samples, it could be possible to train SSL methods on the samples alone, without data transformations. Malware is often found in the wild in many different, yet very similar, variations. These could serve as natural data transformations that one does not need to craft by hand. Alternatively, there are transformations of executable programs that change neither the program\u2019s executability nor its maliciousness; these can also be utilized. However, some experts could deem such transformations relatively trivial, and it is an open question whether they could help an SSL method learn a better representation.<\/p>\n<\/div>\n\n<div id=\"\" class=\"element core-paragraph\">\n<p>All in all, we believe that representations, such as those an autoencoder model can learn, can be enhanced by training the model to specifically put certain samples closer together. <strong>As far as we know, models like BYOL have never before been tried in the malware domain for malware clustering. Therefore, this is a fascinating research avenue for us<\/strong>.<\/p>\n<\/div>\n\n<div id=\"\" class=\"element core-heading\">\n<h3 class=\"wp-block-heading\">Conclusion<\/h3>\n<\/div>\n\n<div id=\"\" class=\"element core-paragraph\">\n<p>We were inspired to write this article to bring readers\u2019 attention to relatively recent developments in deep learning, particularly self-supervised learning. Self-supervised learning is a new way for researchers in academia and in practice to achieve state-of-the-art results with representations learned in a practically fully unsupervised setting.&nbsp;<\/p>\n<\/div>\n\n<div id=\"\" class=\"element core-paragraph\">\n<p>BYOL is an example of a well-engineered method which can achieve near-supervised learning performance on ImageNet. 
This does not mean, however, that it cannot eventually match or even surpass the performance of supervised methods.&nbsp;<\/p>\n<\/div>\n\n<div id=\"\" class=\"element core-paragraph\">\n<p>Data labels help neural networks reach a well-trained model in a relatively short time. However, labels can also limit the potential of the learned representations &#8211; image labels can carry inherent bias or, due to errors, can even be flat-out wrong in some cases.<\/p>\n<\/div>\n\n<div id=\"\" class=\"element core-heading\">\n<h3 class=\"wp-block-heading\">Michal Valko &#8211; a short bio<\/h3>\n<\/div>\n\n<div id=\"\" class=\"element core-paragraph\">\n<p>Michal is an accomplished researcher in the field of AI who currently works at Google DeepMind in Paris. He specializes in learning representations that require little to no human supervision, including deep reinforcement learning, bandit algorithms, and self-supervised learning. Besides being a successful AI researcher, he is also a Slovak with roots in Ko\u0161ice. 
We imagine that organizing and participating in EEML 2023 in Slovakia must have been a small dream come true for him.<\/p>\n<\/div>\n\n<div id=\"\" class=\"element core-image\">\n<figure class=\"wp-block-image is-resized\"><img decoding=\"async\" data-src=\"https:\/\/lh4.googleusercontent.com\/UBOF2kzKE66nE65APErNcVk-squk-xsm5wBgZjBZO4UD1RpedWA5Esn-jIL0eUrQV_CbMYP5N2BFVgJAFktfRSaInJMjh-DkupKDjAF_Ou2dq4zX-vR8lUWi1ZsRqVqv5sEWJxwxA1TEiwr00hgj_KM\" alt=\"\" style=\"--smush-placeholder-width: 643px; --smush-placeholder-aspect-ratio: 643\/556;width:643px;height:556px\" width=\"643\" height=\"556\" src=\"data:image\/svg+xml;base64,PHN2ZyB3aWR0aD0iMSIgaGVpZ2h0PSIxIiB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciPjwvc3ZnPg==\" class=\"lazyload\" \/><figcaption class=\"wp-element-caption\">Figure 5: Michal Valko at EEML 2023 in Ko\u0161ice.<\/figcaption><\/figure>\n<\/div>\n\n<div id=\"\" class=\"element core-heading\">\n<h3 class=\"wp-block-heading\">References:<\/h3>\n<\/div>\n\n<div id=\"\" class=\"element core-paragraph\">\n<p>[1] Chen, X. and He, K., 2021. Exploring simple siamese representation learning. In Proceedings of the IEEE\/CVF conference on computer vision and pattern recognition (pp. 15750-15758).&nbsp;<\/p>\n<\/div>\n\n<div id=\"\" class=\"element core-paragraph\">\n<p>[2] Grill, J.B., Strub, F., Altch\u00e9, F., Tallec, C., Richemond, P., Buchatskaya, E., Doersch, C., Avila Pires, B., Guo, Z., Gheshlaghi Azar, M. and Piot, B., 2020. Bootstrap your own latent &#8211; a new approach to self-supervised learning. Advances in neural information processing systems, 33, pp.21271-21284.<\/p>\n<\/div>","protected":false},"excerpt":{"rendered":"<p>This blog post is a part of the EEML Summer School 2023 series. 
It\u2019s a series of impressions that we, as doctoral students, got from attending the EEML Summer School&#8230;<\/p>\n","protected":false},"author":26,"featured_media":29134,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"_acf_changed":false,"footnotes":""},"categories":[88,142],"tags":[402],"class_list":["post-29141","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-pop-science-sk","category-2023-sk","tag-machine-learning-sk"],"acf":[],"yoast_head":"<!-- This site is optimized with the Yoast SEO plugin v27.5 - https:\/\/yoast.com\/product\/yoast-seo-wordpress\/ -->\n<title>Modern Unsupervised Learning - Can We Bootstrap Our Own Latent?\u00a0 - KInIT<\/title>\n<meta name=\"description\" content=\"This blog post is a part of the EEML Summer School 2023 series. It\u2019s a series of impressions that we, as doctoral students, got from attending the EEML Summer School 2023 in Ko\u0161ice, Slovakia.\" \/>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/kinit.sk\/sk\/modern-unsupervised-learning-can-we-bootstrap-our-own-latent\/\" \/>\n<meta property=\"og:locale\" content=\"sk_SK\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"Modern Unsupervised Learning - Can We Bootstrap Our Own Latent?\u00a0 - KInIT\" \/>\n<meta property=\"og:description\" content=\"This blog post is a part of the EEML Summer School 2023 series. 
It\u2019s a series of impressions that we, as doctoral students, got from attending the EEML Summer School 2023 in Ko\u0161ice, Slovakia.\" \/>\n<meta property=\"og:url\" content=\"https:\/\/kinit.sk\/sk\/modern-unsupervised-learning-can-we-bootstrap-our-own-latent\/\" \/>\n<meta property=\"og:site_name\" content=\"KInIT\" \/>\n<meta property=\"article:published_time\" content=\"2023-10-19T12:48:28+00:00\" \/>\n<meta property=\"article:modified_time\" content=\"2023-10-19T13:48:08+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/kinit.sk\/wp-content\/uploads\/2023\/10\/202310_web_news_EEML_articles_3_feature.png\" \/>\n\t<meta property=\"og:image:width\" content=\"1201\" \/>\n\t<meta property=\"og:image:height\" content=\"629\" \/>\n\t<meta property=\"og:image:type\" content=\"image\/png\" \/>\n<meta name=\"author\" content=\"Marianna Palkova\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:creator\" content=\"@kinit\" \/>\n<meta name=\"twitter:site\" content=\"@kinit\" \/>\n<meta name=\"twitter:label1\" content=\"Autor\" \/>\n\t<meta name=\"twitter:data1\" content=\"Marianna Palkova\" \/>\n\t<meta name=\"twitter:label2\" content=\"Predpokladan\u00fd \u010das \u010d\u00edtania\" \/>\n\t<meta name=\"twitter:data2\" content=\"10 min\u00fat\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\\\/\\\/schema.org\",\"@graph\":[{\"@type\":\"Article\",\"@id\":\"https:\\\/\\\/kinit.sk\\\/sk\\\/modern-unsupervised-learning-can-we-bootstrap-our-own-latent\\\/#article\",\"isPartOf\":{\"@id\":\"https:\\\/\\\/kinit.sk\\\/sk\\\/modern-unsupervised-learning-can-we-bootstrap-our-own-latent\\\/\"},\"author\":{\"name\":\"Marianna Palkova\",\"@id\":\"https:\\\/\\\/kinit.sk\\\/#\\\/schema\\\/person\\\/8b175aaaf3267b5bbbbb97e4a6db8cea\"},\"headline\":\"Modern Unsupervised Learning &#8211; Can We Bootstrap Our Own 
Latent?\u00a0\",\"datePublished\":\"2023-10-19T12:48:28+00:00\",\"dateModified\":\"2023-10-19T13:48:08+00:00\",\"mainEntityOfPage\":{\"@id\":\"https:\\\/\\\/kinit.sk\\\/sk\\\/modern-unsupervised-learning-can-we-bootstrap-our-own-latent\\\/\"},\"wordCount\":2042,\"image\":{\"@id\":\"https:\\\/\\\/kinit.sk\\\/sk\\\/modern-unsupervised-learning-can-we-bootstrap-our-own-latent\\\/#primaryimage\"},\"thumbnailUrl\":\"https:\\\/\\\/kinit.sk\\\/wp-content\\\/uploads\\\/2023\\\/10\\\/202310_web_news_EEML_articles_3_feature.png\",\"keywords\":[\"machine learning\"],\"articleSection\":[\"Pop science\",\"2023\"],\"inLanguage\":\"sk-SK\"},{\"@type\":\"WebPage\",\"@id\":\"https:\\\/\\\/kinit.sk\\\/sk\\\/modern-unsupervised-learning-can-we-bootstrap-our-own-latent\\\/\",\"url\":\"https:\\\/\\\/kinit.sk\\\/sk\\\/modern-unsupervised-learning-can-we-bootstrap-our-own-latent\\\/\",\"name\":\"Modern Unsupervised Learning - Can We Bootstrap Our Own Latent?\u00a0 - KInIT\",\"isPartOf\":{\"@id\":\"https:\\\/\\\/kinit.sk\\\/#website\"},\"primaryImageOfPage\":{\"@id\":\"https:\\\/\\\/kinit.sk\\\/sk\\\/modern-unsupervised-learning-can-we-bootstrap-our-own-latent\\\/#primaryimage\"},\"image\":{\"@id\":\"https:\\\/\\\/kinit.sk\\\/sk\\\/modern-unsupervised-learning-can-we-bootstrap-our-own-latent\\\/#primaryimage\"},\"thumbnailUrl\":\"https:\\\/\\\/kinit.sk\\\/wp-content\\\/uploads\\\/2023\\\/10\\\/202310_web_news_EEML_articles_3_feature.png\",\"datePublished\":\"2023-10-19T12:48:28+00:00\",\"dateModified\":\"2023-10-19T13:48:08+00:00\",\"author\":{\"@id\":\"https:\\\/\\\/kinit.sk\\\/#\\\/schema\\\/person\\\/8b175aaaf3267b5bbbbb97e4a6db8cea\"},\"description\":\"This blog post is a part of the EEML Summer School 2023 series. 
It\u2019s a series of impressions that we, as doctoral students, got from attending the EEML Summer School 2023 in Ko\u0161ice, Slovakia.\",\"breadcrumb\":{\"@id\":\"https:\\\/\\\/kinit.sk\\\/sk\\\/modern-unsupervised-learning-can-we-bootstrap-our-own-latent\\\/#breadcrumb\"},\"inLanguage\":\"sk-SK\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\\\/\\\/kinit.sk\\\/sk\\\/modern-unsupervised-learning-can-we-bootstrap-our-own-latent\\\/\"]}]},{\"@type\":\"ImageObject\",\"inLanguage\":\"sk-SK\",\"@id\":\"https:\\\/\\\/kinit.sk\\\/sk\\\/modern-unsupervised-learning-can-we-bootstrap-our-own-latent\\\/#primaryimage\",\"url\":\"https:\\\/\\\/kinit.sk\\\/wp-content\\\/uploads\\\/2023\\\/10\\\/202310_web_news_EEML_articles_3_feature.png\",\"contentUrl\":\"https:\\\/\\\/kinit.sk\\\/wp-content\\\/uploads\\\/2023\\\/10\\\/202310_web_news_EEML_articles_3_feature.png\",\"width\":1201,\"height\":629},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\\\/\\\/kinit.sk\\\/sk\\\/modern-unsupervised-learning-can-we-bootstrap-our-own-latent\\\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Home\",\"item\":\"https:\\\/\\\/kinit.sk\\\/sk\\\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"Pop science\",\"item\":\"https:\\\/\\\/kinit.sk\\\/category\\\/pop-science\\\/\"},{\"@type\":\"ListItem\",\"position\":3,\"name\":\"Modern Unsupervised Learning &#8211; Can We Bootstrap Our Own Latent?\u00a0\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\\\/\\\/kinit.sk\\\/#website\",\"url\":\"https:\\\/\\\/kinit.sk\\\/\",\"name\":\"KInIT\",\"description\":\"Vyu\u017e\u00edvame v\u00fdskum pre \u013eud\u00ed a 
priemysel\",\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\\\/\\\/kinit.sk\\\/?s={search_term_string}\"},\"query-input\":{\"@type\":\"PropertyValueSpecification\",\"valueRequired\":true,\"valueName\":\"search_term_string\"}}],\"inLanguage\":\"sk-SK\"},{\"@type\":\"Person\",\"@id\":\"https:\\\/\\\/kinit.sk\\\/#\\\/schema\\\/person\\\/8b175aaaf3267b5bbbbb97e4a6db8cea\",\"name\":\"Marianna Palkova\"}]}<\/script>\n<!-- \/ Yoast SEO plugin. -->","yoast_head_json":{"title":"Modern Unsupervised Learning - Can We Bootstrap Our Own Latent?\u00a0 - KInIT","description":"This blog post is a part of the EEML Summer School 2023 series. It\u2019s a series of impressions that we, as doctoral students, got from attending the EEML Summer School 2023 in Ko\u0161ice, Slovakia.","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/kinit.sk\/sk\/modern-unsupervised-learning-can-we-bootstrap-our-own-latent\/","og_locale":"sk_SK","og_type":"article","og_title":"Modern Unsupervised Learning - Can We Bootstrap Our Own Latent?\u00a0 - KInIT","og_description":"This blog post is a part of the EEML Summer School 2023 series. 
It\u2019s a series of impressions that we, as doctoral students, got from attending the EEML Summer School 2023 in Ko\u0161ice, Slovakia.","og_url":"https:\/\/kinit.sk\/sk\/modern-unsupervised-learning-can-we-bootstrap-our-own-latent\/","og_site_name":"KInIT","article_published_time":"2023-10-19T12:48:28+00:00","article_modified_time":"2023-10-19T13:48:08+00:00","og_image":[{"width":1201,"height":629,"url":"https:\/\/kinit.sk\/wp-content\/uploads\/2023\/10\/202310_web_news_EEML_articles_3_feature.png","type":"image\/png"}],"author":"Marianna Palkova","twitter_card":"summary_large_image","twitter_creator":"@kinit","twitter_site":"@kinit","twitter_misc":{"Autor":"Marianna Palkova","Predpokladan\u00fd \u010das \u010d\u00edtania":"10 min\u00fat"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"Article","@id":"https:\/\/kinit.sk\/sk\/modern-unsupervised-learning-can-we-bootstrap-our-own-latent\/#article","isPartOf":{"@id":"https:\/\/kinit.sk\/sk\/modern-unsupervised-learning-can-we-bootstrap-our-own-latent\/"},"author":{"name":"Marianna Palkova","@id":"https:\/\/kinit.sk\/#\/schema\/person\/8b175aaaf3267b5bbbbb97e4a6db8cea"},"headline":"Modern Unsupervised Learning &#8211; Can We Bootstrap Our Own Latent?\u00a0","datePublished":"2023-10-19T12:48:28+00:00","dateModified":"2023-10-19T13:48:08+00:00","mainEntityOfPage":{"@id":"https:\/\/kinit.sk\/sk\/modern-unsupervised-learning-can-we-bootstrap-our-own-latent\/"},"wordCount":2042,"image":{"@id":"https:\/\/kinit.sk\/sk\/modern-unsupervised-learning-can-we-bootstrap-our-own-latent\/#primaryimage"},"thumbnailUrl":"https:\/\/kinit.sk\/wp-content\/uploads\/2023\/10\/202310_web_news_EEML_articles_3_feature.png","keywords":["machine learning"],"articleSection":["Pop 
science","2023"],"inLanguage":"sk-SK"},{"@type":"WebPage","@id":"https:\/\/kinit.sk\/sk\/modern-unsupervised-learning-can-we-bootstrap-our-own-latent\/","url":"https:\/\/kinit.sk\/sk\/modern-unsupervised-learning-can-we-bootstrap-our-own-latent\/","name":"Modern Unsupervised Learning - Can We Bootstrap Our Own Latent?\u00a0 - KInIT","isPartOf":{"@id":"https:\/\/kinit.sk\/#website"},"primaryImageOfPage":{"@id":"https:\/\/kinit.sk\/sk\/modern-unsupervised-learning-can-we-bootstrap-our-own-latent\/#primaryimage"},"image":{"@id":"https:\/\/kinit.sk\/sk\/modern-unsupervised-learning-can-we-bootstrap-our-own-latent\/#primaryimage"},"thumbnailUrl":"https:\/\/kinit.sk\/wp-content\/uploads\/2023\/10\/202310_web_news_EEML_articles_3_feature.png","datePublished":"2023-10-19T12:48:28+00:00","dateModified":"2023-10-19T13:48:08+00:00","author":{"@id":"https:\/\/kinit.sk\/#\/schema\/person\/8b175aaaf3267b5bbbbb97e4a6db8cea"},"description":"This blog post is a part of the EEML Summer School 2023 series. 
It\u2019s a series of impressions that we, as doctoral students, got from attending the EEML Summer School 2023 in Ko\u0161ice, Slovakia.","breadcrumb":{"@id":"https:\/\/kinit.sk\/sk\/modern-unsupervised-learning-can-we-bootstrap-our-own-latent\/#breadcrumb"},"inLanguage":"sk-SK","potentialAction":[{"@type":"ReadAction","target":["https:\/\/kinit.sk\/sk\/modern-unsupervised-learning-can-we-bootstrap-our-own-latent\/"]}]},{"@type":"ImageObject","inLanguage":"sk-SK","@id":"https:\/\/kinit.sk\/sk\/modern-unsupervised-learning-can-we-bootstrap-our-own-latent\/#primaryimage","url":"https:\/\/kinit.sk\/wp-content\/uploads\/2023\/10\/202310_web_news_EEML_articles_3_feature.png","contentUrl":"https:\/\/kinit.sk\/wp-content\/uploads\/2023\/10\/202310_web_news_EEML_articles_3_feature.png","width":1201,"height":629},{"@type":"BreadcrumbList","@id":"https:\/\/kinit.sk\/sk\/modern-unsupervised-learning-can-we-bootstrap-our-own-latent\/#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Home","item":"https:\/\/kinit.sk\/sk\/"},{"@type":"ListItem","position":2,"name":"Pop science","item":"https:\/\/kinit.sk\/category\/pop-science\/"},{"@type":"ListItem","position":3,"name":"Modern Unsupervised Learning &#8211; Can We Bootstrap Our Own Latent?\u00a0"}]},{"@type":"WebSite","@id":"https:\/\/kinit.sk\/#website","url":"https:\/\/kinit.sk\/","name":"KInIT","description":"Vyu\u017e\u00edvame v\u00fdskum pre \u013eud\u00ed a priemysel","potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/kinit.sk\/?s={search_term_string}"},"query-input":{"@type":"PropertyValueSpecification","valueRequired":true,"valueName":"search_term_string"}}],"inLanguage":"sk-SK"},{"@type":"Person","@id":"https:\/\/kinit.sk\/#\/schema\/person\/8b175aaaf3267b5bbbbb97e4a6db8cea","name":"Marianna 
Palkova"}]}},"_links":{"self":[{"href":"https:\/\/kinit.sk\/sk\/wp-json\/wp\/v2\/posts\/29141","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/kinit.sk\/sk\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/kinit.sk\/sk\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/kinit.sk\/sk\/wp-json\/wp\/v2\/users\/26"}],"replies":[{"embeddable":true,"href":"https:\/\/kinit.sk\/sk\/wp-json\/wp\/v2\/comments?post=29141"}],"version-history":[{"count":3,"href":"https:\/\/kinit.sk\/sk\/wp-json\/wp\/v2\/posts\/29141\/revisions"}],"predecessor-version":[{"id":29150,"href":"https:\/\/kinit.sk\/sk\/wp-json\/wp\/v2\/posts\/29141\/revisions\/29150"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/kinit.sk\/sk\/wp-json\/wp\/v2\/media\/29134"}],"wp:attachment":[{"href":"https:\/\/kinit.sk\/sk\/wp-json\/wp\/v2\/media?parent=29141"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/kinit.sk\/sk\/wp-json\/wp\/v2\/categories?post=29141"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/kinit.sk\/sk\/wp-json\/wp\/v2\/tags?post=29141"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}