Why is the internet a good servant and a bad master?

The development of the internet as a global computer network, and the subsequent emergence of the web for information sharing, are inventions comparable in importance to the printing press.

Both of these technologies have greatly contributed to the accessibility of knowledge and information. No longer do we have to hunt for books, endure long waits for library resources, or painstakingly sift through volumes for information. Today, all it takes is a simple query typed into our pocket-sized devices. This has effectively removed another barrier to accessing information.

Moreover, technology has facilitated connections with friends, family, and even strangers, turning our social networks into virtual communities.

In recent years, artificial intelligence has also joined the ranks of technologies that have revolutionized our lives. Why now? Primarily due to the continual advancement of computational power. Computers today can perform operations at a scale that allows us to effectively create neural models — the core of modern artificial intelligence.

The artificial intelligence we have today remains far from its depiction in films. It lacks a human form and consciousness (and it won’t have consciousness for some time). Nevertheless, its applications continue to impress us with the progress it makes and the complex problems it helps us to solve.

Artificial intelligence, particularly when combined with the internet, can offer us much, but it can also pose some risks. It is crucial for us to use new technologies wisely and be aware of the risks associated with them.

But let’s take it step by step.

What is artificial intelligence?

Artificial intelligence refers to computer programs that have been developed through a process known as training and can solve difficult tasks: they can process inputs they have never encountered before and were not explicitly trained on. Training produces a computer program (more precisely: a model), but it does not follow the traditional programming approach of writing program code, which has been the predominant method of creating computer programs until now.

The model represents a fundamental element of artificial intelligence. Essentially, it’s a trainable entity, capable of performing specific tasks. One of the most powerful models in use today is the artificial neural network, drawing inspiration from the human brain. The human brain consists of billions of neurons and trillions of connections between them. This architecture is also replicated by artificial neural networks, although their neurons are not living cells but rather simple mathematical functions.

In essence, an artificial neural network is a complex network of numbers with the ability to store vast amounts of information and solve various tasks, such as answering whether there’s a dog in a displayed image or engaging in conversations with users on a wide array of topics.

How does artificial intelligence work?

The life cycle of artificial intelligence consists of two fundamental stages: the already mentioned training and the subsequent operation.

Training, or the learning phase of artificial intelligence (more precisely: the artificial intelligence model), can take place through various approaches. One such approach involves presenting the model with diverse examples of a task we want the model to learn to solve.

Consider this example: We aim to teach the model to recognize whether there is a dog in an image. During the training process, we expose it to hundreds, even thousands of dog images, alongside thousands of other images featuring creatures or objects that are not dogs. The more examples — both positive and negative — we provide, the better the model we can train.

During the training phase, the model adapts its internal structure, i.e., those billions of numbers, to ensure accurate responses. The precise learning mechanism is determined by the type (or architecture) of the neural network designed for that specific task. We know many types and architectures, but that’s a topic for another discussion.

Operation of artificial intelligence is the use of the trained model. In our scenario, we present the model with a new image it has not encountered before. Based on its internal numerical structure, the model determines whether the image contains a dog or not. This task can be quite challenging sometimes.
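To make the two stages concrete, here is a minimal, illustrative sketch in Python: a single artificial “neuron” (a logistic-regression unit, far simpler than the real networks described above) trained to separate two clusters of points standing in for dog and not-dog images. All of the data, features, and training settings are invented for illustration.

```python
import numpy as np

# Toy training data: each "image" is reduced to two numeric features.
# Points around (1, 1) stand in for "dog", points around (-1, -1) for "not dog".
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(1.0, 0.3, (50, 2)),
               rng.normal(-1.0, 0.3, (50, 2))])
y = np.array([1] * 50 + [0] * 50)  # 1 = dog, 0 = not dog

# A single artificial neuron: a weighted sum followed by a sigmoid function.
w = np.zeros(2)
b = 0.0

def predict(X, w, b):
    return 1.0 / (1.0 + np.exp(-(X @ w + b)))

# Training: repeatedly nudge the internal numbers (w, b) to reduce the error.
for _ in range(500):
    p = predict(X, w, b)
    w -= 0.5 * (X.T @ (p - y)) / len(y)
    b -= 0.5 * np.mean(p - y)

# Operation: classify a point the model has never seen before.
new_point = np.array([[0.9, 1.1]])
print(predict(new_point, w, b))  # close to 1.0 means "dog"
```

After training, the neuron’s numbers (w, b) encode everything it has learned; operation is simply plugging a new input into the same formula, which mirrors the training/operation split described above.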

Artificial intelligence working with images is highly beneficial in various domains. For instance, in the field of medicine, it can be used for identification of diseases from medical images. Similarly, in transportation, it facilitates the detection of objects for autonomous vehicles, which operate without a driver.

Another application of artificial intelligence lies in its text-processing capabilities. Consider, for instance, the spam filter in your email application, which distinguishes between credible and potentially harmful sources of incoming mail. An even more illustrative example of text-based artificial intelligence is an intelligent chatbot, such as the well-known ChatGPT. These systems are based on so-called large language models.

Much like the models employed for image processing, large language models operate as artificial neural networks. The process of training these models is intriguing: the examples provided to the model are essentially sentences. Billions of sentences. The same principle holds true here: the more examples, the better. We present these sentences to the model gradually, hiding some words from those sentences and training it to guess the omitted words. Using the immense amount of data it holds, the model encodes word sequences in a manner that enables it to generate them in a meaningful order. Therefore, when completing the sentence “____ flows through Bratislava”, it won’t suggest “bear” or “certificate”, but words such as “river” or “Danube”. The model will choose words that closely align with its learned examples.

The functionality of artificial intelligence is rooted in statistics. Its outputs do not necessarily give the correct answer, but rather the most probable one.
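The masked-word guessing and the “most probable answer” idea can be illustrated with plain counting statistics. The sketch below is not a neural network, and its five-sentence corpus is invented; a real large language model does something analogous over billions of sentences, with learned numbers instead of raw counts.

```python
from collections import Counter

# A toy corpus standing in for "billions of sentences".
corpus = [
    "the danube flows through bratislava",
    "the danube flows through vienna",
    "a river flows through the city",
    "the river flows through budapest",
    "a bear sleeps in the forest",
]

# Count which words appear just before "flows through" — a crude stand-in
# for the statistics a language model learns about word sequences.
candidates = Counter()
for sentence in corpus:
    words = sentence.split()
    for i in range(1, len(words) - 1):
        if words[i] == "flows" and words[i + 1] == "through":
            candidates[words[i - 1]] += 1

# The "guess" is simply the most probable word observed in that slot.
print(candidates.most_common(2))  # "danube" and "river" dominate; "bear" never appears
```

Even this crude counter fills the blank with “danube” or “river” rather than “bear”, for the same statistical reason: those are the words most frequently observed in that position.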

You can try an older, yet purely Slovak large language model here: https://slovakbert.kinit.sk/.

Apart from image and text processing, artificial intelligence finds application in handling various other types of data. Notably, it has been successful in providing various forms of recommendations (products, services, movies, or social network content).

This brings us to understanding the relationship between the internet and artificial intelligence. Their fusion has given rise to a powerful and highly pervasive super-technology. Consequently, it can have a significant impact on us. This impact can be positive, but it also has its downsides. Let’s take a look at three instances of the combination of artificial intelligence and the internet, along with key characteristics that deserve our attention.

Recommender Systems

We are all familiar with social networks like Facebook, TikTok, or YouTube. We enjoy them for their ability to swiftly provide us with captivating content with minimal effort. Content recommendation engines, also known as recommender systems, are at the core of social networks. These engines track our behavior and the activities of our friends to curate content that suits our preferences.

At first glance, this seems fine. However, it’s crucial to recognize that every social network is a commercial product designed to generate profit. Current revenue models aim to maximize user engagement to boost advertising and paid content. To achieve this, recommendation engines often employ artificial intelligence techniques not only to suggest content of interest but also to keep us engaged for as long as possible.
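As an illustration of the core idea (not of any real platform’s algorithm), here is a toy recommender sketch: it compares our viewing history with other users’ histories and scores unseen videos by how much similar users engaged with them. All of the data and the simple overlap-based similarity measure are invented for the example.

```python
import numpy as np

# Toy viewing history: rows = users, columns = videos (1 = watched/liked).
# The last row is "us"; the engine compares our history to everyone else's.
history = np.array([
    [1, 1, 0, 1, 0],  # user A
    [1, 1, 1, 0, 0],  # user B
    [0, 0, 1, 0, 1],  # user C
    [1, 1, 0, 0, 0],  # us
])
us = history[-1]
others = history[:-1]

# Similarity of our taste to each other user: count of videos in common.
similarity = others @ us

# Score each video by how much similar users liked it.
scores = similarity @ others
scores[us == 1] = -1  # never re-recommend what we've already watched

recommended = int(np.argmax(scores))
print(recommended)  # index of the video the "engine" suggests next
```

Real engines use far richer signals (watch time, reactions, friends’ activity) and learned models, but the principle is the same: users who behaved like you in the past predict what will keep you engaged next.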

Social networks leverage natural human traits and employ sophisticated tricks to take advantage of them. They often prioritize controversial, emotionally charged content and hate speech, as these tend to attract more attention. Unfortunately, this can lead to a distorted perception of reality, portraying the world in a more negative light than it truly is.

These platforms incorporate instant “reward” mechanisms, such as reactions and likes, providing immediate gratification that can potentially foster addiction. Alternatively, they recommend agreeable content that is likely to resonate with our beliefs, thus trapping us in information bubbles that shape our opinions and influence us without our awareness.

By the way, at the Kempelen Institute of Intelligent Technologies, we also focus on researching information bubbles.

To mitigate the adverse effects of these technologies, it is important to understand their underlying mechanisms. It’s neither necessary nor beneficial to completely stop using them. However, it’s vital to limit the time we spend on these platforms and actively engage in social interactions beyond the digital realm.

Intelligent Chatbots

Less than a year ago, the introduction of a new generation of chatbots led by ChatGPT created a great sensation. These chatbots are no longer pre-programmed for user interaction based on a few predefined scenarios (the dominant paradigm before). They rely on large language models (yes, the ones mentioned earlier), giving them nearly unlimited communication capabilities. However, their reliability and credibility are somewhat questionable.

Today’s intelligent chatbots offer incredible advantages: they enable effortless communication, allowing us to finally ask questions in human language. They can respond considering the context of the conversation or our profile, which they have built over time. They serve as a prompt and comprehensive source of information. One of their greatest advantages lies in their ability to educate us without getting tired; they are always eager to respond repeatedly.

However, we must acknowledge that this technology remains relatively new, experimental, and unexplored. While they may give the impression of intelligence, today’s chatbots are not truly intelligent (at least not yet). They contain various biases and prejudices that are unfair towards certain groups of people and can exert a negative influence, for instance by distorting historical facts or expressing negativity about specific minorities. Language models are only as good as the data used for their training. Given the abundance of content on the internet, it is impossible for the creators of these models to have complete control over everything.

Additionally, it is important to note that language models can sometimes generate fabricated information. When they do not know the answer, they come up with nonsense. This phenomenon is referred to as “hallucinating.” This characteristic arises from the statistical nature of the artificial intelligence models integrated into chatbots. It is a phenomenon that we, researchers, do not yet fully understand. A lot of research is ahead of us.

Chatbots represent a technology with enormous potential. However, it is crucial not to overlook their limitations. To avoid falling into their traps, we must be mindful of their constraints. Although chatbots might seem human-like and trustworthy, they lack a comprehensive understanding of the world. Let’s think critically to avoid being deceived by them.

Deepfakes (and other information disorders)

Deepfakes, hoaxes, fake news, and disinformation are collectively referred to as information disorders. These issues have gained significant attention in recent times, with modern technologies contributing to their widespread propagation. The low cost of creating fabricated content, coupled with its high credibility and potentially far-reaching impact, is a cause for major concern. It’s important to note that artificial intelligence itself does not generate these disorders. It is the misuse of technology by (human) individuals seeking to cause harm or gain an unfair advantage in their lives, including garnering attention, followers, or influence.

This is not entirely shocking or novel. Smart communication strategies have played pivotal roles in many historical moments. However, the nature of communication in today’s world has various characteristics, partly influenced by modern technologies, to which we are not (evolutionarily) adapted.

Information spreads extremely quickly and can reach a vast audience in no time. Given the complexity of topics discussed today, immediate comprehension is not always possible. Furthermore, in communication, there are various cognitive (biological) phenomena (such as confirmation bias, unconscious biases, etc.) that hinder us from always processing information in a rational and critical manner. All of these aspects lay the foundation for communication to be manipulated for large-scale disinformation campaigns, particularly when spread through the internet.

Unfortunately, there is no simple solution to these disorders. Addressing these challenges lies in the honest work of journalists, media professionals, and scientists. We have to support fact-checking initiatives and streamline the work of fact-checkers. Professionals who are concerned with information disorders come together within collaborative media observatories, such as the Central European Digital Media Observatory (CEDMO). We, as internet users and ultimate consumers of information, must use and pursue our most potent tool: critical thinking. False information and other information disorders are likely to persist for the foreseeable future, so we must learn to live with them.

Examples of information disorders: False photograph of Pope Francis published on the social network Reddit (left) and visualization of exposed misinformation about the coronavirus (right).

The article was written for the Children’s Comenius University and was also published in its 2023 yearbook.