E-tika podcast: Life in disinformation bubbles
The goal of the E-tika podcast series is to bring the social and ethical dimensions of digital technologies into the spotlight and to discuss them with guests from various relevant fields. The second season of the E-tika podcast is now over, which makes it the perfect time to reflect on the topics we discussed.
The second season kicked off with a discussion on filter bubbles, echo chambers, disinformation and the role of artificial intelligence. The hosts Juraj Podroužek and Tomáš Gál invited Róbert Móro, a member of a research group at the Kempelen Institute of Intelligent Technologies (KInIT) that studies web user behaviour and user modelling.
In recent years, terms such as disinformation, hoaxes, fake news, echo chambers and filter bubbles have been receiving a lot of attention, both in mainstream use and in scientific research. Despite the apparent chaos in terminology, these terms loosely capture the same phenomenon: we are often kept in “bubbles” in which our already established views of the world are further reinforced by the algorithms of social media and search engines.
To some extent, such behaviour is nothing new, as heuristics and cognitive biases help us navigate the everyday load of information. What makes filter bubbles, and their potential to ignite the spread of disinformation, dangerous is the increasing accessibility and reach of online platforms and the effectiveness of their algorithms.
Producing and sharing a thoroughly verified piece of information is much more time-consuming and laborious than generating and spreading disinformation. Disinformation and filter bubbles are therefore not only a problem of the kind of content being shared, but also of the extent of its reach. This forces us to consider their moral and ethical implications, especially in relation to freedom of speech and the values of democracy, which depend on the ability of citizens to make informed, autonomous decisions.
The existence and emergence of filter bubbles, and the mechanisms through which they can promote the spread of disinformation, are generally well researched. At KInIT, a team of researchers recently published a study in which they observed the emergence of filter bubbles on YouTube. What makes this research unique is that it also tried to shed light on what it takes to “burst the bubble”.
In other words, what happens if we start watching videos that debunk the claims made in disinformation videos? On top of this, various disinformation topics were compared to better understand how the algorithm behaves. In some cases, YouTube appears to push back more actively against the spread of disinformation; other topics, such as 9/11, act instead as gateways into deep filter bubbles.
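To make the audit setup more concrete, here is a minimal sketch, in Python, of how such a “sockpuppet”-style audit might be structured: a simulated user first watches seed videos on a disinformation topic and then watches debunking videos, while the share of disinformation among the recommended videos is tracked at each step. The class, the get_recommendations callback and the label values are illustrative assumptions, not KInIT's actual tooling.

```python
# Illustrative sketch of a sockpuppet-style audit loop; the browser-automation
# details and label sources are placeholders, not the actual KInIT tooling.
from dataclasses import dataclass, field

@dataclass
class AuditBot:
    """Simulated user that watches videos and logs what gets recommended."""
    watch_history: list = field(default_factory=list)
    recommendation_log: list = field(default_factory=list)

    def watch(self, video_id: str, get_recommendations) -> None:
        # get_recommendations(history) stands in for collecting the real
        # recommendation list shown after playing `video_id`.
        self.watch_history.append(video_id)
        self.recommendation_log.append(get_recommendations(self.watch_history))

def disinfo_share(recommendations: list, labels: dict) -> float:
    """Fraction of recommended videos annotated as promoting disinformation."""
    labelled = [labels.get(v, "unknown") for v in recommendations]
    return labelled.count("promoting") / max(len(labelled), 1)

def run_audit(seed_videos, debunking_videos, get_recommendations, labels):
    bot = AuditBot()
    for v in seed_videos:          # phase 1: build up the bubble
        bot.watch(v, get_recommendations)
    before = [disinfo_share(r, labels) for r in bot.recommendation_log]

    for v in debunking_videos:     # phase 2: try to "burst" it
        bot.watch(v, get_recommendations)
    after = [disinfo_share(r, labels) for r in bot.recommendation_log[len(seed_videos):]]
    return before, after
```

Comparing the two series of shares is one simple way to see whether watching debunking content actually changes what the recommender serves next.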
The question then remains: How to effectively monitor the emergence of filter bubbles and how to assess whether the platforms are delivering what they promised – to actively fight against the spread of disinformation? The answer seems to lie in continuous auditing.
This is, however, not without problems of its own. Firstly, running such an audit is incredibly laborious, since it requires manual annotation of videos. Even automating parts of the process does not remove the issue, as human annotators may still introduce further biases into the audit.
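One common way to keep an eye on such annotator-introduced bias is to have several people label the same videos and measure how much they agree. The sketch below computes Cohen's kappa for two hypothetical annotators; the label names and example data are made up for illustration.

```python
from collections import Counter

def cohen_kappa(labels_a: list, labels_b: list) -> float:
    """Cohen's kappa: agreement between two annotators, corrected for chance."""
    assert len(labels_a) == len(labels_b)
    n = len(labels_a)
    observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    freq_a, freq_b = Counter(labels_a), Counter(labels_b)
    expected = sum((freq_a[c] / n) * (freq_b[c] / n)
                   for c in set(labels_a) | set(labels_b))
    return (observed - expected) / (1 - expected) if expected < 1 else 1.0

# Example: two annotators rating the same ten videos (hypothetical data)
a = ["promoting", "debunking", "neutral", "promoting", "neutral",
     "promoting", "debunking", "neutral", "promoting", "neutral"]
b = ["promoting", "neutral",   "neutral", "promoting", "neutral",
     "debunking", "debunking", "neutral", "promoting", "promoting"]
print(f"kappa = {cohen_kappa(a, b):.2f}")  # values well below 1 flag systematic disagreement
```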
Secondly, automatically identifying what constitutes disinformation is also a tricky task. Various automated approaches now exist that try to identify disinformation by the kind of vocabulary used or the expressiveness of the headlines, as well as more complex ones that take into account the sources referred to, the authors and so on.
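As a toy illustration of the simpler, surface-level end of this spectrum, the sketch below scores a piece of content using a few hand-picked signals: exclamation marks, all-caps words, sensational phrases and citations of low-credibility domains. The word lists, domains and threshold are invented for demonstration; a real system would learn such weights from labelled data.

```python
# Toy illustration of shallow "surface" signals; feature set and threshold
# are made up for demonstration, not taken from any production system.
SENSATIONAL_PHRASES = {"shocking", "they don't want you to know", "exposed", "miracle"}
LOW_CREDIBILITY_SOURCES = {"example-conspiracy.net", "totally-real-news.biz"}  # hypothetical domains

def surface_features(headline: str, body: str, cited_domains: list) -> dict:
    text = (headline + " " + body).lower()
    return {
        "exclamation_ratio": headline.count("!") / max(len(headline), 1),
        "all_caps_words": sum(w.isupper() and len(w) > 2 for w in headline.split()),
        "sensational_hits": sum(p in text for p in SENSATIONAL_PHRASES),
        "low_credibility_citations": sum(d in LOW_CREDIBILITY_SOURCES for d in cited_domains),
    }

def looks_like_disinformation(features: dict) -> bool:
    # Naive hand-tuned scoring, purely for illustration.
    score = (2 * features["sensational_hits"]
             + features["all_caps_words"]
             + 3 * features["low_credibility_citations"]
             + 5 * features["exclamation_ratio"])
    return score >= 3

f = surface_features("SHOCKING cure EXPOSED!!!",
                     "A miracle remedy they don't want you to know about.",
                     ["example-conspiracy.net"])
print(looks_like_disinformation(f))  # True for this exaggerated example
```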
Identifying disinformation and understanding the mechanisms of filter bubbles is unquestionably important, but it is not enough. What are the most effective strategies for addressing and fighting disinformation and filter bubbles? As the above-mentioned research has shown, it is certainly possible to burst the bubble. It is important to debunk disinformation before it starts spreading rapidly, as corrected versions of the information often do not reach the audiences that need them.
Another approach is to create a sort of “immunization” against disinformation, either by actively training audiences to spot disinformation or by automatically labelling content on sensitive topics (such as information relating to COVID) with links to relevant authoritative sources. What becomes apparent is that technology is only part of the solution; what is also required is active communication among the relevant actors involved, be it the platforms themselves or various state institutions.
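The topic-labelling idea can be illustrated with a very small sketch: if a video's metadata mentions a sensitive topic, a link to an authoritative source is attached. The keyword list and the link mapping here are illustrative placeholders, not any platform's actual policy.

```python
# Minimal sketch of topic-based labelling; keywords and links are illustrative.
AUTHORITATIVE_LINKS = {
    "covid": ("COVID-19 information", "https://www.who.int/emergencies/diseases/novel-coronavirus-2019"),
    "vaccine": ("Vaccine safety", "https://www.who.int/health-topics/vaccines-and-immunization"),
}

def label_for(title: str, description: str):
    """Return an (label, url) pair if the content touches a sensitive topic, else None."""
    text = f"{title} {description}".lower()
    for keyword, link in AUTHORITATIVE_LINKS.items():
        if keyword in text:
            return link
    return None

print(label_for("New COVID study explained", "We walk through the latest preprint."))
# -> ('COVID-19 information', 'https://www.who.int/...')
```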
Such technological solutions can bring problems of their own, even when the initial intention is good. By trying to understand the black boxes behind filter bubbles and the spread of disinformation, we can end up creating new black boxes. Under the guise of fighting disinformation, we can find ourselves on very thin ice, where our liberties and free speech may feel endangered. Every time we try to use technology to fix societal problems, it is important to reflect on the transparency and explainability of the technology we employ, so that we can anticipate and avoid its unintended consequences.

At KInIT, we do this by scrutinizing our research through a series of ethical assessment methods. This was also the case with the aforementioned YouTube audit study, where the developers identified a set of potential risks and countermeasures relating to, for example, data collection and annotation. These were then implemented directly into the design of the study and the technological tools it used.