Large platform algorithms need independent oversight
Public pressure is forcing platforms to commit to mitigating the spread of false information and propaganda through their algorithms. However, platforms stop short of stronger measures and tend to resist external oversight. In response, independent initiatives have emerged that measure how false information and propaganda spread, treating the platforms as black boxes.
Large platform algorithms contribute significantly to the dissemination of false information online. They favor exciting, emotional, or otherwise attention-grabbing content that holds users' attention longer (and thus exposes them to more monetizable advertising). Disinformation and misinformation, but also tabloid content and other shallow formats, have these properties more often and therefore tend to be recommended more than “boring” credible content. Platforms are aware of these algorithmic preferences, but they are reluctant to change them significantly: they fear losing revenue.
On the (non-)fulfillment of large platforms' obligations to prevent misinformation
Public pressure (mostly in the West) is forcing platforms to commit to reducing the dissemination of false information by their algorithms. An example of such voluntary commitments in the EU is the Code of Practice on Disinformation, which is to be strengthened this year following regulators' findings. These obligations will later be translated into legislation through the Digital Services Act.
So far, however, fulfillment of the existing voluntary commitments has been more of a delaying tactic: the commitments are difficult to measure, control, or enforce, and their declared effects are small. The platforms use various bluewashing techniques. Until now, platforms have committed only to self-reporting, and the rules have been defined too broadly. Platforms could, for example, choose the period from which they report data to the European Commission, or how to aggregate that data. As a result, more problematic periods, such as the run-up to elections, or data from smaller countries like Slovakia, are not evaluated separately. In this way, the platforms elegantly slip out of these early regulations.
Self-regulation is also popular among platforms: they issue their own codes of conduct, which they promise to comply with. However, these codes often stop halfway to the desired state. Because they represent “at least something”, they silence some critics while leaving many facts hidden. The public may occasionally learn about these facts, but even then they cannot be verified.
How to do independent audits on YouTube
Independent audits work on different principles than self-reporting. In this context, an audit is a method of systematically examining the content a platform presents to users. For example, it is possible to measure how often potentially harmful content is recommended under different conditions (e.g., the user's demographic profile or their history on the platform). Sometimes audits are done in collaboration with human users; other times bots (programmed agents) act as users and systematically record the platform's responses to their pre-programmed behaviour.
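The bot-based approach can be sketched as follows. This is a minimal, hypothetical illustration only: the `FakePlatform` stand-in, the video identifiers, and the logging format are invented for this example; a real audit would drive an actual browser or platform API.

```python
import json
from dataclasses import dataclass, field

class FakePlatform:
    """Hypothetical stand-in for a real platform driver (e.g., browser
    automation); here it simply returns canned recommendation lists."""

    def __init__(self, recommendations):
        self._recs = recommendations  # maps video_id -> recommended video ids

    def watch(self, video_id):
        # "Watching" a video returns the recommendations shown alongside it.
        return self._recs.get(video_id, [])

@dataclass
class AuditBot:
    """A programmed agent that acts as a user with a scripted watch history
    and systematically records the platform's responses."""
    platform: object
    log: list = field(default_factory=list)

    def run(self, scripted_history):
        for video_id in scripted_history:
            recs = self.platform.watch(video_id)
            self.log.append({"watched": video_id, "recommended": recs})
        return self.log

# Example run with invented seed videos and canned recommendations.
platform = FakePlatform({
    "seed1": ["recA", "recB"],
    "seed2": ["recB", "recC"],
})
bot = AuditBot(platform)
trace = bot.run(["seed1", "seed2"])
print(json.dumps(trace, indent=2))
```

The recorded trace is what an auditor would later annotate and analyse, e.g. to compare which recommendations appear for bots with different scripted histories.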
As an example, consider KInIT's own auditing research1, which won the best paper award at the highly selective scientific conference ACM Recommender Systems 2021. In it, we audited YouTube's recommendation algorithms, about which little is known. The platform works like a black box: users do not know how, or on what basis, it recommends videos. In our study, we examined whether filter bubbles form on YouTube and whether users can get out of them by watching videos that demonstrate and explain the falsity of particular misinformation.
Our study showed the presence of misinformation filter bubbles on YouTube in lists of recommended videos (but not in search results). We also found that YouTube has not improved much in curbing the spread of false and misleading videos compared to a similar study conducted in the past. We expected more disinformation videos about vaccination due to the ongoing pandemic, but there was also a lot of false information about the events of September 2001. On the other hand, our research showed that watching credible (misinformation-debunking) videos helped reduce the bubbles for almost all topics.
Table: A sample from our audit of conspiracy topic recommendations on YouTube. We compared the data with an audit carried out in the past by Hussein et al.2 A score of −1 indicates videos that refute conspiracies; +1 indicates videos that spread them.
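Annotations on this −1/+1 scale can be aggregated into a simple per-topic score, for example a mean over the annotated recommendations: a negative score means debunking content prevails, a positive score means conspiracy-promoting content prevails. The sketch below uses invented example labels, not our actual audit results.

```python
from statistics import mean

# Each recommended video is annotated on the scale from the table:
# -1 = refutes the conspiracy, +1 = spreads it, 0 = neutral/unrelated.
# The topics mirror those mentioned in the text; the labels are invented.
annotations = {
    "9/11":        [-1, 0, 1, 1, 0],
    "vaccination": [-1, -1, 0, 0, 1],
}

def topic_score(labels):
    """Mean stance score: < 0 means debunking prevails, > 0 means
    conspiracy-promoting content prevails among recommendations."""
    return mean(labels)

for topic, labels in annotations.items():
    print(topic, round(topic_score(labels), 2))
```

Comparing such scores across conditions (e.g., before and after a bot watches debunking videos) is one simple way to quantify whether a filter bubble weakens.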
Independent Facebook audits and their silencing
Many researchers have already learned that platforms do not like independent audits of recommended content. No wonder: such audits essentially uncover the platforms' weaknesses. The latest scandal was caused by Facebook, which deleted the accounts, pages, and applications of researchers from New York University. These academics examined the transparency of political ads on Facebook. Their research revealed about 86 thousand misleading ads, of which almost a quarter were funded by dubious communities. The researchers used volunteers to collect the data. Facebook cynically cited user privacy as the reason for closing the researchers' accounts.
Even Facebook's internal structures have been altered in ways that hamper transparency about misinformation spreading, specifically the team behind the CrowdTangle tool, which became part of Facebook after it acquired CrowdTangle. Thanks to this tool for analysing and evaluating Facebook data, it was previously found that far-right political movements received disproportionate attention on Facebook compared to other content. These results portrayed Facebook as an unbalanced platform with a large conservative echo chamber, an image the platform's executives did not like. Some of them even argued that instead of opening its data for analytical purposes, Facebook should publish only manually and selectively curated data reports. For now, CrowdTangle continues to provide its services, but people close to the tool fear this may not last. According to them, Facebook cares more about continuously polishing its image as a problem-free platform than about actually solving the problems it has.
We are continuing our research
Our study is just one of our contributions to the fight against misinformation. Our long-term goal is to create tools that can independently evaluate the role of social media algorithms in the dissemination of misinformation. Our participation in the EDMO consortium, which fights misinformation at the European level, will also help us.
1 Matus Tomlein, Branislav Pecher, Jakub Simko, Ivan Srba, Robert Moro, Elena Stefancova, Michal Kompan, Andrea Hrckova, Juraj Podrouzek, and Maria Bielikova. 2021. An Audit of Misinformation Filter Bubbles on YouTube: Bubble Bursting and Recent Behavior Changes. ACM Conference on Recommender Systems (RecSys 2021).
2 Eslam Hussein, Prerna Juneja, and Tanushree Mitra. 2020. Measuring Misinformation in Video Search Platforms: An Audit Study on YouTube. Proc. ACM Hum.-Comput. Interact. 4, CSCW1, Article 48 (May 2020), 27 pages.