Analysis of selected regulations proposed by the European Commission and technological solutions in relation to the dissemination of disinformation and the behaviour of online platforms

Mesarcik, M., Moro, R., Kompan, M., Podrouzek, J., Simko, J., Bielikova, M.

This study was prepared by the Kempelen Institute of Intelligent Technologies based on an order from Miriam Lexmann, Member of the European Parliament. The contract for services was signed on 1 October 2021.

Executive summary:

Freedom of access to information, freedom of speech and the ability to evaluate the veracity of information and make decisions based on it are among the fundamental pillars of democratic society. New technologies, including social media and other online platforms, have given all of us unprecedented possibilities to express our opinions and reach a potentially large audience. On the other hand, aided by artificial intelligence systems, above all personalised recommender systems, online platforms help disseminate false information, including disinformation, with significantly negative impacts on society.

With regard to these negative impacts on the one hand and the strong respect for fundamental human rights and freedoms enshrined in European legal systems on the other, disinformation represents one of the greatest challenges for current initiatives for the creation of rules to regulate content and liability on the internet.

The aim of this study is to contribute to the ongoing discussion on possibilities of regulating online platforms and the artificial intelligence (AI) systems they use. The European Commission has presented several proposals for regulation in this regard. As stated in Part 1 of the document (Introduction), our objective was to analyse the technological solutions online platforms use in relation to the dissemination of disinformation and subsequently evaluate the effectiveness of the instruments in these proposals, in particular the proposed Artificial Intelligence Act and the proposed Digital Services Act. Specifically, we have focused on the research question:

‘Are the proposed regulations an adequately effective tool for combating disinformation in the context of the technological solutions the online platforms use?’

In Part 2 of the document (Disinformation and its dissemination on online platforms) we state that, although disinformation does not have a legal definition laid down in EU legislation, several non-binding parts of legal acts, preambles, studies, strategies and recommendations agree that disinformation can be understood as ‘verifiably false or misleading information that is created, presented and disseminated for economic gain or to intentionally deceive the public, and may cause public harm.’[1]

Online platforms, and particularly very large online platforms, have fundamentally changed the way in which people gain information and, for many, have become a main news source. However, the mission of online platforms is not to provide a balance of views and objectively inform their users; instead, they are built on the attention economy model. Their objective is therefore to attract users’ maximum attention for the purpose of showing online advertisements, which represent the platforms’ main source of income. To this end, they use attractive user interfaces with habit-forming elements and personalised recommendation methods to provide engaging content and relevant online advertising.

Recommender systems try to reduce users’ information load by filtering for relevant information. Several approaches to recommendation exist; personalised approaches always require some information about user preferences to be available. We have listed several possible impacts of using these systems, both in general and specifically in the context of online platforms.
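To illustrate the distinction, the following minimal Python sketch contrasts a non-personalised, popularity-based recommendation with a simple personalised one. The interaction data and the overlap-based similarity are our hypothetical choices for illustration only; they do not represent any platform’s actual algorithm.

```python
# Minimal sketch: non-personalised vs personalised recommendation.
from collections import Counter

# Interaction log: user -> items engaged with (the "preferences" that
# any personalised approach must have available in some form).
interactions = {
    "alice": {"item1", "item2", "item3"},
    "bob":   {"item2", "item3", "item4"},
    "carol": {"item1", "item4", "item5"},
}

def recommend_popular(k=2):
    """Non-personalised: rank items by global popularity only."""
    counts = Counter(i for items in interactions.values() for i in items)
    return [item for item, _ in counts.most_common(k)]

def recommend_personalised(user, k=2):
    """Personalised: score unseen items via users with overlapping tastes."""
    own = interactions[user]
    scores = Counter()
    for other, items in interactions.items():
        if other == user:
            continue
        overlap = len(own & items)   # crude similarity: shared items
        for item in items - own:     # only recommend items not yet seen
            scores[item] += overlap
    return [item for item, _ in scores.most_common(k)]

print(recommend_popular())              # the same list for every user
print(recommend_personalised("alice"))  # depends on alice's history
```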

We have identified the following key issues: the level of transparency of the generated recommendations, and the current setting of the utility function of recommender systems in the process of training artificial intelligence models. Although it is not always technically possible to provide a satisfactory explanation of why something has been recommended to the user (this remains an open research challenge), the minimum requirement is transparency at the level of the inputs used by the recommender system and at the level of the chosen utility function and the setting of its weights, since these fundamentally affect the output recommendations. Other problems we have identified are the enclosure of users in information bubbles, bias and fairness in recommendations, and the collection of feedback and derivation of user preferences in the context of privacy.
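The effect of the utility function’s weighting can be shown on a minimal, purely illustrative example. The items, scores and weights below are hypothetical: shifting weight from predicted engagement towards, say, an estimated reliability signal changes which content is ranked first, without any change to the underlying models.

```python
# Minimal sketch: how utility-function weights decide the top recommendation.
candidates = {
    # item: (predicted_engagement, estimated_reliability) -- hypothetical
    "sensational_post": (0.9, 0.2),
    "balanced_article": (0.6, 0.9),
}

def top_item(w_engagement, w_reliability):
    scored = {
        item: w_engagement * eng + w_reliability * rel
        for item, (eng, rel) in candidates.items()
    }
    return max(scored, key=scored.get)

# An engagement-only utility favours the sensational item; adding a
# reliability term flips the ranking.
print(top_item(1.0, 0.0))  # -> sensational_post
print(top_item(0.5, 0.5))  # -> balanced_article
```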

In Part 3 of the document (Selected regulations proposed by the European Commission in relation to the dissemination of disinformation and the behaviour of online platforms), we have identified a set of fundamental rights and freedoms that could potentially be affected by the legislative mechanisms to tackle the spread of disinformation on online platforms. From the users’ point of view, these may be, in particular, interference with freedom of expression, the right to information, the protection of privacy and personal data, or the right to a fair trial. However, legislative instruments can also affect the online platforms themselves, primarily in terms of the freedom to conduct business or the protection of property rights in the form of the protection of intellectual property rights.

In view of the declared objectives, we have focused on analysing the European Commission’s new proposals for legislation on artificial intelligence (proposal for regulation on artificial intelligence) and on digital services, including online platforms (proposal for regulation on digital services). In particular, we have concentrated on the legislative instruments in these proposals that may have a greater impact on the dissemination of disinformation in the online environment.

The proposal for regulation on artificial intelligence (Artificial Intelligence Act, AIA) is conceived as a horizontal regulation based on risk analysis. Artificial intelligence systems are divided into four types according to the degree of threat (or potential threat) they pose to the rights and freedoms of individuals. In the context of tackling disinformation, the way the regulation defines prohibited ‘manipulative practices’ will be crucial, as, under the current wording of the proposal, support for the dissemination of disinformation by online platforms would not fall under the prohibited practices. At the same time, it should be noted that the classification of AI systems does not allow the activities of online platforms to be included in any of the areas designated for high-risk AI systems. Nevertheless, the proposed regulation provides a number of interesting mechanisms, such as conformity assessment and data governance, that can help tackle the dissemination of disinformation.

[Figure: Proposed regulation of artificial intelligence and the risk-based classification of AI systems. Source: European Commission website]

The proposal for regulation on digital services (Digital Services Act, DSA) defines the set of entities it will apply to. We consider the requirements for online platforms and very large online platforms to be key in connection with the dissemination of disinformation. The vast majority of the requirements in the proposed regulation concern illegal content, a category into which disinformation does not always fall. One of the few exceptions is the risk-assessment obligation and the adoption of follow-up measures by very large online platforms. We also consider interesting the provisions concerning the performance of external audits, the transparency of advertising and recommender systems, and the obligation to publish transparency reports. The system of liability for third-party content will not change fundamentally for online platforms, but content moderation is modified, and the exact process and the rights of users regarding the removal or blocking of illegal content by online platforms are stipulated in more detail.

Taking into account the operating model of online platforms, the recommendation methods from a technical point of view and the specifics of the examined European Commission proposals, we believe that the examined legislative acts can be improved to provide more effective tools for tackling the dissemination of disinformation. We address these improvements in Part 4 (Discussion and proposed solutions).

In terms of general comments, we consider it key that the attention economy be classified as an area of high-risk AI systems, so that the use of algorithms by social media does not escape the legislative requirements presented in the proposal for regulation on artificial intelligence.

At the same time, we understand the limits of regulating harmful content, given the interference with freedom of expression and the danger of granting too much power to online platforms. On the other hand, legislation can provide a broad palette of tools with the potential to limit the spread of disinformation on online platforms without directly regulating harmful content. As examples of such ‘indirect’ rules, we recommend:

  • Transparency requirements and user choices in recommender systems;
  • Labelling of unverified or unverifiable content (content labelling; see the sketch after this list);
  • Prohibition on promoting certain content or topics;
  • Restrictions on the use of certain methods for sensitive content (such as bots or micro-targeting in political ads).
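As a purely illustrative sketch of the content-labelling rule, content whose claims have not been or cannot be verified would be displayed with a visible notice rather than removed. The statuses, label texts and post structure below are our hypothetical choices, not a mechanism prescribed by any of the proposals.

```python
# Minimal sketch: attach a visible notice to unverified content.
VERIFICATION_LABELS = {
    "verified": None,  # no label needed
    "unverified": "This content has not been reviewed by independent fact-checkers.",
    "unverifiable": "The claims in this content could not be verified.",
}

def render(post: dict) -> str:
    """Prepend a notice to the post text if its status warrants a label."""
    label = VERIFICATION_LABELS.get(post["verification_status"])
    notice = f"[NOTICE] {label}\n" if label else ""
    return notice + post["text"]

print(render({"verification_status": "unverified",
              "text": "Miracle cure discovered!"}))
```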

We consider the performance of external audits absolutely essential. Although the DSA directly provides for such a mechanism, we are concerned about its limits: the lack of clear rules for identifying suitable entities to carry out the external audits, and the insufficiently binding nature of audit results and of the implementation of their conclusions. At the same time, the access of external auditors should not be restricted on grounds of protecting the rights of online platforms (such as trade secrets) or of third parties (in the form of personal data protection). In technical terms, we recommend a ‘sock puppet audit’ as a suitable form of audit (a sketch follows below), provided that legislation and the terms of service of online platforms are adjusted so that the use of bots for external audit purposes is permissible. We also emphasise the role of scientific research and the importance of access to data for vetted researchers.
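The following minimal sketch outlines what a sock puppet audit could look like in practice: synthetic accounts with controlled interest profiles browse the platform, and the recommendations they receive are logged for later comparison against a neutral baseline. StubClient and all of its methods are hypothetical stand-ins for a real platform interface, included only so the sketch runs end to end; an actual audit would use such bot accounts only where legislation and terms of service permit it.

```python
# Minimal sketch of a sock puppet audit with a stubbed platform interface.
import csv
import random

class StubClient:
    """Hypothetical stand-in for a real platform API."""

    def register_account(self, name):
        return {"name": name, "history": []}

    def search_and_view(self, account, query):
        account["history"].append(query)  # conditions later recommendations

    def fetch_feed(self, account, size=5):
        rng = random.Random(" ".join(account["history"]))  # history-dependent feed
        return [f"item_{rng.randint(0, 99)}" for _ in range(size)]

PERSONAS = {
    "health_sceptic": ["vaccine side effects", "alternative medicine"],
    "neutral_control": [],  # no seeded interests; serves as the baseline
}

def run_audit(client, rounds=3, outfile="audit_log.csv"):
    """Log which items each persona is recommended, round by round."""
    with open(outfile, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["persona", "round", "rank", "item_id"])
        for persona, interests in PERSONAS.items():
            account = client.register_account(persona)
            for query in interests:  # build the persona's interest profile
                client.search_and_view(account, query)
            for r in range(rounds):
                for rank, item in enumerate(client.fetch_feed(account)):
                    writer.writerow([persona, r, rank, item])

run_audit(StubClient())
```

Comparing the logged feeds of the conditioned personas with the neutral control makes it possible to quantify how strongly the profile shapes the recommended content.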

Another important area in tackling disinformation is transparency. In this regard, we welcome the proposals in the proposed regulation on digital services concerning the transparency of advertising and recommender systems and publicly available reports. However, these mechanisms must ensure that their results are meaningful and illustrate the real situation for both the professional and the lay public. Regarding the transparency of recommender systems, we draw attention to the fact that the DSA should enshrine a mandatory opt-in for users on first contact with the online platform in any form. For example, recommendations could consist of two levels, non-personalised and personalised, and the user could turn on only the non-personalised level, that is, the one that does not take the user’s behaviour (i.e., the source for estimating preferences) into account in the recommendation. At the same time, an obligation to keep logs of recommender systems should be stipulated in the DSA for cases potentially not covered by the AIA.
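In a minimal sketch, the opt-in mechanism we have in mind could look as follows (all function and parameter names are hypothetical): personalisation is off by default, the personalised level is used only after the user has explicitly turned it on, and every served recommendation is logged.

```python
# Minimal sketch: two-level recommendation with default opt-out and logging.
import logging

logging.basicConfig(level=logging.INFO)

def non_personalised_top(k):
    """Level 1: ranking that ignores the user's behaviour entirely."""
    return ["editorial_pick_1", "popular_2", "popular_3"][:k]

def personalised_top(user_id, k):
    """Level 2: ranking derived from the user's estimated preferences."""
    return [f"tailored_for_{user_id}_{i}" for i in range(k)]

def recommend(user_id, consented_to_personalisation=False, k=3):
    if consented_to_personalisation:
        items, level = personalised_top(user_id, k), "personalised"
    else:
        # Default on first contact: no behavioural data enters the ranking.
        items, level = non_personalised_top(k), "non-personalised"
    logging.info("user=%s level=%s items=%s", user_id, level, items)
    return items

recommend("u42")  # first contact: opt-in not yet given
```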

Transparency reports should be complemented by case studies to clarify how the online platform behaves in specific situations and what mitigation measures are being taken. At the same time, the statistical indicators in these reports should be broken down by Member State, so that it is possible to detect differences between Member States in the dissemination of disinformation content, in the approach of platforms and Member States’ authorities, and in the measures taken.

It is no less important for social media to regularly evaluate not only the legal, but also the ethical and societal risks. We consider it essential for platforms to be able to devote time and resources to the continuous evaluation of possible impacts in terms of the moral values and principles involved throughout the cycle, from the design of new functionalities to their deployment. It is our belief that ethics risk assessment should be considered a binding part of the conformity assessment for AI system providers proposed in the European Commission’s artificial intelligence regulation.

[1] See e.g.: European Commission. Communication from the Commission to the European Parliament, the Council, the European Economic and Social Committee and the Committee of the Regions: Tackling online disinformation: a European Approach. COM/2018/236 final. Available at: https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:52018DC0236.


Download study: English / Slovak

Cite: Mesarcik, M., Moro, R., Kompan, M., Podrouzek, J., Simko, J., Bielikova, M. Analysis of selected regulations proposed by the European Commission and technological solutions in relation to the dissemination of disinformation and the behaviour of online platforms. March 2022

Authors

Matúš Mesarčík
Ethics and Law Specialist
Róbert Móro
Researcher
Michal Kompan
Lead and Researcher
Juraj Podroužek
Lead and Researcher
Jakub Šimko
Lead and Researcher
Maria Bielikova
Lead and Researcher