Why viral posts on social media might be hurting us the most

Billions of people scroll through social media every day, encountering an endless stream of posts. We often assume that the posts we see are the most important or entertaining ones, simply because they are the most viewed, shared or liked. But what actually makes a post popular?

Social media platforms are designed to maximise one thing above all else: engagement – getting people to comment on a post, share it with friends or other users, like it or otherwise react to it. The more engagement there is, the longer we stay scrolling through the platform, which in turn translates into more revenue for the platform (through ads, for example).

However, there is a darker side to this incentive, and it has everything to do with what makes us pause our scrolling and engage with a post. Studies in behavioural psychology show that emotional content, especially content evoking negative emotions like fear and anger, motivates action. That is exactly what the platforms exploit, as it means more clicks, more shares and more time spent scrolling. A few years ago, the Washington Post even reported that Facebook's ranking algorithm weighted emoji reactions, including the angry reaction, several times more heavily than ordinary likes.
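
To make this concrete, here is a minimal sketch of what such a ranking signal could look like. The weights and field names are illustrative assumptions, not any platform's actual values; they merely mirror the reported pattern of emotional reactions counting for more than plain likes.

    # Illustrative sketch of engagement-weighted feed ranking. The
    # weights below are hypothetical, chosen only to mirror the
    # reported pattern that emotional reactions outweigh plain likes.
    ENGAGEMENT_WEIGHTS = {
        "like": 1.0,
        "comment": 4.0,
        "share": 5.0,
        "angry": 5.0,  # emotional reactions boosted, per the reported pattern
        "love": 5.0,
    }

    def engagement_score(post: dict) -> float:
        """Weighted sum of a post's interaction counts."""
        return sum(
            ENGAGEMENT_WEIGHTS.get(kind, 0.0) * count
            for kind, count in post["reactions"].items()
        )

    def rank_feed(posts: list[dict]) -> list[dict]:
        """Order candidate posts by engagement score, highest first."""
        return sorted(posts, key=engagement_score, reverse=True)

    # A post that provokes anger outranks a better-liked neutral post.
    feed = rank_feed([
        {"id": "calm-explainer", "reactions": {"like": 900, "comment": 20}},
        {"id": "outrage-bait", "reactions": {"like": 100, "angry": 300, "share": 80}},
    ])
    print([p["id"] for p in feed])  # ['outrage-bait', 'calm-explainer']

Even with modest weights, whatever provokes the strongest reactions climbs to the top, and the optimisation target never asks whether the content is true or healthy.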

Unfortunately, this emotional leverage is often exploited: by actors seeking to inflame divisions, by disinformation campaigns pushing false narratives, or simply by content creators who understand the algorithmic game and want more revenue or engagement. In a world where algorithms are trained to chase engagement above all else, emotion becomes a weapon, and the battlefield is our society. We can already see the effects in recent years: deepening polarisation around political narratives (politics shifting from in-depth, civil discussion to screaming contests), around health issues (vaccination and the COVID-19 pandemic, where many posts pushed "X is harmful to children" narratives for profit or attention), and in online harassment and even physical violence (such as assassination attempts). Social media platforms do not deliberately create these divisions, but they magnify and monetise them.

As such, the most viewed posts are often not the most informative but the most provocative, polarising and otherwise problematic – and this has consequences far beyond the screen.

What can be done about this?

The problem is not that social media platforms exist, nor that emotional content performs well. The problem is that platforms offer little transparency and virtually no accountability when it comes to the algorithms shaping our information landscape. If a food company's recipes nudged people to eat toxic substances because it increased profit, we would not let it regulate itself. We would demand public oversight, transparency and regulation.

Social media should be no different, and we need changes such as:

  • Independent algorithm audits that examine what content a platform prioritises and what harm that prioritisation may cause (a sketch of one audit metric follows this list).
  • Policy frameworks that hold platforms accountable for amplifying harmful or misleading content and require remediation when harms are identified.
  • Transparency rules that require platforms to explain how they rank and recommend posts.
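
As an illustration of the first point, here is a minimal sketch of one metric an external auditor might compute: how over-represented anger-provoking posts are in the top feed positions compared with their share of everything collected. The post schema, the 0.5 threshold and the top-10 cut-off are assumptions for illustration only.

    # Minimal sketch of one audit metric: how over-represented is
    # anger-heavy content at the top of ranked feeds? The post schema
    # and the 0.5 threshold are illustrative assumptions.

    def is_anger_heavy(post: dict) -> bool:
        """Flag posts whose reactions are dominated by 'angry'."""
        total = sum(post["reactions"].values())
        return total > 0 and post["reactions"].get("angry", 0) / total > 0.5

    def amplification_factor(feeds: list[list[dict]], top_k: int = 10) -> float:
        """Share of anger-heavy posts within the top-k feed slots,
        divided by their share in the full collected sample.
        Values above 1.0 mean the ranking amplifies such content."""
        all_posts = [post for feed in feeds for post in feed]
        top_posts = [post for feed in feeds for post in feed[:top_k]]
        base_rate = sum(map(is_anger_heavy, all_posts)) / len(all_posts)
        top_rate = sum(map(is_anger_heavy, top_posts)) / len(top_posts)
        return top_rate / base_rate if base_rate else float("inf")

Computed across many sampled accounts over time, a number like this gives regulators something concrete to act on, rather than relying on the platform's own reporting.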

And this is exactly what our AI-Auditology project is all about – auditing social media platforms, uncovering the harms they cause and, in turn, giving policymakers the evidence to enforce change.