Facebook fills the world with hate speech. What to do about it?
Perhaps you remember getting angry while reading your Facebook feed. Maybe it happens every day. Or you may be one of those people who feel helpless at the sight of so much hatred and leave social networks with the feeling that the world is only getting worse and nothing can be done about it.
However, the opposite is true. If you don’t believe it, read books like Factfulness or Humankind: A Hopeful History. Both discuss the positive side of humankind and show that the world is a better place to live in than it used to be. If the world is objectively still getting better, why is Facebook full of hatred?
The answer lies partly in our perception. We tend to remember negative facts, and the media are partly to blame: negative news articles get higher engagement, so outlets publish them readily and more often.
If we thought we would be exposed to completely different, “uncensored” content on social media, we could not have been more wrong. The interconnectedness of social media won’t save us; on the contrary, it makes hatred and negativity more visible and even stronger.
Hate speech is closely related to misinformation, which we deal with in the CEDMO hub project. We consider hate speech one of the indicators that point to a dubious source of information. In the first part of this article, we specify what hate speech is; then we explain how Facebook contributes to it.
What is hate speech?
Hate speech is not a new phenomenon in society. Attacks on various minorities and vulnerable groups happened even without the help of social media, and we already know their consequences – whether it was a witch hunt or the Holocaust during World War II. Hatred is especially dangerous because it polarizes society, at first cunningly and unobtrusively, then more and more aggressively.
First, let’s have a look at what exactly hate speech is. We will use the definition we published in our study, created in cooperation with the Center for Social and Psychological Sciences of the Slovak Academy of Sciences. To build it, we used so-called indicators: observable and measurable characteristics of hate, such as the presence of vulgarisms or incitement to violent behavior.
If you see a group of people on the Internet being attacked based on:
- gender
- sexual preference
- race
- nationality / ethnicity
- religion
- age
- any impairment
When these groups are defamed, ridiculed, humiliated, or intimidated, it is considered hate speech. The same applies when their human rights are subtly denied, when a specific group of people is needlessly generalized about or stereotyped, or when any attempt is made to restrict them.
This holds all the more for direct calls for violence, verbal insults, or disgraceful nicknames.
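To make the definition more concrete, here is a minimal sketch of how such indicators could be checked programmatically. It is purely illustrative: the pattern lists, function names, and the `targets_protected_group` flag are our hypothetical placeholders, not the actual indicators or code from the study.

```python
import re

# Protected characteristics from the definition above.
PROTECTED_CHARACTERISTICS = {
    "gender", "sexual preference", "race",
    "nationality/ethnicity", "religion", "age", "impairment",
}

# Hypothetical indicator patterns. A real detector would rely on
# curated lexicons and annotated data, not these toy placeholders.
INDICATOR_PATTERNS = {
    "vulgarism": re.compile(r"\b(scum|vermin|filth)\b", re.I),
    "incitement_to_violence": re.compile(r"\b(drive (them )?out|get rid of|attack)\b", re.I),
    "generalization": re.compile(r"\ball (of )?(them|those people)\b", re.I),
}

def matched_indicators(text: str) -> list[str]:
    """Return the names of indicators observed in the text."""
    return [name for name, pattern in INDICATOR_PATTERNS.items()
            if pattern.search(text)]

def looks_like_hate_speech(text: str, targets_protected_group: bool) -> bool:
    """Flag a post only if it targets a protected group AND at least
    one measurable indicator is present."""
    return targets_protected_group and bool(matched_indicators(text))

print(looks_like_hate_speech("Drive them out, all of them!", True))   # True
print(looks_like_hate_speech("I disagree with this policy.", True))   # False
```

Real systems, including the AI-based detection mentioned at the end of this article, go far beyond such keyword rules, but the core of the definition stays the same: the attack must target a group based on a protected characteristic, and at least one measurable indicator must be present.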
Attacking groups of people based on their characteristics is what distinguishes hate speech from bullying, another negative phenomenon of the online world. Bullies usually pick individuals, and the range of reasons for bullying can differ and may be even wider than for hate speech.
Interestingly, the road to both of these types of antisocial behavior is often paved with “good intentions”, such as protecting one’s own group from the “dangers” posed by another.
How Facebook promotes hate speech and misinformation
In 2016, Mary Aiken pointed out in her book The Cyber Effect that the online world – and Facebook is part of it – normalizes the abnormal. Behavior that would otherwise remain on the periphery, a matter for a few individuals, is normalized by social media and its reach to a wide audience.
People form and strengthen their opinions in like-minded groups, and a single influencer is enough to spread these convictions around the world. This follows from the very nature of social media.
Emotions are another stimulator of hate speech. Until recently, it was thought that content full of anger or astonishment was simply shared more often of its own accord.
Facebook whistleblower Frances Haugen has shown that Facebook consciously amplified these emotions. You probably remember when Facebook introduced new reaction emoticons in 2015. It turned out that the company also used them to manipulate the information we are exposed to. According to the leaked documents, Facebook’s algorithm rated posts tagged with reaction emoticons (such as the angry one) as five times more valuable than those that received only a “Like”; some emotional emoticons were rated as much as 30 times more valuable.*
In practice, this means that posts arousing anger were shown to people five to thirty times more often than neutral ones. In reality, however, people used the angry emoticon the least, only 429 million times a week. By comparison, they “liked” posts 63 billion times and “loved” them 11 billion times.
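The exact ranking formula has never been published, so the following toy calculation only illustrates the arithmetic implied by the leaked weights; the weight values and function names are our assumptions, not Facebook’s actual code.

```python
# Assumed weights mirroring the multipliers reported from the leaked
# documents: a reaction counted five (angry) to thirty (emotional)
# times as much as a plain like. The real formula is not public.
ASSUMED_WEIGHTS = {"like": 1, "angry": 5, "love": 30}

def engagement_score(reactions: dict[str, int]) -> int:
    """Sum each reaction count multiplied by its assumed weight."""
    return sum(ASSUMED_WEIGHTS.get(kind, 1) * count
               for kind, count in reactions.items())

# Two posts with the same total number of interactions:
neutral_post = {"like": 100}            # 100 * 1          = 100
angry_post = {"like": 50, "angry": 50}  # 50 * 1 + 50 * 5  = 300

print(engagement_score(neutral_post))   # 100
print(engagement_score(angry_post))     # 300
```

Under such weighting, a ranking algorithm sorting posts by this score would push the angrier post well ahead of the neutral one, even though both received the same number of interactions.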
Even Facebook’s own scientists could not ignore such content manipulation. They found that posts expressing anger very often contained misinformation (especially medical misinformation, such as claims about vaccination) and toxic or low-value content. Yet Facebook made no change.
Profit was probably the reason for the inaction, as emotional content increases user engagement. That changed when the company found that users disliked seeing angry reactions on their posts. Only then did the management finally decide to adjust the weighting.
How Facebook’s decisions affect three billion people worldwide
After the reach of posts tagged with angry emoticons was reduced, users’ exposure to misinformation and violent content actually decreased. Interestingly, the adjustment did not reduce the social network’s profits at all. The whole episode was yet another unnecessary Facebook social experiment on three billion people.
However, years of active emotion manipulation on Facebook have left their mark on society even outside the online environment. Angry communication and hate speech have penetrated deep into human communication on the largest social network and have effectively become the norm. Under no circumstances can such behavior remain the norm.
We are witnessing how the leaders of various countries routinely use hate speech in their public appearances. Populist leaders are slowly getting us used to displays of sexism, xenophobia, or personal attacks on people’s real or presumed motivations.
How to get out of it
For individuals, avoiding hate speech on Facebook is not that difficult – just stop following certain pages and “friends”, or do not respond to problematic online behavior. However, merely avoiding or ignoring hate speech does not solve the problem. On the contrary, without opposing views, the aggressors’ hatred grows even stronger. This is one more reason why we have written that the algorithms of large social platforms need independent supervision, which the European Commission is also pushing for.
There are countless initiatives trying to mitigate the negative effects of hate in the online environment. These are primarily independent, verified checkers of harmful content (so-called trusted reporters), researchers with backgrounds in the social and technical sciences, and non-profit organizations. In extreme cases, legislation defines the legal consequences of spreading hate.
At KINIT, we also focus on detecting hate speech with technologies based on artificial intelligence. That is what the next article will be about.
*In 2017, when this feature was introduced, Facebook informed the media that emotive content was valued “only slightly more than likes”.