Code against Hate II
Public communicators, influencers, popular brands, and journalists are targeted with online hate speech on a daily basis. Is it possible to fight it?
Last weekend, we actively participated in the online hackathon Code against Hate II. KInIT was represented by our colleagues Ivan Srba as a keynote speaker and jury member, Matúš Pikuliak and Tomáš Gál as mentors, and Juraj Podroužek as a jury member who also prepared a short workshop on using tools for ethical IT design.
The hackathon brought together almost thirty young experts from across Europe, most working in data science and machine learning, but some with a humanities background. The teams actively used the ethical design tools we made available, trying to design their solutions from the perspective of the various affected groups and to reflect on important topics in AI ethics, such as the explainability of AI decisions or the degree of human autonomy.
We met enthusiastic young people who care about improving communication on the Internet. They decided to make a practical contribution by creating tools to detect and expose hate speech. In addition to the technical side of the solutions, it was also important to be aware of the ethical implications of deploying such tools.
Emphasis had to be placed on the explainability of the hate speech evaluation process. Flagging a post should not automatically lead to censorship of the post or its author, but rather to an explanation of why the post was judged inappropriate.
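To illustrate the idea, here is a minimal sketch of explainable flagging. It uses a hypothetical keyword-weighted scorer in place of a real trained model (the word weights and threshold are invented for illustration); the point is that the system returns an explanation alongside the verdict, not a bare label.

```python
# Hypothetical word weights a real system would learn from data.
TOXIC_WEIGHTS = {"idiot": 0.6, "stupid": 0.4, "hate": 0.3}
THRESHOLD = 0.5

def evaluate(text: str) -> dict:
    """Score a post and explain which words contributed to the verdict."""
    words = text.lower().split()
    contributions = {w: TOXIC_WEIGHTS[w] for w in words if w in TOXIC_WEIGHTS}
    score = sum(contributions.values())
    return {
        "flagged": score >= THRESHOLD,
        "score": score,
        # The explanation shown to the author instead of silent removal.
        "explanation": contributions,
    }

result = evaluate("You stupid idiot")
print(result["flagged"])      # True
print(result["explanation"])  # {'stupid': 0.4, 'idiot': 0.6}
```

A production system would derive the explanation from the model itself (for example, attribution over input tokens), but the output contract is the same: a verdict plus the reasons behind it.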
As a mentor, I talked to the teams mainly on Friday, during their design phase. From a technical point of view, I noticed that all of the teams reached for pre-trained language models by default. This is a relatively advanced technology, one we also work with in the NLP team at KInIT.