A Game for Crowdsourcing Adversarial Examples for False Information Detection
Čegiň, J., Šimko, J., Brusilovsky, P.
Abstract: False information detection models are susceptible to adversarial attacks, and this susceptibility is a critical weakness of such models. Automated creation of adversarial samples can ultimately help to augment training sets and build more robust detection models. However, automatically generated adversarial samples often do not preserve the information contained in the original text, leading to information loss. There is a need for adversarial sample generators that preserve the original information. To explore the properties such generators should have and to inform their future design, we conducted a study in which we collected adversarial samples from human agents using a game with a purpose (GWAP). The player's goal is to modify a given tweet until a detection model is tricked, thus creating an adversarial sample. We qualitatively analysed the collected adversarial samples and identified desired properties and strategies that an information-preserving adversarial generator should exhibit. These strategies were validated on transformer- and LSTM-based detection models to confirm their applicability across different model architectures. Based on these findings, we propose a novel generator approach that exhibits the desired properties in order to generate high-quality, information-preserving adversarial samples.
Cite: Čegiň, J., Šimko, J., Brusilovsky, P., A Game for Crowdsourcing Adversarial Examples for False Information Detection. AIofAI ‘22: 2nd Workshop on Adverse Impacts and Collateral Effects of Artificial Intelligence Technologies, CEUR-WS.org (2022).