Towards Continuous Automatic Audits of Social Media Adaptive Behavior and its Role in Misinformation Spreading
Simko, J., Tomlein, M., Pecher, B., Moro, R., Srba, I., Stefancova, E., Hrckova, A., Kompan, M., Podrouzek, J., Bielikova, M.
In this paper, we argue for continuous and automatic auditing of social media adaptive behavior and outline its key characteristics and challenges. We are motivated by the spread of online misinformation, which has recently been fueled by opaque recommendations on social media platforms. Although many platforms have declared that they are taking steps against the spread of misinformation, the effectiveness of such measures must be assessed independently. To this end, independent organizations and researchers carry out audits to quantitatively assess platform recommendation behavior and its effects (e.g., filter bubble creation tendencies). The audits are typically based on agents simulating user behavior and collecting platform reactions (e.g., recommended items). The main downside of such auditing is the cost of interpreting the collected data (here, some auditors are advancing automatic annotation). Furthermore, social media platforms are dynamic and ever-changing (algorithms change, concepts drift, new content appears). Therefore, audits need to be performed continuously, which further increases the need for automated data annotation. For the data annotation, we argue for the application of weak supervision, semi-supervised learning, and human-in-the-loop techniques.
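To illustrate the weak supervision idea mentioned above, here is a minimal sketch (not the authors' implementation): several heuristic labeling functions independently vote on each collected item, and a majority vote over the non-abstaining votes yields a noisy training label. All heuristics, field names, and source lists below are illustrative assumptions.

```python
# Minimal weak-supervision sketch: heuristic labeling functions vote on items
# collected by an auditing agent; a majority vote produces a noisy label that
# can later be refined by semi-supervised learning or human-in-the-loop review.
from collections import Counter

ABSTAIN, MISINFO, OTHER = -1, 1, 0  # illustrative label scheme

def lf_clickbait_phrase(item):
    # Assumed heuristic: sensational phrasing often accompanies misinformation.
    phrases = ("they don't want you to know", "miracle cure", "shocking truth")
    return MISINFO if any(p in item["title"].lower() for p in phrases) else ABSTAIN

def lf_credible_source(item):
    # Assumed heuristic: items from a curated list of credible sources.
    credible = {"who.int", "cdc.gov"}  # hypothetical allowlist
    return OTHER if item["source"] in credible else ABSTAIN

def lf_flagged_source(item):
    # Assumed heuristic: items from previously fact-checked hoax sources.
    flagged = {"examplehoax.site"}  # hypothetical blocklist
    return MISINFO if item["source"] in flagged else ABSTAIN

LABELING_FUNCTIONS = [lf_clickbait_phrase, lf_credible_source, lf_flagged_source]

def weak_label(item):
    """Majority vote over non-abstaining labeling functions; ABSTAIN if none vote."""
    votes = [v for v in (lf(item) for lf in LABELING_FUNCTIONS) if v != ABSTAIN]
    if not votes:
        return ABSTAIN
    return Counter(votes).most_common(1)[0][0]
```

In practice, such per-function votes would feed a generative label model rather than a plain majority vote, but the sketch shows how cheap heuristics can replace costly manual annotation in a continuous audit loop.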
Cite: Simko, J., Tomlein, M., Pecher, B., Moro, R., Srba, I., Stefancova, E., Hrckova, A., Kompan, M., Podrouzek, J., Bielikova, M. Towards Continuous Automatic Audits of Social Media Adaptive Behavior and its Role in Misinformation Spreading. 29th ACM Conference on User Modeling, Adaptation and Personalization – UMAP’21 (2021). DOI: 10.1145/3450614.3463353
Related blog: Large platform algorithms need independent oversight