ELV.AI: Protecting meaningful online discussions
In today’s online world, social media platforms can be a breeding ground for harmful content, ranging from disinformation narratives to hate speech in comment sections. Because social media and online platforms increasingly shape public discourse, effective moderation of their content is essential. By combining AI with human moderation, Elv.ai helps its customers moderate harmful content effectively while promoting a safer and more respectful online environment. However, AI moderation also raises ethical and societal questions that need to be answered before such systems begin to transform discussions on social media platforms.
Our ethics team prepared a series of collaborative workshops during which we conducted an ethics-based assessment of Elv.ai, an AI system for online content moderation. The Elv.ai system is designed to automatically detect and filter out harmful content such as hate speech, disinformation, and offensive language. It understands complex contexts such as slang and irony, which supports accurate content evaluation, and it offers native-language moderation that handles cultural nuances effectively. The tool helps maintain safe, respectful online environments while saving up to 30% of social media management time, making it valuable for media and public service organizations.
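To make the hybrid setup concrete, the sketch below outlines one possible way an AI classifier and human moderators could share the workload: clear-cut cases are handled automatically, while uncertain ones are escalated for human review. The names, thresholds, and toy scoring function are illustrative assumptions, not Elv.ai’s actual implementation.

```python
# Minimal sketch of an AI + human moderation flow (illustrative only).
from dataclasses import dataclass
from enum import Enum


class Decision(Enum):
    PUBLISH = "publish"
    HIDE = "hide"
    ESCALATE = "escalate_to_human"


@dataclass
class Comment:
    text: str
    language: str


def classify_harmfulness(comment: Comment) -> float:
    """Toy placeholder scorer; a real system would use a multilingual model
    sensitive to slang, irony, and cultural context."""
    flagged = {"scam", "idiot"}  # illustrative blacklist only
    words = comment.text.lower().split()
    return 1.0 if any(word in flagged for word in words) else 0.0


def moderate(comment: Comment, hide_threshold: float = 0.9,
             review_threshold: float = 0.5) -> Decision:
    """Auto-hide clear violations, auto-publish clear non-violations,
    and escalate uncertain cases to a human moderator."""
    score = classify_harmfulness(comment)
    if score >= hide_threshold:
        return Decision.HIDE
    if score >= review_threshold:
        return Decision.ESCALATE
    return Decision.PUBLISH


print(moderate(Comment(text="You are an idiot", language="en")))  # Decision.HIDE
```

The key design point of such a flow is that the automated component only decides unambiguous cases, while borderline content is always routed to human moderators.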
The ethics-based assessment of Elv.ai consisted of a series of six facilitated meetings held from April to September. The process involved seven team members from Elv.ai and four specialists in trustworthy AI from KInIT acting as facilitators. One of the team members served as the project owner and collaborated with the facilitators to prepare each session. The workshop participants held a range of roles (CEO, CTO, chief marketing officer, chief sales officer), which brought diverse perspectives to the meetings.
Before the assessment, Elv.ai proactively identified several potential ethical concerns that their AI system may raise. These included the risk of censorship through moderation, data privacy issues, and unjust classification of users. They also raised concerns about automation bias, especially over-reliance on the AI system’s decisions by end-users. Together we framed these concerns and addressed them within the broader concept of Trustworthy AI as defined in the Ethics Guidelines for Trustworthy AI (EGTAI) and the Assessment List for Trustworthy AI (ALTAI).
First, we analyzed the company’s ethical and societal issues with respect to their possible impacts on various direct and indirect stakeholders. We found that company management and annotators (“elfs”) are highly impacted direct stakeholders with high awareness of the possible impact. Every effort should be made to engage these stakeholders from the early stages of the development process to understand their needs correctly. Owners (clients), moderators and administrators, social media users, and application providers are also highly impacted stakeholders, but with low awareness of the possible impact. Even if they are not involved directly in the process, the organization should make an effort to find out how they perceive the impact of the AI system on their lives.
The development, deployment, and use of the Elv.ai system bring ethical and societal risks that can affect stakeholders. Based on our own methodology, which leverages EGTAI and ALTAI, we identified that most of the ethical and societal issues were tied to Technical Robustness and Safety (custom blacklists, inaccurate user categorization, excessive hiding), Transparency (mainly partial knowledge of system capabilities and lack of AI awareness in use), and Privacy and Data Governance (mainly processing of data from minors, group privacy, and handling and sharing sensitive personal data). There are also considerable risks regarding Societal and Environmental Well-being (deterioration of annotators’ well-being, deliberate misinterpretation), Diversity, Non-discrimination and Fairness (unclear fairness definition, language bias, representation bias, and data diversity), and Human Agency and Oversight (mainly over-reliance and automation bias). The discussion with the Elv.ai team also addressed the tension between freedom of speech and the audience’s right to be protected from misinformation.
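As a rough illustration of how such findings can be tracked after an assessment, the sketch below shows a minimal risk register that groups issues under the ALTAI requirements mentioned above and orders them by severity. The structure and the severity values are illustrative assumptions, not the actual outcomes of the assessment or KInIT’s methodology.

```python
# Illustrative risk-register sketch; entries echo examples from the text,
# but severities are placeholders, not real assessment results.
from dataclasses import dataclass
from enum import IntEnum


class Severity(IntEnum):
    LOW = 1
    MEDIUM = 2
    HIGH = 3
    VERY_HIGH = 4


@dataclass
class Risk:
    altai_requirement: str   # ALTAI requirement the issue falls under
    description: str         # issue identified during the assessment
    severity: Severity       # placeholder value for illustration


risk_register = [
    Risk("Technical Robustness and Safety", "Excessive hiding of legitimate comments", Severity.HIGH),
    Risk("Transparency", "Lack of AI awareness among end-users", Severity.HIGH),
    Risk("Privacy and Data Governance", "Processing of data from minors", Severity.VERY_HIGH),
    Risk("Human Agency and Oversight", "Over-reliance and automation bias", Severity.MEDIUM),
]

# Review the most severe risks first.
for risk in sorted(risk_register, key=lambda r: r.severity, reverse=True):
    print(f"[{risk.severity.name}] {risk.altai_requirement}: {risk.description}")
```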
During the ethics-based assessment, Elv.ai expressed a strong desire to identify and mitigate the ethical and societal risks of its system and their impact on stakeholders. The next step in mitigating the identified risks should be to address the most imminent items from the risk list (2 Very High and 18 High risks). From a broader perspective, there are two topics the company could embrace to achieve a more trustworthy system. First, implement a process for validating risks and countermeasures, such as stakeholder engagement workshops that validate the outputs of the assessment (involving not only direct stakeholders but also highly impacted indirect stakeholders), help avoid blind spots, and measure the impact of the proposed countermeasures. Second, implement transparent communication: transparency notes that explain the benefits and limitations of the system to affected stakeholders in appropriate language, internal checklists and guidelines that support employees, and other communication activities that demonstrate the company’s commitment to providing trustworthy systems and help avoid public miscommunication.
The Elv.ai team fully cooperated during the whole assessment process, including the preparation and evaluation phases, and provided a multidisciplinary team willing to work with the facilitators. Their enthusiasm for tackling challenges and addressing tough questions led to smooth and productive discussions during the workshops.
“I really appreciate this collaboration. Many new perspectives that I wouldn’t have thought of were opened up during the assessment. There were a lot of things on our minds before the assessment process even started, but we couldn’t grasp them precisely. The fact that KInIT experts were able to give our intuitions form and structure helped us understand what we needed to change to achieve trustworthy content moderation.”
Miroslava Filčáková
Chief Technical Officer at elv.ai