Disinfo 2025 Conference: Two Problems, One Solution – How Algorithmic Auditing Addresses Data Access and Systemic Risks

Walking into Cankarjev dom in Ljubljana this October felt different from other conferences I’ve attended. Maybe it was the sheer scale – over 400 people from across the globe, all working on some aspect of countering disinformation and mitigating harmful content. Maybe it was the timing – in a moment when attacks on journalists are escalating, fact-checkers face harassment, and researchers battle with social media platforms to obtain appropriate data access. Or maybe it was just the palpable sense that everyone in that massive brutalist building understood we’re in this fight together.

I was there representing KInIT with Ivan Srba and Dominik Macko, presenting our AI-Auditology work on the panel “What do we do with data?” And what became clear over those two packed days is that our research speaks to exactly the problem this community is grappling with: how do you audit platforms when they control access to the very information you need?

Twenty-One Months for What Wasn’t Even What We Asked For

This year, the Disinfo conference received more than 200 proposals, so it is a great honour that our contribution was among the few selected for the main conference programme as part of the Accountability track. During the presentation and the follow-up panel discussion, I focused on something more fundamental: can we actually study platform algorithms when platforms decide whether we get access?

And here is where algorithmic auditing can kill two birds with one stone: it can help researchers obtain the data that platforms make so difficult to access, and at the same time audit systemic risks. Sounds great, right? So how can we do that?

Photo by Disinfo 2025

Two Problems, One Solution

Obtaining data through official channels is complicated, resource-intensive, and time-consuming. Our own experience with TikTok proves that. We applied for access to their Research API. After 21 months – almost two years of constant back-and-forth, appeals, and justifications – we finally got access to something[1]: a Virtual Compute Environment, or VCE for short. Not what we requested. Not what our research design called for. Just what they decided to give us after Ivan refused to give up and appealed over and over again.

So let’s do it differently. Platform algorithms act as black boxes: we do not really understand them, and if we want to understand them we have to study them behaviourally. We build user personas (archetypes with specific characteristics) and observe how the algorithms treat them differently. Track what content surfaces. Monitor what gets suppressed. Document the patterns that emerge. This generates rich behavioural datasets collected over time – and that is how we obtain data the platforms do not want to provide. Such data, by the nature of the approach, do not capture the full picture of content present on a platform, but they have a clear advantage: every piece of content is tied to the persona it was recommended to.
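To make the approach concrete, here is a minimal sketch of one audit cycle in Python. Everything in it – the Persona and Observation records, the audit_cycle function, the fake_feed stand-in – is illustrative and hypothetical, not our actual pipeline; in a real deployment, fetch_feed would be whatever browser automation or API client drives a sock-puppet account.

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Callable

@dataclass
class Persona:
    """An archetypal user profile driving one sock-puppet account."""
    name: str
    interests: list[str]
    region: str

@dataclass
class Observation:
    """One recommended item, tied to the persona it was shown to."""
    persona: str
    item_id: str
    topic: str
    seen_at: str  # ISO 8601 timestamp

def audit_cycle(persona: Persona,
                fetch_feed: Callable[[Persona], list[dict]]) -> list[Observation]:
    """Scrape one feed for a persona and log what surfaced.

    fetch_feed stands in for whatever automation actually drives the
    account; it returns raw feed items as dictionaries.
    """
    now = datetime.now(timezone.utc).isoformat()
    return [Observation(persona=persona.name,
                        item_id=item["id"],
                        topic=item.get("topic", "unknown"),
                        seen_at=now)
            for item in fetch_feed(persona)]

# Toy stand-in for a real scraper, just to make the sketch runnable.
def fake_feed(persona: Persona) -> list[dict]:
    return [{"id": f"{persona.name}-{i}", "topic": t}
            for i, t in enumerate(persona.interests)]

teen = Persona(name="teen_gamer", interests=["gaming", "memes"], region="SK")
for obs in audit_cycle(teen, fake_feed):
    print(obs)
```

In practice the observations would be persisted rather than printed, but the shape of the record – content tied to the persona it was shown to – is the whole point.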

And what is “the second bird”? Now that we have this rich dataset of algorithmic behaviour across different personas, examinable over time, we can also investigate systemic risks that the DSA audit reports do not seem to examine sufficiently or at the granularity needed. In our research, we analysed all published audit reports covering systemic risks of very large online platforms, focusing on three DSA articles (restrictions on profiling minors, recommender system transparency, and limits on targeted advertising). The key takeaway: the variance in what should have been a standardized methodology was striking. Some audits barely scratched the surface technically. Others used completely different methodologies for the same questions. And all of them suffered from temporal blindness – they were point-in-time snapshots that said nothing about how the platforms behave over time.
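To illustrate what “over time” buys you, here is an equally hypothetical sketch, reusing the Observation record from the previous sketch, that buckets each persona’s feed by day and computes topic shares – the kind of longitudinal view a point-in-time snapshot audit cannot give.

```python
from collections import Counter, defaultdict

def exposure_shares(
    observations: list[Observation],
) -> dict[tuple[str, str], dict[str, float]]:
    """Per-persona, per-day share of each topic in the feed.

    Keys are (persona, day); values map topic -> fraction of that day's
    feed. Comparing buckets across days exposes drifts that a single
    point-in-time snapshot would miss entirely.
    """
    buckets: defaultdict[tuple[str, str], Counter] = defaultdict(Counter)
    for obs in observations:
        day = obs.seen_at[:10]  # ISO date prefix, e.g. "2025-10-09"
        buckets[(obs.persona, day)][obs.topic] += 1
    return {key: {topic: n / sum(counts.values())
                  for topic, n in counts.items()}
            for key, counts in buckets.items()}
```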

The algorithmic auditing paradigm we are researching in the AI-Auditology project can close these gaps: it provides a longitudinal, platform-agnostic approach to representative audits, systematically covering a social media environment and authentically replicating user behaviour.

Questions That Showed We’re Addressing Something Real

After we presented, people kept finding us. Not just polite conference chatter – genuine interrogation of how this actually works. Researchers wanted implementation details. Policy experts asked about our communication strategies with platforms and our future plans. Technical folks dove straight into the mechanics, such as how sock-puppet accounts can avoid detection.

But what struck all of us was the intensity of interest. People recognized immediately that we’re pointing at a genuine problem. Current auditing approaches aren’t delivering the rigor the DSA calls for. Platform-controlled access is fundamentally broken. If we want real accountability, we need methods that platforms can’t simply deny or delay into irrelevance.

Some conversations turned toward potential collaboration – researchers facing similar access barriers wanting to explore joint approaches. Others were simply trying to understand if this methodology could work for their specific contexts. But the through-line was clear: highlighting these gaps between what audits claim to do and what they actually accomplish isn’t just criticism for its own sake. It’s about building something better.

The buzz around our work showed me that bridging technical execution with policy analysis resonates. You can’t separate them. Policy frameworks without technical grounding become empty promises. Technical capabilities without policy context miss the bigger picture of what accountability actually requires.

Learning Across Tracks in a Packed Schedule

Two days of parallel sessions meant constant choices. I planted myself mostly in the Accountability track, which ranged from methodologies for transforming data into evidence, to frank discussions about how we might “deshitify” our online spaces – a term that got some laughs but pointed at real frustration with current platform design.

Ivan and Dominik, meanwhile, attended many of the technology-oriented sessions, presentations, and panels, ranging from vulnerabilities and potential misuse of generative AI to technological standards for describing disinformation, harmful content, and FIMI in general. One nice point came from neuroscience (and how memory in our brains works): it is hard, almost impossible, to change the mind of someone who has already consumed disinformation and believes it by debunking it with counter-facts. That does not mean fact-checking is a waste of time; it does suggest we should focus more on pre-bunking, so that the information cannot be misused in the first place.

Multiple presentations examined disinformation-as-a-service – essentially, infrastructure that different actors can leverage for influence operations, with some operations showing potential ties to state actors. The OSINT sessions showcased updated tools for investigation. One talk explored Chinese infrastructure serving both Chinese and Russian disinformation goals (often at the same time).

Across all of it, we kept seeing connections. Someone’s mapping systemic risks while someone else builds enforcement tools. Journalists deploy OSINT while lawyers push for transparency. Civil society demands access rights while researchers develop audit methodologies. Everyone’s tackling different facets of the same core challenge: how do you create accountability in systems explicitly designed to resist it?

And then came a bit of an absurd moment during one of the corporate presentations. A TikTok representative presented on how TikTok maintains a safe environment for all users. The room’s collective skepticism was almost tangible. We’ve all seen the data. We know what actually happens on these platforms. Watching someone confidently describe an alternate reality while researchers holding contradictory evidence sat in the audience felt bizarre.

My Voice in a Language I Don’t Speak

One unexpected outcome of the conference was being featured in a French podcast called Propagations. They approached me after my presentation and later recorded an interview with me in English. Only once the episode was published did I find out that the entire episode had been translated (there is a female French voice dubbed over mine). So, funnily enough, there is now a podcast episode explaining our research fluently in French – a language I absolutely cannot speak. You can check it out if you’re curious (and francophone).

Architecture as Metaphor

Cankarjev dom is this imposing communist-era structure on Republic Square – pure brutalist architecture with all the concrete and sharp angles that style implies. During our tour, we learned the meeting spaces are built over what were originally bunkers. Cold War infrastructure repurposed for information warfare discussions. The symmetry wasn’t lost on anyone.

Ljubljana itself was a pleasant discovery between sessions. The city has an intimate, walkable quality – rich with historical layers, cozy despite being a capital. Walking around after a full day, exploring the architecture and streets, provided a nice contrast to the intensity of the conference discussions.

What Stays With Me

The Disinfo 2025 conference wasn’t just about panels and presentations – though those were valuable. It was about viscerally understanding that this work doesn’t happen in isolation. When we struggle for platform access at KInIT, so do teams in Austria, Germany, Belgium, the Middle East, and everywhere else. When we identify gaps in audit methodologies, other researchers are documenting the same problems. When we develop new technical approaches, there’s a community ready to test, critique, and build on them.

For anyone doing platform accountability research, especially in smaller countries, that matters enormously. We’re not fighting these battles alone. We can share strategies, coordinate approaches, support each other when platforms or governments push back. The knowledge exchange that happened in those two days, formal and informal, created connections that extend well beyond Ljubljana.

What we’ll carry forward isn’t just specific technical insights or policy frameworks, though we learned plenty. It’s the reminder that accountability infrastructure requires community infrastructure. No single research team, no matter how well-resourced, can solve these challenges alone. But a network of people with diverse expertise, all refusing to accept opacity as inevitable, all committed to evidence-based understanding of how these systems function – that can actually shift things.

The obstacles aren’t disappearing. Platform resistance continues. Data access remains a nightmare. Attacks on accountability workers are intensifying. But so is the collective determination we witnessed in Ljubljana. The sophistication of methods. The willingness to collaborate. And ultimately the collective refusal to stand down.