April 20, 2026

AI4TRUST: What building detection infrastructure tells us about the fight for information integrity

Very Large Online Platforms decide what counts as systemic risk research. That's self-regulation failing.

Threats to information integrity are cheap to produce; verification is not. A single actor running influence operations with agentic AI can now produce manipulated text, images, audio, and video faster than entire newsrooms can debunk them.

What AI4TRUST delivered

AI4TRUST was a Horizon Europe-funded project running from January 2023 to February 2026. It brought together 17 organisations across 11 countries: researchers, developers, fact-checkers, journalists, and media professionals. The aim: shared infrastructure to detect and assess manipulated content at scale.

The project delivered two main components: the Toolbox and the Monitoring Dashboard. The Toolbox lets users analyse individual items (text, image, audio, video) with AI modules, including deepfake detection, reverse video search, and detection of mismatches between images and the claims they accompany. The Monitoring Dashboard supports continuous content collection from platforms, automated risk scoring, and human validation in a single workflow.

GDI developed the decision-support system at the core of the Monitoring Dashboard. It aggregates multiple signals from the detection modules into a composite risk indicator, trained on diverse content annotated by fact-checkers in the consortium. The system is designed to surface potentially high-risk content patterns for further human review.
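
The aggregation logic itself is not spelled out publicly, but the pattern is familiar. As a minimal sketch, assuming a weighted combination of per-module scores gated by a human-review threshold; the module names, weights, and threshold below are all hypothetical, not the trained AI4TRUST system:

```python
from dataclasses import dataclass

@dataclass
class ModuleSignal:
    """Score from one detection module, normalised to [0, 1]."""
    name: str        # e.g. "deepfake_video", "image_claim_mismatch"
    score: float     # module confidence that the item is manipulated
    weight: float    # importance of this module (learned or hand-tuned)

def composite_risk(signals: list[ModuleSignal]) -> float:
    """Aggregate per-module scores into one composite risk indicator.

    Hypothetical weighted average; the real system is trained on
    fact-checker annotations and may combine signals non-linearly.
    """
    total_weight = sum(s.weight for s in signals)
    if total_weight == 0:
        return 0.0
    return sum(s.score * s.weight for s in signals) / total_weight

REVIEW_THRESHOLD = 0.7  # illustrative cut-off for human validation

signals = [
    ModuleSignal("deepfake_video", 0.82, 0.5),
    ModuleSignal("reverse_video_search", 0.40, 0.2),
    ModuleSignal("image_claim_mismatch", 0.91, 0.3),
]

risk = composite_risk(signals)
if risk >= REVIEW_THRESHOLD:
    print(f"risk={risk:.2f}: queue item for fact-checker review")
```

Whatever the exact combination rule, the design principle is the one AI4TRUST describes: the system ranks and routes, and humans decide.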

Where platforms undermine detection

Detection tools are only as useful as the data they can access. The consortium evaluated data access across major platforms and found a fragmented landscape. YouTube, Telegram, and later Bluesky proved workable. Others did not.

The timing shaped everything that followed. As the project coordinator noted, the project was accepted the same week Elon Musk acquired Twitter/X. Generative AI went mainstream weeks later. Over the following three years, both access and the threat environment kept shifting, and not for the better.

X was the most visible break. After the acquisition, X ended free API access. A formal DSA Article 40 application followed in March 2024; it was rejected the following month on the grounds that the proposed research did not solely contribute to understanding systemic risks under DSA Article 34. Meta rejected a formal application in December 2024, arguing that building a detection tool on its data would constitute a derivative work, prohibited under its product terms regardless of commercial intent. TikTok offered only limited access. Informal contact suggested a formal application was unlikely to succeed, and the consortium did not pursue one.

⸻⸻⸻⸻⸻⸻⸻⸻⸻⸻⸻⸻⸻⸻⸻⸻

DSA Article 40: the numbers

Of 46 applications registered by the DSA 40 Collaboratory, 34 have been decided: 20 accepted, 14 rejected. The most common rejection reason: the research does not address systemic risks. This amounts to platforms deciding for themselves what constitutes systemic risk research.

⸻⸻⸻⸻⸻⸻⸻⸻⸻⸻⸻⸻⸻⸻⸻⸻

Under DSA Article 40, platforms must provide vetted researchers studying systemic risks with data access. In practice, the mechanism is failing: even a well-resourced EU consortium cannot reliably secure platform cooperation.

But restricted access is only one way platforms undermine detection efforts. Platform recommender systems rank content by engagement, which means manipulated content that provokes reaction is amplified to wide audiences before any detection system can flag it. Public funding builds detection tools; platform business models ensure there is more to detect.
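
To make that timing problem concrete, here is a toy illustration; it models no platform's actual ranker, and the engagement rates, detection delay, and reach figures are invented:

```python
# Toy illustration of why engagement ranking outruns detection.
# All numbers are invented; no real platform is modelled.

posts = [
    # (post_id, engagement_rate, is_manipulated)
    ("benign_news", 0.03, False),
    ("manipulated_outrage_clip", 0.12, True),   # provokes reactions
    ("benign_explainer", 0.02, False),
]

DETECTION_DELAY_HOURS = 6   # assumed lag before a flag lands
REACH_PER_HOUR = 10_000     # assumed impressions for a top-ranked post

# An engagement-ranked feed puts the manipulated clip in the top slot...
ranked = sorted(posts, key=lambda p: p[1], reverse=True)
top_id, _, manipulated = ranked[0]

# ...where it accumulates reach for the whole window before any flag.
if manipulated:
    reach_before_flag = REACH_PER_HOUR * DETECTION_DELAY_HOURS
    print(f"{top_id}: ~{reach_before_flag:,} impressions before review")
```

The asymmetry does not depend on the invented numbers: as long as provocation predicts engagement and detection takes non-zero time, the ranker distributes the content before the flag arrives.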

What should change

At the closing event in Brussels, the consortium's message to policymakers was direct: the detection infrastructure can be built. But technical capability alone is not enough. AI4TRUST's experience reveals concrete gaps in the EU's regulatory architecture that require urgent attention. The following recommendations are addressed to EU institutions, Member States, and the Digital Services Coordinators responsible for DSA implementation.

⸻⸻⸻⸻⸻⸻⸻⸻⸻⸻⸻⸻⸻⸻⸻⸻

Recommendations

Reform DSA Article 40 vetting to match research timelines. The current process is structurally incompatible with fixed-term Horizon projects. The Commission should mandate a maximum 60-day approval window, with fast-track provisions for pre-approved academic consortia and a standardised data-sharing protocol that platforms cannot unilaterally narrow.

Integrate FIMI explicitly into DSA systemic risk assessments. VLOPs should be required to report indicators specific to foreign information manipulation and interference (FIMI) in their annual risk assessments, and the Commission should issue implementing guidance linking the EEAS FIMI taxonomy to DSA Article 34 obligations.

Mandate interoperability of detection infrastructure. Tools developed through publicly funded projects such as AI4TRUST should not be siloed. The Commission should require that VLOPs provide API-level compatibility with approved third-party detection systems, reducing dependence on platform goodwill and enabling continuous, independent monitoring (a sketch of what such an interface could look like follows these recommendations).

Address amplification incentives as a risk vector. Algorithmic amplification is not a neutral feature; it is a material enabler of manipulated content's reach at scale. DSA Article 27 obligations on recommender system transparency should be strengthened to require disclosure of amplification thresholds and audit access for approved researchers investigating coordinated inauthentic behaviour.

⸻⸻⸻⸻⸻⸻⸻⸻⸻⸻⸻⸻⸻⸻⸻⸻
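
No such interoperability standard exists today. As one way to picture what "API-level compatibility" could mean in practice, here is a hypothetical contract sketch; every type and method name is invented for illustration:

```python
from dataclasses import dataclass
from typing import Protocol

@dataclass
class ContentItem:
    """Minimal platform-agnostic record a detection system would ingest."""
    item_id: str
    media_type: str      # "text" | "image" | "audio" | "video"
    url: str
    posted_at: str       # ISO 8601 timestamp

@dataclass
class RiskFlag:
    """What the detection side reports back, for audit and review."""
    item_id: str
    risk_score: float    # composite indicator in [0, 1]
    modules_fired: list[str]

class DetectionFeed(Protocol):
    """Contract a VLOP API would have to satisfy for interoperability."""
    def stream_recent(self, since_iso: str) -> list[ContentItem]: ...
    def submit_flag(self, flag: RiskFlag) -> None: ...
```

The point of mandating a stable contract is to shift the maintenance burden: the platform, not each research consortium, would be responsible for keeping the interface open and unchanged.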

These measures are mutually reinforcing. Without data access reform, detection tools cannot operate. Without FIMI-specific risk categories, DSA enforcement will remain too generic to counter state-sponsored information operations. The AI4TRUST experience makes the case: technical capability exists; the regulatory framework must now match it.

AI4TRUST was funded by the European Union's Horizon Europe programme under Grant Agreement No. 101070190. For more information, visit ai4trust.eu.

Photo by Alexandre Lallemand on Unsplash