September 7, 2022

Ad Tech Policy and Enforcement Gaps: Challenges and Solutions

From policy to practice — closing the gap

The internet and the digital world are evolving rapidly, and change is long overdue: policymakers, companies, and citizens must demand it. In response to ad tech’s failed attempts at self-regulation, governments and regulatory bodies around the world are developing frameworks to tackle the monetisation of disinformation.

To help contextualise the problem and propose concrete recommendations, the Global Disinformation Index (GDI) analysed current ad tech policies and their enforcement across the industry.

GDI’s assessment approach

Understanding GDI’s adversarial narrative conflict framework is critical to interpreting the findings in this report, and more importantly to tackling today’s constantly evolving, complex online threat landscape. This landscape features tools and actors that enable abusive and harmful behaviours, which often slip through the gaps in current monetisation and content moderation policies.

Overly simplistic definitions of disinformation rooted in fact-checking and “verifiably false information” are insufficient to enable the demonetisation of harmful content. These definitions also leave gaps that intentionally misleading narratives can slip through, especially when those narratives are crafted from cherry-picked elements of the truth.

Utilising the lens of adversarial narrative conflict — which goes beyond fact-checking or overly simplistic true vs false dichotomies — provides a more comprehensive basis for understanding disinformation tactics and risks. 

Based on this framework, GDI tracks more than 20 adversarial narrative topics (such as climate change denial, voter fraud and antisemitism) and continuously monitors the supply policies of 44 ad tech companies (the companies that provide the software and tools used for the placement, targeting and delivery of digital advertising).

Findings

For this study, GDI’s analysis of 44 ad tech companies in its database focused on 15 different adversarial narrative topics. Our findings include:

  • Most supply-side platforms (SSPs), ad exchanges and ad networks lack publisher policies that would enable them to demonetise the full spectrum of adversarial narrative topics;
  • 17 companies do not have policies covering any of these 15 narratives;
  • Some of the ad tech companies studied have no policies at all, making it hard for them to tackle even the most basic disinformation content;
  • 26 companies’ policies adequately address at least one narrative. However, in most instances the policy wording is vague and attempts to cover multiple different types of disinformation under a single policy;
  • Only three ad tech companies (Google, OpenWeb and Magnite) have a policy for each of the 15 selected disinformation narrative topics;
  • By monitoring ads displayed on disinformation websites and identifying which ad tech company served each ad (see the sketch after this list), GDI has recorded numerous instances of publisher policy infringements;
  • GDI has found evidence that 25% of the ad tech companies analysed (11 of the 44) infringed at least one of their own policies;
  • 33% of the ad tech companies studied either had no publicly available policies or had policies that did not cover the adversarial narratives tracked by GDI.
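
As a simplified illustration of one building block of such monitoring, the sketch below parses a site’s ads.txt file, the IAB standard in which a publisher lists the ad systems authorised to sell its inventory. This is a minimal sketch, not GDI’s actual pipeline, and the company-to-domain mapping is only a small illustrative sample.

```python
# Minimal sketch: infer which ad tech companies are authorised to sell a
# site's ad inventory by parsing its ads.txt (IAB standard). This is an
# illustration, not GDI's methodology; the domain mapping is a small sample.
import urllib.request

KNOWN_AD_TECH = {
    "google.com": "Google",
    "rubiconproject.com": "Magnite",
    "criteo.com": "Criteo",
    "openx.com": "OpenX",
}

def sellers_from_ads_txt(site: str) -> set[str]:
    """Fetch https://<site>/ads.txt and return recognised ad tech companies."""
    url = f"https://{site}/ads.txt"
    with urllib.request.urlopen(url, timeout=10) as resp:
        text = resp.read().decode("utf-8", errors="replace")
    found = set()
    for line in text.splitlines():
        line = line.split("#", 1)[0].strip()  # strip comments and whitespace
        if not line:
            continue
        # Record format: <ad system domain>, <seller id>, DIRECT|RESELLER[, cert]
        fields = [f.strip().lower() for f in line.split(",")]
        if len(fields) >= 3 and fields[0] in KNOWN_AD_TECH:
            found.add(KNOWN_AD_TECH[fields[0]])
    return found

if __name__ == "__main__":
    print(sellers_from_ads_txt("example.com"))
```

Note that ads.txt only shows which companies are authorised to sell a site’s inventory; attributing a specific served ad to a specific company, as GDI’s monitoring does, additionally requires inspecting the ad markup or the network requests that delivered it.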

Figure 1. Sample of publisher policy coverage on six adversarial narrative topics

Only Google, Criteo and Magnite have policies covering all six of the sampled adversarial narrative topics.

GDI’s research shows that the supply quality policies ad tech companies have in place are often incomplete and not comprehensive enough to address all types of disinformation. Moreover, these policies are rarely updated to capture new or evolving adversarial narratives, as the case studies in this report show.

Figure 2. Google continues monetisation of anti-Ukrainian content on OpIndia.com

The image shows Google placing a Puma ad on disinformation content, despite having a policy against such placements.

Conclusion and recommendations

International norms and best practices for our online space are being developed by governments, private companies, citizens, and civil society organisations.

The potential to reform the disinformation ecosystem is close at hand, but only if regulations and policies are enforced.

How can these groups achieve this important aim? To combat disinformation and protect our online and offline world, we must create a stronger regulatory regime that includes, but is not limited to, the following:

  • Regulatory initiatives such as the Digital Services Act should adopt the adversarial narrative framing to capture the full scope of harmful content.
  • Policies must target the monetisation of disinformation, removing the financial incentive for creating such harmful content: the engagement, page views and advertising revenue driven by divisive content.
  • Disinformation risk assessments of sites must be provided by neutral, independent third parties with no stake in the current ad tech ecosystem.
  • Sites found to have the highest adversarial narrative density should be demonetised at the site level, as a page-level approach is insufficient (see the sketch after this list).
  • Regulation must take an industry-wide approach, targeting the wider ad tech industry and setting a regulatory floor.
  • Policies must create an independent scrutiny mechanism to assess the commitment level of relevant parties.
  • Transparency measures to foster compliance could include a repository of policies for platforms and the ad tech industry.
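
To illustrate the site-level versus page-level distinction, here is a minimal, hypothetical sketch (all names and list contents are invented, not drawn from GDI’s data): a filter keyed on a site’s domain blocks every page of a risk-rated site, whereas a filter keyed on exact URLs only blocks pages already known to it.

```python
# Hypothetical sketch contrasting site-level and page-level demonetisation.
# The blocklists are invented examples; a real system would also normalise
# URLs to the registrable domain (e.g. via a public-suffix library).
from urllib.parse import urlparse

HIGH_RISK_SITES = {"disinfo-site.example"}                     # site-level list
HIGH_RISK_PAGES = {"https://news.example/one-known-bad-page"}  # page-level list

def allow_bid_site_level(page_url: str) -> bool:
    """Site level: refuse to monetise any page on a listed domain."""
    domain = urlparse(page_url).netloc.removeprefix("www.")
    return domain not in HIGH_RISK_SITES

def allow_bid_page_level(page_url: str) -> bool:
    """Page level: refuses only exact URLs, so new pages slip through."""
    return page_url not in HIGH_RISK_PAGES

# A newly published article on a listed site is caught at the site level...
assert not allow_bid_site_level("https://disinfo-site.example/brand-new-post")
# ...but passes a page-level check, because that URL was never listed.
assert allow_bid_page_level("https://disinfo-site.example/brand-new-post")
```

Because high-risk sites publish new pages continuously, a page-level list is permanently out of date; demonetising at the site level covers new content automatically, which is the rationale for this recommendation.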

Enforcement remains the key challenge going forward. The regulatory shift towards new transparency obligations will bring accountability to the ad tech industry, address the opaqueness associated with online advertising, and bring independent expertise into the assessment of online content. All stakeholders must work together to develop a long-term, industry-wide solution to end the monetisation of harmful disinformation.

For more information on GDI’s recommendations, please download the full report.
