June 22, 2022

Disinformation as Adversarial Narrative Conflict

By Danny Rogers

The hardest part of combatting disinformation is defining it. Whether one works in trust and safety at a major platform, codifies new regulations as a global policymaker, or rates open web domains for disinformation risk, working from a common, comprehensive definition is paramount.

But defining disinformation often feels akin to catching smoke. Add to that the complications of parochial commercial interests, or even the malicious actors themselves, and you get a picture of why a good definition is so difficult to pin down.

The Santa Claus Test

One common pitfall when attempting to define disinformation is the tendency to get bogged down in semantic arguments. One such argument is the distinction between so-called “misinformation” and “disinformation,” where “misinformation” is reserved for cases where someone unintentionally shares incorrect information, while “disinformation” identifies intentional falsehoods.

There are a number of issues with this argument, but the most glaring one is that it doesn’t pass what we at GDI call the “Santa Claus” test. If disinformation were as simple as someone intentionally lying on the internet, then we’d be clamoring to moderate every mention of Santa Claus or the Tooth Fairy off the web (sorry to disappoint anyone). The NORAD Santa Tracker would be categorized as a disinformation operation.

At the same time, definitions that rely on simple true-versus-false dichotomies, or solely on fact checking, miss some obvious examples of disinformation. Consider a malicious actor crafting a misleading narrative by selectively presenting cherry-picked elements of fact without ever providing a complete picture. A prime example of this kind of activity is the “Illegal Alien” tag on Breitbart News.

Image: Breitbart's "Illegal Alien" section. The framing of these stories shows the limitations of fact checking as the standard for determining disinformation.

This section of the infamous Breitbart website is chock-full of local crime stories whose headlines mostly start with the phrase “Illegal Alien…” Each individual story would likely pass a fact check as technically correct: the crime did happen, and the alleged perpetrator likely was an undocumented immigrant. But by selectively presenting these stories together (and, as an aside, by using the emotionally charged phrase “Illegal Alien”), Breitbart is peddling an overarching, misleading narrative that undocumented immigrants disproportionately commit crimes, a claim that is statistically incorrect. While most of us would recognize this as disinformation, it would technically pass a test based on true-false fact checking.

Adversarial Narratives — A New Framework for Disinformation

In 2019, GDI published an in-depth report, Adversarial Narratives: A New Model for Disinformation, that laid the foundation for a new definition of disinformation, one that captures the nuance of the above examples. We consider this definition one of our most innovative contributions to the counter-disinformation space, and it underlies everything we do, in both our human-powered research and our automation.

Revisiting Breitbart’s “Illegal Alien” page, what exactly about this page makes it disinformation? As stated above, it is less about the individual stories or facts and more about the overarching narrative. That narrative is intentionally misleading and, more importantly, adversarial in nature toward immigrants. It is also the kind of content that can lead to anti-immigrant violence like the El Paso Walmart shooting in 2019. That brings us to our more useful definition of disinformation.

At GDI, we view disinformation through the lens of adversarial narrative conflict. Whenever someone intentionally peddles a misleading narrative (often implicit, and constructed from a mix of cherry-picked elements of fact and outright fabrications) that is adversarial toward an at-risk group or institution and, most importantly, creates a risk of harm, they are engaging in disinformation.

In our view, at-risk groups and institutions fall into three broad categories: at-risk demographics, institutions such as science and medicine, and elements of democracy. Examples range widely: from immigrants, as in the example above, to protected classes such as women, persecuted minorities, people of color, the LGBTQ+ community, and children; to scientific or medical consensus on topics such as climate change or vaccines; to democratic processes such as voting and the judicial system. The harms can range from financial damage to illness, violence, or even death. In our work for the Christchurch Call, we have laid out an example matrix of possible harms stemming from disinformation that can serve as a useful model for trust and safety teams and global policymakers.

As you can see, this definition transcends simple true-versus-false dichotomies and goes well beyond fact checking to assess an overarching narrative’s risk of harm to vulnerable populations or institutions. It also clearly illustrates why the story of Santa Claus does not fit (it is neither adversarial nor harmful) while a cherry-picked collection of factually accurate stories about crimes committed by immigrants does.
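To make the three-part test concrete, here is a minimal sketch in Python of how such a rubric could be encoded. It is illustrative only: the NarrativeAssessment fields and the is_disinformation helper are assumptions made for this post, not GDI’s published methodology or scoring schema.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class NarrativeAssessment:
    """Illustrative rubric fields; not GDI's actual schema."""
    intentionally_misleading: bool     # misleading narrative, even if built from true facts
    adversarial_target: Optional[str]  # at-risk group, institution, or democratic process
    risk_of_harm: bool                 # e.g. financial damage, illness, violence

def is_disinformation(a: NarrativeAssessment) -> bool:
    # All three conditions must hold: misleading intent, an adversarial target,
    # and a credible risk of harm. The truth of each individual fact is not the test.
    return a.intentionally_misleading and a.adversarial_target is not None and a.risk_of_harm

# Santa Claus: intentionally untrue, but neither adversarial nor harmful.
santa = NarrativeAssessment(True, None, False)

# A cherry-picked "immigrant crime" collection: individually factual stories
# assembled into a misleading narrative targeting immigrants, with a risk of violence.
immigrant_crime = NarrativeAssessment(True, "immigrants", True)

print(is_disinformation(santa))            # False
print(is_disinformation(immigrant_crime))  # True
```

The point of the sketch is that the fact-check question (is each claim true?) never appears in the decision; the narrative’s intent, target, and risk of harm do.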

Policy Interventions to Address Disinformation

One area where a comprehensive and coherent definition of disinformation is critical is the global regulatory arena. The recent wave of regulatory intervention in Europe targeting the ad tech industry is intended to protect fundamental rights online and establish meaningful measures for addressing illegal content and societal risks. Demonetization, which we strongly support, presents a pathway that protects free speech and choice while addressing the perverse incentives that drive the corruption of our information environment. The Digital Services Act (DSA), the Digital Markets Act (DMA), and the recently adopted Code of Practice on Disinformation (CoP) promise to transform the online landscape in the EU and globally, with measurable commitments to restricting advertising on pages and domains that disseminate harmful disinformation. However, the success of their implementation will depend on a comprehensive definition that transcends overly simple true-false dichotomies and encompasses all forms of adversarial narrative conflict of the kind we describe here. Our hope is that a definition compatible with what we have outlined becomes the basis for this and future regulatory efforts around the world.

Toward Our More Comprehensive Definition

We have said repeatedly that the counter-disinformation community needs to move beyond semantic arguments, overreliance on fact checking, and content moderation. Our comprehensive definition of disinformation as adversarial narrative conflict does just that. It provides a framework for understanding the broader universe of disinformation techniques, all of which center on crafting and peddling adversarial narratives.

It also illustrates the role that algorithmic recommender systems play in exacerbating the problem: adversarial narratives exploit our human tendency to dwell on negative content and thus disproportionately drive engagement on algorithmically driven platforms, and that engagement results in more ad sales. While humans at an outlet like Breitbart may curate a section of their website into stories about “immigrant crime,” algorithmic news feeds do the same thing billions of times a day for over half the world’s population, crafting automatically generated, highly personalized adversarial content streams that keep users engaged, on platform, and monetized. In the end, this corrupts the entire global information ecosystem.

Our hope is that by defining the problem in this more comprehensive way, we can catalyze industry and global policymakers to take the necessary actions to disrupt disinformation and its ensuing harms.
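To illustrate that dynamic, here is a toy sketch in Python of a purely engagement-optimized feed ranker. The items, scores, and adversarial flags are invented for illustration, and this is not any platform’s actual ranking algorithm; the point is simply that when predicted engagement is the only objective, emotionally charged adversarial narratives rise to the top without any human editor choosing them.

```python
# Toy sketch, not any platform's actual algorithm: when predicted engagement is
# the only ranking objective, adversarial narratives tend to surface first.
from dataclasses import dataclass

@dataclass
class Item:
    headline: str
    predicted_engagement: float  # illustrative score from an engagement model
    adversarial: bool            # illustrative flag from a narrative-risk review

feed = [
    Item("Local council approves new park budget", 0.02, False),
    Item("Illegal Alien charged in local robbery", 0.11, True),
    Item("Vaccine rollout reaches 80% coverage", 0.03, False),
    Item("They are coming for your children", 0.15, True),
]

# Pure engagement ranking: narrative harm never enters the objective.
ranked = sorted(feed, key=lambda item: item.predicted_engagement, reverse=True)

for item in ranked:
    label = "ADVERSARIAL" if item.adversarial else "ok"
    print(f"{item.predicted_engagement:.2f}  {label:11}  {item.headline}")

# The two adversarial items land at the top of the feed, where each impression
# is monetized, which is the perverse incentive that demonetization targets.
```

Demonetization interventions of the kind described above aim to break exactly this loop by removing the ad revenue attached to those top-ranked impressions.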