June 13, 2019

Disinformation and Networked Conflicts: Fifth Generation Warfare

The landscape of today’s disinformation conflicts was envisaged over 20 years ago by visionary authors and thinkers, including retired US Air Force Colonel Richard Szafranski.

In 1995, Szafranski forecast that successful information warfare campaigns would impose a falsely constructed reality on human targets by attacking both their knowledge and belief systems.

Szafranski’s work laid the foundations for a paradigm that would later be known as fifth generation warfare (5GW). Roughly 15 years later, in 2009, technology scholar Umair Haque described 5GW: “4G war was network against state. Think Al-Qaeda vs America. 5G war is network against network, market against market, community against community.”

That same year, a Wired article by David Axe on fifth-generation wars offered additional insights that are chillingly accurate in retrospect: “It will be ... a ‘vortex of violence,’ a free-for-all of surprise destruction motivated more by frustration than by any coherent plans for the future.”

Today, the parallels between 5GW and online threats and disinformation campaigns are clear.

At the GDI, we see this overall paradigm as a conflict, rather than a full-blown war. It is a networked conflict with many fronts and unclear actors who rely on multiple networks of distribution.

Ultimately, this new frontier is built on adversarial narratives. Here, we define a narrative as:

“An account of sequential or thematically connected events told in story form and embedded within a cultural context.”

An adversarial narrative, created via online content and actions, aims to stir conflict or opposition between groups.

It taps into the underlying and perceived grievances of a populace and goes after the foundational pillars of government and society.

The pushback against fifth generation (5G) mobile technology is one such case, as we highlight in our recent blog post.

In this conflict, a wide range of hybrid threat actors are weaponising false and malicious information: state actors, private influence operators, grassroots trolls and pure rent seekers (i.e. those just out to turn clicks into ad money).

As we outlined in our white paper, these agents can be organised by motivations (from political to financial) and degree of structure (from highly centralised to decentralised). What they have in common is that they all abuse and exploit adversarial narratives across the web ecosystem.
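To make this two-axis organisation concrete, here is a minimal sketch in Python, assuming a simple numeric encoding of each axis. The class, thresholds, and example actors are illustrative inventions, not the actual taxonomy from our white paper.

```python
# A minimal sketch of the two-axis organisation described above:
# motivation (political <-> financial) and structure (centralised <->
# decentralised). The encoding and example actors are illustrative
# assumptions, not the taxonomy from the GDI white paper.
from dataclasses import dataclass

@dataclass
class HybridThreatActor:
    name: str
    motivation: float  # 0.0 = purely political ... 1.0 = purely financial
    structure: float   # 0.0 = highly centralised ... 1.0 = fully decentralised

def describe(actor: HybridThreatActor) -> str:
    m = "financial" if actor.motivation >= 0.5 else "political"
    s = "decentralised" if actor.structure >= 0.5 else "centralised"
    return f"{actor.name}: mostly {m}, mostly {s}"

# Hypothetical examples spanning both spectra.
actors = [
    HybridThreatActor("state influence operation", motivation=0.1, structure=0.1),
    HybridThreatActor("influence operator for hire", motivation=0.7, structure=0.3),
    HybridThreatActor("grassroots troll network", motivation=0.3, structure=0.9),
    HybridThreatActor("pure rent seeker", motivation=0.95, structure=0.8),
]
for a in actors:
    print(describe(a))
```

Treating each axis as a spectrum rather than a binary reflects the point above: most actors sit somewhere between the political and financial poles, and between tight central control and loose decentralisation.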

Hybrid threat actors also share some general characteristics that define them as a group (see Figure 1).

Figure 1: Hybrid Threat Actors: Common Characteristics

  • Ephemeral: Threat actors may move very rapidly and may leave only short-lived artifacts (e.g., content removed through platform suspension or self-deletion).
  • Gradient of coordination: Coordination can range from none at all through to actively planned and coordinated group activity.
  • Blended authenticity: Hybrid threat agents may combine authentic elements (identities, accounts, beliefs, grievances, real news stories, etc.) with inauthentic elements (fake accounts, satire, false news, etc.).
  • Cross-platform: Attacks may be distributed across multiple accounts and platforms, and where platform enforcement actions occur, actors may readily migrate to other platforms to continue.
  • Peer-to-peer marketing: Threat actors make use of ads, social media posts with no placement cost, and peer-to-peer marketing (influencers).
  • Global partnerships: Threat actors may themselves be geographically diverse and distributed. They may be state-sponsored in origin (including by overtly hostile as well as allegedly ‘friendly’ allied nations).
  • Tacit approval from state actors: Threat actors may receive a range of backing from tacit approval through to material support from states and quasi-state foreign powers.
  • Financial motivation: Threat actors, or private influence networks for hire, may monetise distribution through ad networks, sell merchandise, or receive financial support from their audiences.
  • Bypass moderation filters: Threat actors may operate, usually intentionally, just below the threshold of platform rule enforcement, often through coded language and in-group references (e.g., dog-whistling).
  • Online and offline activities: Activities may blend online and offline actions (e.g., an online campaign reinforced by allies on both sides of an issue, or violent or threatening offline acts).

In today’s landscape, disinformation actors exploit the network dynamics of platforms to broadcast their message (e.g. through gamification), recruit new members into their radicalisation funnel and, through them, re-broadcast that message, continuing the cycle of the “digital influence machine.”

Disinformation agents, both domestic and foreign, have a large library of content to draw from, recycle, or launder when crafting new adversarial narrative campaigns intended to delegitimise institutions, practices, policies, and political officials. This was highlighted in a recent report on Russian disinformation campaigns designed to undermine the US justice system.

In practice, we have seen that these campaigns rely less on individual pieces of pure disinformation than on a slow, steady diet of manipulated half-truths and genuine information. This drip-feed crescendos into a larger disinformation campaign when a news cycle opportunity arises around a thematic issue that disinformation actors have already seeded.

We see a critical need to better map and track these networked conflicts and their use of adversarial narratives. The aim is for the GDI to expand its work on adversarial narratives and to provide platforms and governments with policy guidance on how to find, intervene in, and disrupt them.
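To give a flavour of what such mapping could involve, here is a minimal, purely illustrative sketch in Python using networkx. It is not GDI’s methodology; every actor, platform, and narrative name is invented. It simply represents actors, platforms, and narratives as a graph and asks which narratives are amplified across more than one platform.

```python
# A purely illustrative sketch of "mapping" a networked conflict:
# actors, platforms, and narratives as nodes in a directed graph.
# This is NOT GDI's methodology; all names and edge types are invented.
import networkx as nx

G = nx.DiGraph()

# Hypothetical actors and the platforms they post on.
G.add_edge("actor_A", "platform_1", relation="posts_on")
G.add_edge("actor_B", "platform_2", relation="posts_on")

# Both actors push the same adversarial narrative.
G.add_edge("actor_A", "narrative_X", relation="amplifies")
G.add_edge("actor_B", "narrative_X", relation="amplifies")

# Cross-platform migration after a hypothetical enforcement action.
G.add_edge("actor_A", "platform_2", relation="migrated_to")

# One simple question such a map can answer: which narratives are
# pushed by actors spanning more than one platform?
for narrative in [n for n in G if n.startswith("narrative")]:
    amplifiers = list(G.predecessors(narrative))
    platforms = {p for a in amplifiers
                 for p in G.successors(a) if p.startswith("platform")}
    print(narrative, "spans", len(platforms), "platforms via", amplifiers)
```

Please continue to watch this space.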