July 23, 2021
Ultimately, the utility of adopting the marketing funnel model for radicalisation is that it allows social media product features to be clearly mapped to stages of the radicalisation process, suggesting likely locations and methods for disruption and prevention.
We recognise that CVE is a multi-pronged mission involving a variety of stakeholders in governance, technology, law enforcement and civil society. We adopt the Organization for Security and Co-operation in Europe’s working definition of CVE as proactive efforts to: 1) counter efforts by violent extremists to radicalise, recruit and mobilise followers to engage in violent acts; and 2) address specific factors that facilitate and enable violent extremist recruitment and radicalisation to violence (OSCE, 2018).
As is evident from the amplification of violent propaganda after the shooting, hate speech is platform-agnostic, sometimes bursting from the darkest corners of the internet to the most open public squares too quickly for any one company to intervene.
With the rise of the Islamic State and its global propaganda campaigns in recent years, both state and non-state actors collaborated on initiatives to eradicate violent extremist propaganda, including the Global Internet Forum to Counter Terrorism, founded in 2017, as well as partnerships with civil society such as Jigsaw’s Redirect Method. Technology companies focused primarily on Islamic extremist actors, which, coupled with a multinational military operation in Syria and Iraq, has left the effectiveness of many prior digital CVE initiatives unmeasured and their hypotheses unproven.
Deplatforming and the host of technical tools used in the content moderation space may help us understand toxic hate speech at a macro level, but can they help us proactively make our cities and towns safer on a day-to-day basis? To what extent do digital tools like Natural Language Processing (NLP), Machine Learning (ML) and Artificial Intelligence (AI) play a role in a qualitative research battle where context reigns supreme over raw data?
Moving forward, it is important to recognise the dynamic and platform-agnostic behaviours of the agents of disinformation and how they evade content moderation. We must treat this asymmetric power dynamic no differently than traditional pre-digital ecosystems, applying the same awareness of potential manipulation to social media and its networked actors. It is with that in mind that we present the following recommendations, urgently calling for the harmonisation of protocols across social media platforms, cloud service providers, and e-commerce providers.
GDI has examined the current legislative approaches of a dozen countries to address the problem of disinformation. Our study provides an overview and captures the gaps in these governments’ approaches that need to be addressed.
The Global Disinformation Index (GDI) and Institute for Strategic Dialogue (ISD) have published a new study which shows how 17 known German far right groups and actors allegedly use 20 different online funding services to fund their activities.
The Global Disinformation Index (GDI) and the Institute for Strategic Dialogue (ISD) have analysed the digital footprints of 73 US-based hate groups, assessing the extent to which they used 54 online funding mechanisms. This research aims to map out the online infrastructure behind hate groups’ financing and fundraising in order to support efforts to defund and therefore disempower hate movements in the U.S. This research was graciously funded by the Knight Foundation.