July 13, 2023
Over the past century, the world has focused on defining and securing human rights for all. In 2022, the Human Rights Council affirmed in resolution 49/21 that “disinformation can negatively affect the implementation and realisation of all human rights.” Though disinformation has always existed, the digital revolution has allowed it to spread farther and faster around the globe. Meanwhile, advertising technology has enabled the monetisation of harmful content, and as harmful content is often engaging, this has created an economic incentive to peddle disinformation. To avoid unravelling the progress we have already made and ensure human rights for the generations to come, it’s critical for countries around the world to take the threat of disinformation seriously.
Data shows that disinformation is significantly undermining human rights all over the world by disrupting civic integrity, eroding faith in public institutions and inflaming social hatred. The UN Special Rapporteur on the promotion and protection of the right to freedom of opinion and expression reiterated the need to build social resilience against disinformation and promote multi-stakeholder approaches that engage civil society as well as states, companies and international organisations.
GDI aims to contribute to building social resilience and protecting information integrity by increasing transparency around the monetisation of content, thereby supporting responsible choices that disincentivise the creation and spread of online disinformation.
We view disinformation through the lens of adversarial narrative conflict, which exacerbates socio-cultural divisions, fuels anger among individuals and seeks to uproot trust in democratic institutions. Our definition of disinformation is grounded in and informed by internationally recognised human rights standards, including the Universal Declaration of Human Rights and the International Covenant on Civil and Political Rights.
Examples of adversarial narratives undermining some of our most basic human rights are, unfortunately, not difficult to find. Below are just a few examples of how disinformation undermines civic integrity, erodes trust in institutions and sows hatred.
Democracy and Civic Integrity
The right to free and fair elections has been mandated by the United Nations as a human right since 1976. The UN Human Rights Committee has also asserted that states and their governments are required to ensure that voters are free from interference and can form their own opinions independently. For this to take place, voters must have access to trustworthy and reliable sources of information regarding candidates and where, when and how to vote. The production and circulation of disinformation online, by both governments and individuals, can prevent, and has prevented, free and fair elections from taking place.
During Kenya’s 2022 General Election, disinformation was used to undermine women’s capacity to make informed political decisions. It also undermined their roles in public institutions, including running for political office. The spread of these narratives on platforms such as TikTok and the open web inflamed political tensions, which stoked fears of election-related violence. This attempt to dissuade women from running for office and participating in the electoral process has disturbing effects on civic integrity and lays the groundwork for excluding women from the basic right to self-governance. Below is just one example of this phenomenon.
Disinformation’s power to incite hatred and violence has also been seen in major atrocities such as the Rwandan genocide and, more recently, during the Russian invasion of Ukraine. Anti-Ukrainian narratives have been extensively tracked by GDI since the outbreak of the conflict, with just one example below.
Which human rights are impacted by democracy and civic integrity disinformation?
Prerequisite rights to enable an environment for free and genuine elections:
Trust in Science
Disinformation polarises communities and societies by feeding audiences with divisive and misleading content that obscures, contradicts and undermines scientific, fact-based information critical to public health and safety. Article 25 of the Universal Declaration of Human Rights states that “everyone has the right to a standard of living adequate for the health and well-being of himself and his family, including food, clothing, housing and medical care and necessary social services.” When disinformation destabilises public trust in institutions, these human rights are threatened. During the COVID-19 pandemic, a rise in pseudo- or anti-science disinformation undermined informed and accurate decision-making around personal and community health.
‘The Great Reset’ is a prolific disinformation narrative that emerged from the pandemic, with roots in antisemitism and conspiratorial claims of a new world order controlled by global elites. Specifically, the Great Reset claims that COVID-19 and/or the vaccine was purposefully introduced as a population control plan implemented to help the global elite consolidate their power. GDI has found examples of monetised articles that promote this theory and weaken trust in scientific fact.
Which human rights are impacted by anti-science disinformation?
Where a narrative promotes actions that contradict what is needed to bring the pandemic to an end globally:
Hate Speech
Under Article 20 of the International Covenant on Civil and Political Rights, individuals of any nationality, race or religion are protected from hatred that incites discrimination, hostility or violence. Some of the most dangerous disinformation campaigns emerge from known hate groups or frequent sources of hate speech. When hate speech paves the way for real-world harm against protected groups, international human rights law is violated.
A recent judgement from the European Court of Human Rights emphasised that to exempt a producer - i.e. a person who has taken the initiative of creating an electronic communication service for the exchange of opinions on predefined topics - “from all liability might facilitate or encourage abuse and misuse, including hate speech and calls to violence, but also manipulation, lies and disinformation” (see Sanchez v. France [GC] no. 45581/15, §185, 15 May 2023). The ruling concerned the very specific case of an individual who, in his capacity as a politician, was fined for failing to delete Islamophobic comments posted by third parties on the publicly accessible Facebook “wall” used for his election campaign. The third parties were also convicted.
Which human rights are impacted by hate speech?
Where advocating or inciting hate, discrimination and violence (not exhaustive):
While these examples may seem outrageous, it is exactly this type of disinformation that garners the most attention and clicks. “Content Prioritisation” - the design and algorithmic methods that tech platforms use to promote or downrank content that appears in front of users - goes to the very heart of pluralism, diversity and the access to accurate, reliable information - a key aspect of freedom of expression and the foundation of a democratic society.
The algorithms used by tech companies for search, social and news feeds are optimised to increase advertising revenue. These algorithms promote highly engaging, often polarising content to users: the more people use a platform, the more advertising revenue is generated.
However, tech companies have recently shown a willingness to adhere more closely to international human rights law. The foundational principles enshrined in the UN Guiding Principles on Business and Human Rights state that enterprises “should avoid infringing on the human rights of others and should address adverse human rights impacts with which they are involved.” As the United Nations has stated, preventing human rights violations “requires taking adequate measures for their prevention, mitigation and, where appropriate, remediation”. There is still ample space for companies to step up and exercise due diligence to comply with human rights law, where human rights refers to internationally recognised human rights.
A variety of solutions have been proposed and piloted to confront the disinformation challenge, on both the legal and policy front and from a technology perspective. Some of these policy solutions, such as the Digital Services Act (DSA) within the EU, build on existing international frameworks and aim to protect users’ rights in online spaces. Algorithms - unless regulated - often amplify the most polarising content. At the extreme, adversarial narratives deliberately designed to promote real harm run directly counter to human rights for all.