October 21, 2022
Today, the Global Disinformation Index (GDI) is releasing a preview of results from the forthcoming report, Disinformation Risk Assessment: The Online News Market in the United States. GDI’s media market risk assessment methodology was developed to assist advertisers and the ad tech industry in assessing the reputational and brand risk when advertising with online media outlets and to help them avoid financially supporting disinformation online. The findings analyze the systemic risk factors within the U.S. media market and shed light on the riskiest and least risky online news outlets for disinformation in the country.
The full report will be released in the third week of November 2022.
GDI’s research looked at 69 U.S. news sites, selected on the basis of online traffic and social media followers, as well as geographical coverage and racial, ethnic and religious community representation. The index scores sites across 16 indicators – indicators which themselves contain many, many more individual data points – and generates a score for the degree to which a site is at risk of disinforming its readers.
The data from the study corroborates today’s general impression that hyperbolic, emotional, and alarmist language is a feature of the U.S. news media landscape.
Every site displayed some degree of cherry-picking facts, omitting relevant information, making unsubstantiated claims, and/or using logical fallacies. Many of the sites that regularly posted this kind of misleading, biased content also used sensational language to elicit an emotional response from the reader.
Fewer sites made widespread use of what GDI terms “targeting language,” which demeans or belittles people or organizations rather than simply presenting the news. However, several of the sites where such language was widespread specifically cover politics. The data showed that these adversarial narratives appear on both sides of the aisle with similar prevalence: 38 percent of articles directly targeted Democrats, and another 38 percent targeted Republicans.
Taken together, bias, sensationalism and targeting distract, divide and, as a result, disinform.
Here are the ten sites that showed the least and greatest levels of disinformation risk. Read on to find out how GDI determined this, what the data shows about the news media market overall, and what you can learn more about in our forthcoming report.
GDI defines disinformation as “adversarial narratives, which are intentionally misleading; financially or ideologically motivated; and/or, aimed at fostering long-term social, political or economic conflict; and which create a risk of harm by undermining trust in science or targeting at-risk individuals or institutions.”
This definition was developed to transcend many of the semantic arguments and other challenges facing the anti-disinformation space. Most definitions of disinformation emphasize its intentional nature, which cannot be directly measured, and the veracity of specific facts, which becomes extremely difficult to assess at scale. But identifying disinformation requires more nuance than simply evaluating whether an assertion is true or false. Not all false statements are disinformation (think: claiming that NORAD is tracking Santa). Meanwhile, a technically true statement can be presented out of context in a misleading and harmful way.
Based on this approach, GDI has developed a methodology that can quantify the level of disinformation risk on open-web news domains by identifying narratives that are misleading and harmful. The methodology looks at over 80 different signals in combination to gather an overall assessment of disinformation risk for a news website as a whole. The resulting score doesn’t determine that a site or a specific piece of content is or isn’t disinformation; but the summation of all the data collected does give advertisers and algorithms an evidence-based metric for making various decisions. GDI has developed and iterated on this methodology for digital news sources in more than 20 media markets worldwide, with input from our Technical Advisory Group and our research partners around the world.
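As a rough illustration of this kind of signal aggregation – the actual indicators, weights, and formula GDI uses are not published in this preview, so every name and number below is hypothetical – a composite domain score can be sketched as a weighted average of per-indicator scores:

```python
# Hypothetical sketch of signal aggregation. The indicator names, weights,
# and weighted-average formula are illustrative assumptions, not GDI's
# published methodology.

def combine(signals: dict[str, float], weights: dict[str, float]) -> float:
    """Aggregate per-indicator scores (each on a 0-100 scale) into a single
    0-100 index score as a weighted average."""
    total_weight = sum(weights[name] for name in signals)
    weighted_sum = sum(score * weights[name] for name, score in signals.items())
    return weighted_sum / total_weight

# Toy example with three made-up indicators.
signals = {"byline_policy": 80.0, "attribution": 60.0, "sensational_language": 40.0}
weights = {"byline_policy": 1.0, "attribution": 1.0, "sensational_language": 2.0}
print(combine(signals, weights))  # (80 + 60 + 40*2) / 4 = 55.0
```

The point of weighting, whatever the real scheme, is that no single indicator determines the score: many signals are read in combination.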
The review of each U.S. media outlet’s domain was conducted by a team of researchers from the Global Disinformation Lab at the University of Texas at Austin, who were trained to collect data on a set of indicators in two pillars: the Content pillar, based on a sample of content published on the site including news and opinion articles, and the Operations pillar, which reflects the operational policies, practices and past behavior of the media outlet.
The study was designed to categorize each of the 69 sites as either minimum, low, medium, high or maximum risk. These risk ratings were based on where the site’s overall index score fell within the distribution of all the scores in the dataset. That means the risk rating can be interpreted as: the level of disinformation risk relative to the other domains included in the study.
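A minimal sketch of such distribution-relative banding follows; note that the equal-quintile cut-points are an assumption made here for illustration, as GDI’s actual thresholds are not given in this preview:

```python
# Hypothetical sketch of distribution-relative risk banding. GDI's real
# cut-points are not public; equal quintiles are assumed purely for demo.
from statistics import quantiles

def risk_ratings(scores: list[float]) -> dict[float, str]:
    """Band each site's index score relative to the whole dataset.
    A higher index score means lower disinformation risk."""
    labels = ["maximum", "high", "medium", "low", "minimum"]
    cuts = quantiles(scores, n=5)  # four cut-points -> five bands
    # A score's band is the number of cut-points it exceeds.
    return {s: labels[sum(s > c for c in cuts)] for s in scores}

ratings = risk_ratings([10.0, 30.0, 50.0, 70.0, 90.0])
print(ratings[90.0], ratings[10.0])  # minimum maximum
```

Because the bands are defined by the dataset’s own distribution, a site’s rating is inherently relative: adding or removing domains from the study could shift where the boundaries fall.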
A rating of minimum risk in this study doesn’t mean that disinformation will never appear on a given site; all newsrooms are vulnerable to human error at best and to more nefarious tactics at worst. And conversely, a maximum risk rating doesn’t mean that specific pieces of disinformation have been identified on the site. Rather, the index looks at a wide variety of practices and mitigation strategies to determine the risk level as a whole.
GDI implements several safeguards to ensure that site reviews are fair and that scores are not based on whether the research team agrees or disagrees with content on the site. The index does not assess partisanship or the political, religious or ideological orientation of the site; rather, its indicators focus on disinformation risk factors, and several procedural steps are implemented to maintain nuance and neutrality.
By and large, there is a clear need for news sites in the U.S. to enhance their operations in order to mitigate the risk of disinforming readers. The Operations pillar of the index assesses the policies and practices in place to mitigate disinformation risk and how well they are followed.
U.S. sites had an average Operations pillar score of 47 out of 100. Performance varied widely, with site scores ranging from 18.8 to 94.5. However, a full third of the market fell below 40, and only three sites scored above 75 out of 100 on this pillar.
Many sites failed to provide details regarding how they use bylines on their articles, how they ensure accurate and unbiased reporting, and what types of sources they rely on. Anonymous sources have always been a mainstay of journalism, and most reporters adhere to strict ethical and professional standards for corroborating stories while protecting individual sources. But in the internet era, and with the increasing nationalization of U.S. news coverage, newsrooms need processes in place to guard against the dissemination of mis- and disinformation.
The research looked at both preventative mechanisms – policies and practices, as well as structural factors like conflicts of interest that could impact reporting – and the actual content published on the sites. In many cases, weak Operations pillar scores remained just that, a risk factor, without translating into risky published content. On the whole, the market performed better on the Content pillar.
To translate GDI’s adversarial narrative approach to a quantitative risk rating, the index assesses a number of factors that can contribute to misleading readers. The average Content pillar score was higher than the average Operations score, but the market’s chief weakness was inconsistent performance on indicators of neutral, objective reporting. This demonstrates that a significant portion of the media market has veered away from a focus on presenting the facts. The data in this study corroborates today’s general impression that hyperbolic, emotional, and alarmist language is a feature of the U.S. media landscape, in varying degrees from outlet to outlet.
The index assesses whether sites use a variety of sources to substantiate their articles and how well they attribute the facts their stories rely on. Results were mixed. On the one hand, even the riskiest sites in the sample generally avoided entirely baseless content. However, even the best-performing sites sometimes built stories on a limited number of sources or failed to attribute some of their facts and assertions.
The indicator that looks for clickbait showed a similar trend. That is, most headlines didn’t exaggerate the impact of a story or mislead readers about an article’s contents. However, every site assessed showed some degree of risk here.
This overall finding is further borne out by assessments of the lede, tone, and visual presentation of the content and the neutrality with which the site’s journalists construct their story or argument. The use of a lede to introduce readers to the key facts of a story is less common now that news has moved online. Roughly a third of the sites in the sample failed to include a fact-based lede in roughly a third of their articles.
Every site displayed some degree of bias or sensationalism in their content. Importantly, bias doesn’t mean having an opinion about an issue. Biased writing tells only part of a story; it focuses on or plays up one particular angle or piece of information and prevents readers from getting an accurate take on the events at hand. Biased articles cherry-pick their facts, omit relevant information, make unsubstantiated claims, and/or use logical fallacies. Many of the sites that regularly posted this kind of misleading, biased content also used sensational language to elicit an emotional response from the reader.
On the whole, U.S. sites performed better on metrics of content that targeted at-risk groups and created us-versus-them narratives. This type of language – which demeans or belittles people or organizations, rather than simply presenting the news – distracts, divides and, as a result, disinforms. Most U.S. media outlets avoid this kind of outright adversarial language in their content most of the time.
However, a minority of sites – including outlets that are expressly political – frequently employed language or tone that directly undermines an individual, group or institution in a significant portion of the content reviewed. Additionally, some of these same outlets positioned one group of people as inferior, unworthy, or somehow worse than another, often on the basis of political or ideological perspective, race or ethnicity, gender, or similar characteristics.
Importantly, GDI’s analysis distinguishes “targeting” from criticism or satire. Domains that use satire or that present critiques of public figures are not penalized. Targeting was defined as ridiculing, derogatory or hateful remarks and/or the promotion of unsubstantiated doubts or distrust in a specific actor, and the research team was trained to differentiate between criticism and ad hominem attacks. Across the study as a whole, 65.6% of the articles contained this type of targeting language.
For example, the WHO and CDC, along with local governments, were targeted by articles opposing COVID booster shots or new restrictions to contain monkeypox. Police, the Texas Rangers, and the Border Patrol Tactical Unit were heavily criticized following the Uvalde mass shooting – before any investigation into the timing of their response had been carried out. The Supreme Court was negatively targeted over its decision to overturn Roe v. Wade. Other governmental institutions and financial aid programs were depicted as conniving with politicians (e.g. the FBI or FDA) or deemed “too expensive” (e.g. the IRS), with no substantiating argument.
The data reflected the well-established political polarization in the U.S. Thirty-eight percent of articles directly targeted Democrats. And another 38 percent of articles targeted Republicans.
Articles containing this type of language also targeted minorities and at-risk groups, such as individuals based on their race, ethnicity, or nationality (10%), based on gender or their belonging to the LGBTQ+ community (9%) or based on religion (almost 7% of the articles, cumulatively). The narratives surrounding these communities are particularly dangerous as they have the potential to lead to offline harm such as discrimination or violence.
GDI’s index identified 26 sites that can be considered low or minimum risk for disinformation. Here are the top ten.
NPR.org (Risk level: Minimum)
NPR’s online news presented a minimum level of disinformation risk, both based on its neutral, fact-based content and its transparent and complete operational policies and practices. Some small degree of bias and sensationalism was detected in the content sample, which has the potential to mislead readers. But on the whole, the site appears to have sufficient safeguards in place to prevent disinformation from making its way into the newsroom.
APNews.com (Risk level: Minimum)
The news homepage for The Associated Press had the best Content pillar score among assessed sites. The AP could stand to improve in some of the Operations metrics, including transparent and diverse funding. However, readers can rely on the AP for neutral, fair and well-developed reporting.
NYTimes.com (Risk level: Minimum)
The New York Times online was also rated minimum risk, in large part based on a high degree of transparency all around, from who authors the news to who owns the company and how it makes its money. Content from NYTimes.com wasn’t always free of bias, but it generally avoided targeting language and adversarial narratives.
ProPublica.org (Risk level: Minimum)
A nonprofit with an emphasis on investigative journalism, ProPublica was a minimum-risk site in the index based on strong scores across the board. Readers will find in-depth coverage without bias, sensationalism or negative targeting.
Insider.com (Risk level: Low)
Insider was among the highest-scoring sites in the low-risk category. Content on this site was largely free from bias, negative targeting or sensationalism, and the articles used journalistic best practices to familiarize readers with the topic at hand.
USAToday.com (Risk level: Low)
USA Today received a low-risk rating based on strong scores across the board. The site could improve in terms of relying on a wide range of sources and being sure to clearly attribute statistics, quotations and external media. However, the articles reviewed were almost entirely free from divisive or demeaning language. USA Today also ranked amongst the top 20 domains for headline accuracy, suggesting clickbait is relatively rare on the site.
WashingtonPost.com (Risk level: Low)
The Washington Post publishes some of the strongest editorial guidelines among assessed sites. This domain largely avoids sensational or negatively targeted reporting – but its content includes occasional bias, and its funding structure could do more to prevent conflicts of interest.
BuzzFeedNews.com (Risk level: Low)
BuzzFeed News – a separate domain from the popular entertainment site known for its quizzes – demonstrated a strong Content pillar score based on neutrality and journalistic best practices. Statistics, quotations and external media were properly cited, and its articles frequently employed objective, fact-based ledes. The site scored relatively well on indicators of neutral, unemotional language, but could stand to tone down its sensational visuals.
WSJ.com (Risk level: Low)
Readers of The Wall Street Journal can expect neutral reporting free from content that is either sensationalized or demeaning toward specific groups or individuals. Articles on this site featured a degree of bias similar to The New York Times and The Washington Post – which is to say, not absent, but limited.
HuffPost.com (Risk level: Low)
HuffPost largely featured fact-based, unbiased content free from sensational text or visuals. This domain also refrained from perpetuating divisive narratives via negative targeting of groups or individuals. The outlet’s scores for the Operations indicators were imperfect, but better than most.
Twenty-three sites fell into the high or maximum risk categories, indicating that readers and advertisers should approach with caution. The following ten sites pose the greatest risk of disinforming readers.
NYPost.com (Risk level: High)
The New York Post was rated as high-risk, largely because of its lack of transparency around operational policies and practices. The site published no public guidelines for the use of bylines on its content, the types and number of sources its content relies on, or pre-publication fact-checking or post-publication corrections processes. As a result, even if relevant policies exist, they can’t be factored into the site’s risk score. Additionally, content sampled from the Post frequently displayed bias, sensationalism and clickbait, which carries the risk of misleading the site’s readers. Importantly, GDI’s study did not review specific high-profile stories and attempt to determine whether they were disinformation. Rather, the risk score is based on a robust operational framework and a blind review of a sample of articles from across the site.
Reason.com (Risk level: High)
Reason Magazine’s high-risk rating can be attributed to scores of zero in three Operations pillar indicators: the site publishes no information regarding authorship attribution, pre-publication fact-checking or post-publication corrections processes, or policies to prevent disinformation in its comments section.
In terms of its content, Reason Magazine did largely refrain from perpetuating in-group out-group narratives or unfairly targeting certain actors via its reporting, but its articles were often biased in their construction and relied on sensationalized, emotional language.
RealClearPolitics.com (Risk level: High)
RealClearPolitics scored poorly in the Content pillar due to the prevalence of biased and sensational language, which risks misleading and manipulating readers. Its articles often lacked clear and diverse sources, and there was no information regarding byline and sources policies on the site. These factors can make it difficult for readers to double-check the basis for questionable arguments or claims. RealClearPolitics scored well on sensational visuals only because almost none of its articles included visual elements beyond the headline image.
DailyWire.com (Risk level: High)
In addition to bias and sensational language, articles on The Daily Wire featured a high degree of sensational visuals. Combined, such content runs the risk of manipulating readers’ emotional responses and disseminating biased interpretations of events, thus garnering a high-risk rating.
TheBlaze.com (Risk level: High)
TheBlaze scored as high risk, receiving fairly even Content pillar and Operations pillar scores. This domain’s content showed the third highest degree of bias and second highest prevalence of sensational language among sites in this study. Most articles also failed to use journalistic best practices to familiarize readers with the topic at hand, instead leading with bold claims or emotional appeals.
OANN.com (Risk level: High)
One America News Network (OANN) was also scored as high risk, but demonstrated a substantial difference between its Content pillar and Operations pillar scores. OANN’s low Operations pillar score was largely the result of publishing no information regarding its policies to ensure accuracy (fact-checking, etc.) or attribute authorship, or about its ownership structure, which is a risk factor for conflicts of interest and/or editorial interference. OANN did moderately well on some of the Content indicators, but was one of only a few sites to fail to include a complete byline on most of the articles sampled.
TheAmericanConservative.com (Risk level: High)
The American Conservative had one of the lowest scores in the study for bias, indicating that almost all of the content sampled was either somewhat or entirely biased. Importantly, this indicator does not measure whether the author of an article agrees with one or another side of an issue; it assesses the construction of the story or argument, looking for elements like unsubstantiated claims, logical fallacies, ad hominem attacks, and obvious omissions of pertinent information. In the case of The American Conservative, these features were widespread, putting readers at risk of being consistently misled.
TheFederalist.com (Risk level: Maximum)
The Federalist performed well in a handful of areas, principally a transparent ownership structure free from conflicts of interest. However, the site fell short in other aspects of the Operations pillar. It also had one of the lowest Content pillar scores in the study, scoring in the 20s for bias and in the 40s for sensational language. Further, the use of language that demeans, belittles or otherwise targets individuals, groups or institutions was frequent. Taken together, articles written in this way – especially when they appear across a news domain – establish misleading and harmful narratives that amount to disinformation.
Newsmax.com (Risk level: Maximum)
Newsmax received one of the lowest Operations pillar scores, putting it in the maximum risk category. The site lacks transparency around its operational practices across the board. The outlet performed much better on Content, but its scores for biased articles and sensationalized visuals fell in the 50s, indicating a significant frequency and degree of misleading arguments and emotional images, videos, and other visual elements.
Spectator.org (Risk level: Maximum)
In content published by The American Spectator, bias, sensationalism, and divisive and targeting language were prevalent, while fact-based ledes and well-measured headlines were rare. Instead, most of the assessed articles on this domain negatively targeted a group or individual in their title or opening sentences. Frequent hyperbole and generalizations further supported the establishment of adversarial narratives. The site also provides little transparency around its operations, in particular its policies on sources and attribution and its editorial guidelines.
GDI’s full report, now published, presents the data and findings behind these key themes, including the complete analysis of the negative targeting detected across U.S. media outlets. The report also includes the full methodology behind the research.
Note: a previous version of this article stated, in error, that five of the media outlets in this study did not display any negative targeting in the content sampled. This article was updated on 3 November 2022 to remove that statement and to clarify the report’s finding: while negative targeting was widespread on only a small number of sites, every site in the sample displayed some degree of targeting language.