As attention focuses on “disinformation” and the problems it brings, a variety of activities and organizations have sprung up around the world under the banner of “counter-disinformation,” and a new industry is taking shape, particularly in Western countries. Governments, corporations, news outlets, research institutions, and other actors have begun actively engaging in this work. Their activities include research, fact-checking (verifying the truth or falsity of questionable information), and the restriction of information judged to be misinformation or disinformation, through measures up to and including censorship.
At the same time, this “counter-disinformation” industry has many opaque aspects and challenges. This article first sketches the overall landscape of the industry and then explores its problems.

A person using Google Search (Photo: Matheus Bertelli / Pexels [Pexels License])
Overview of the industry
First, let’s look at research within the counter-disinformation industry. A growing number of researchers are working to uncover the realities of disinformation, and specialized research institutes and companies focused on disinformation have been launched. Tools are also under development that are said to automatically detect disinformation spreading on social media (SNS) and other platforms.
Next, fact-checking. Alongside censorship, fact-checking is one of the main components of the counter-disinformation industry. The activity primarily involves verifying the truth of information circulating in society and exposing disinformation (debunking), but it also includes introducing facts in advance to head off disinformation surges around predictable events such as elections (prebunking). In addition to existing organizations such as governments, universities, think tanks, media outlets, NGOs, and Big Tech companies, new groups dedicated solely to fact-checking have emerged. Such activities have grown rapidly in recent years: the number of organizations conducting fact-checking worldwide more than doubled from 186 to 391 between 2016 and 2021 (source). Network organizations have also been formed to facilitate exchange and mutual support among fact-checking organizations worldwide.
Fact-checking is generally conducted manually, but various technologies are being introduced. Some tools display automated fact-checks in real time for information posted on SNS. Others verify the authenticity of photos and videos using blockchain technology. There are even private “intelligence agencies” that independently collect openly available information such as satellite imagery (open-source intelligence, or OSINT) and use it in fact-checking.
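The mechanics of these authenticity tools are rarely documented publicly, but the underlying idea is easy to sketch. Below is a minimal, hypothetical Python illustration of hash-based media provenance, the principle such blockchain verification tools rely on; the function names and the in-memory ledger are illustrative assumptions, not any real product’s API. A cryptographic fingerprint of a photo or video file is recorded at the moment of capture, and later copies are re-hashed and compared, so even a single-pixel edit breaks the match. A production system would anchor the records on a blockchain rather than in a Python list.

```python
# Minimal sketch of hash-based media provenance (hypothetical, not a real
# product's API). A file's SHA-256 digest is registered at capture time;
# any later copy is re-hashed and compared, so any alteration is detectable.
import hashlib
import time

def fingerprint(path: str) -> str:
    """Return the SHA-256 digest of a file, read in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def register(ledger: list, path: str) -> dict:
    """Append a provenance record; a real system would write to a blockchain."""
    record = {"sha256": fingerprint(path), "registered_at": time.time()}
    ledger.append(record)
    return record

def verify(ledger: list, path: str) -> bool:
    """Check whether a copy of the file matches any registered original."""
    digest = fingerprint(path)
    return any(r["sha256"] == digest for r in ledger)
```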
Another major activity in counter-disinformation is censorship. Big Tech companies that operate SNS platforms, acting on their own initiative or at the request of governments and government-affiliated bodies, delete posts deemed disinformation, limit their visibility, or suspend the accounts that posted them. In some cases, rather than individual posts, all content from a given outlet or information site is removed wholesale from SNS, posting is banned, or visibility is restricted. As for visibility limits, the fact that a platform has applied a restriction is often itself hidden (a practice known as shadowbanning). Tools have also been developed that rate the credibility of news outlets and websites and display that rating in web browsers, and there are systems that limit ad revenue for sites rated as low-credibility. Cybersecurity companies are also involved in developing such tools.

Media and university representatives discussing democracy in the digital age, hosted by the Knight Foundation (Photo: Knight Foundation / Flickr [CC BY-SA 2.0])
Beyond these activities, there are also efforts to help information recipients judge truth for themselves. For example, some organizations provide online teaching materials, awareness videos, and games to improve media and digital literacy.
However, these activities do not necessarily target disinformation alone. A 2017 Council of Europe report introduced and drew attention to the concept of “malinformation,” alongside misinformation and disinformation (source). “Malinformation” refers to information that is factually accurate but disseminated in a way intended to cause harm.
Funding sources for the industry
So what funds these activities? Research on disinformation, fact-checking, and censorship are not easy to sustain as profit-making businesses. As a result, grants and commissioned projects are the main sources of funding in the counter-disinformation sector. Below we look at key organizations that provide funding.
First are government agencies. Funding is provided through grants to fact-checking groups, research institutions, and NGOs, and through commissions or subsidies to cybersecurity firms. The U.S. government in particular funds organizations not only at home but also abroad: through the Global Engagement Center (GEC), housed in the State Department, it was funding 39 organizations engaged in counter-disinformation activities at home and abroad as of 2020. The National Endowment for Democracy (NED), which has historical ties to the CIA, also funds many groups in the United States and elsewhere.
The European Union (EU) also funds counter-disinformation activities conducted by the public and private sectors. In addition to its own activities, the EU supports member-state governments, researchers, fact-checking organizations, media outlets, and others, and has created a framework enabling coordination such as information sharing. The UK’s Foreign, Commonwealth & Development Office likewise funds organizations engaged in counter-disinformation. More recently, Japan’s Ministry of Foreign Affairs and Ministry of Defense have begun exploring cooperation with organizations involved in counter-disinformation.

Google headquarters, United States (Photo: Jürgen Plasser / Flickr [CC BY-NC-SA 2.0])
Big Tech companies such as Google, Yahoo, Facebook, Twitter, and YouTube are also actively engaged in activities that amount to fact-checking and censorship. In-house efforts are generally funded by the companies themselves, but some work is outsourced: Facebook, for example, commissions 90 organizations working in 60 languages to fact-check information posted on its platform. Beyond commissioning work, these companies also provide grants to other organizations; Google and YouTube, for example, fund fact-checking groups through the International Fact-Checking Network (IFCN) (details).
Some foundations and funds also provide grants for counter-disinformation. Examples include the Bill & Melinda Gates Foundation and the Open Society Foundations.
Many media outlets conduct fact-checking as part of their reporting, but in some cases they also receive support for fact-checking from other organizations and networks. Although fewer in number, there are fact-checking organizations that function as businesses. In the United States, for example, there are groups that earn advertising revenue or sell content to media outlets.
Issues surrounding “fact-checking”
Although fact-checking activities have surged worldwide, problems abound. First is the question of effectiveness. Research has concluded that exposure to fact-checks changes people’s beliefs to some extent (study). Given that disinformation is particularly viral on SNS, it is crucial to deliver accurate information in a timely manner to those who hold mistaken beliefs about a given event. However, in reality, only a limited number of people regularly read or watch fact-checks.
Even if fact-checking is an effective approach, questions remain about the credibility and fairness of its content. In other words, there are operational issues on the fact-checking side: whether topics are selected fairly, whether evidence is adequately presented, and whether sufficient proof is provided that something is misinformation. In fact, it is not uncommon for supposed fact-checks, which should rigorously verify truth, to instead disseminate incorrect information or mislead.

Anthony Fauci, former director of the National Institute of Allergy and Infectious Diseases, at a White House press briefing (Photo: Trump White House Archived / Flickr [Public Domain])
For example, a fact-check by the Japan Fact-Check Center (JFC) concluded that the claim that Bill Gates’ daughter has not received COVID-19 or pediatric vaccines was “false,” yet the verification relied solely on statements by the daughter and her parent, a weak basis for that verdict. Another JFC fact-check on COVID-19 labeled as “inaccurate” the claim that a Pfizer executive had admitted it was untested whether the COVID vaccine prevents infection. However, as the fact-check itself notes, the infection-prevention effect was indeed untested in the early stages, and a Pfizer executive acknowledged as much; the fact-check’s title and framing are therefore arguably misleading. What it actually deems “inaccurate” is the suggestion that this was a newly revealed fact, which calls into question whether the claim should have been a fact-checking target at all.
In another example, a Canadian government fact-check (Note 1) denied Russia’s claim that NATO’s increased military presence in Eastern Europe in 2022 constituted a threat to Russia, asserting that “NATO is a defensive alliance.” In reality, however, NATO has waged war against Serbia, Afghanistan, and Libya, so the “defensive” characterization lacks objectivity and is insufficient grounds for a fact-check verdict. Moreover, whether NATO’s actions constitute a “threat” is difficult to judge objectively in the first place.
Beyond accuracy, the selection of topics and targets also poses problems. Many Western fact-checking organizations focus heavily on Russia-related topics or on COVID-19. For example, EUvsDisinfo, an EU project launched in 2015, was created specifically to counter Russian information warfare. Of the fact-checks conducted up to 2022 by DisinfoWatch, a project of a Canada-based think tank, two-thirds targeted Russia. Bellingcat, which conducts OSINT investigations, likewise skews toward Syria and Russia, adversaries of Western countries.

An article on Russian disinformation featured on the EU’s fact-checking site (Photo: Virgil Hawkins)
Topic bias is also apparent in what does not get fact-checked. The U.S. government has disseminated a great deal of disinformation over the years, and media in the U.S. and Japan have often relayed its claims uncritically, yet Western fact-checking organizations rarely scrutinize such information. For example, when rumors spread that biological weapons research was being conducted at laboratories in Ukraine that the U.S. Department of Defense supported and cooperated with, many organizations, including the U.S. outlet PolitiFact and Yahoo News, fact-checked the rumors and unanimously concluded they were false. In March 2022, the U.S. government announced that Russia was likely preparing to use chemical weapons, but later admitted that the announcement lacked evidence and was itself part of an information war against Russia. No fact-checks were observed that exposed or corrected this claim.
Broadly speaking, Western fact-checking efforts tend to defend or reinforce the foreign-policy positions of their own countries and allies while exposing adversaries’ information warfare and disinformation. Many fact-checks likewise defend pharmaceutical companies and other large corporations in which wealth is concentrated. Furthermore, information disseminated by major media outlets is often not fact-checked in the first place.
The limits of fact-checking
We have examined in detail the problems with fact-checking activities, but there is a more fundamental issue. Fact-checking aims to clarify whether information is true or false, delivering a verdict to the public. But consider how many phenomena in the world can be so easily judged. It may be possible to verify, based on evidence, whether a particular person actually made a specific statement, or whether a particular incident occurred. However, individual facts are intricately intertwined with other facts, and it is important to understand information and events within a broader context. Moreover, facts can change over time; cases are not rare where information once deemed factual is later shown not to be, and vice versa. In other words, simply extracting and verifying one side’s statement or an isolated incident often does not reveal the truth of the matter. Indeed, the verification itself may even function to obscure the truth.
For example, the Nord Stream undersea natural gas pipeline connecting Russia and Germany was sabotaged in September 2022, but the perpetrator has never been established. Judging by means, motive, and opportunity, some argue that the U.S. government is the prime suspect: President Joe Biden made remarks at a press conference that could be read as foreshadowing the attack, and days after the sabotage the Secretary of State called the pipeline’s destruction a “tremendous strategic opportunity.” Where the truth remains this unclear, fact-checking the incident has inherent limits. Nonetheless, fact-checks by the U.S. outlet Snopes and the German outlet DW suggested Russian culpability without offering evidence (example) and denied claims that the U.S. was responsible. This amounts to fact-checking that refuses even to entertain the possibility of U.S. involvement, instead denying and shielding against it.

Nord Stream pipeline sections being welded before installation on the seabed (2011) (Photo: Bair175 / Wikimedia [CC BY-SA 3.0])
As noted, facts that emerge later also underscore the limits of fact-checking: early fact-checks can simply turn out to be wrong. At the outset of the COVID-19 outbreak, the possibility that the virus had leaked from a laboratory in Wuhan, China was raised, but it was initially denied emphatically not only by the Chinese government but also by bodies within the U.S. Department of Health and Human Services (DHHS). A PolitiFact fact-check went so far as to declare the lab-leak theory a “conspiracy theory” that was “inaccurate and ridiculous.” Over time, however, the theory has come to be seen as increasingly plausible, and it emerged that DHHS officials who denied the possibility had conflicts of interest with the Wuhan lab.
Fact-checks deliver judgments on the correctness of target information, but these judgments can themselves shut down debate in the pursuit of truth. In a digitized society, what is needed is an information environment where all possibilities can be freely discussed and examined.
“Censorship-industrial complex”
In counter-disinformation efforts, censorship of information deemed “harmful” is one of the tools used.
GNV has previously covered Big Tech’s information control and suppression of press freedom in detail. The “Twitter Files,” reported in installments from late 2022, have made the mechanisms and scale of information control involving the U.S. government, its affiliated groups, and SNS companies even clearer.
The main point is that government agencies (Note 2), in cooperation with multiple private organizations, created blacklists of accounts judged suspicious, for example as disseminators of disinformation, and asked Twitter to delete them or restrict their visibility. Reporting on the Twitter Files revealed, however, that many accounts on those blacklists not only did not disseminate disinformation but had no connection to the countries they were associated with. Examples include blacklists of 40,000 accounts linked to India, 5,500 linked to China, and 378 linked to Iran; in each case, numerous accounts unrelated to the country in question were identified. The same pattern appeared in the list of 600 accounts alleged to be linked to the Russian government (the Hamilton 68 database). According to Twitter personnel who analyzed these blacklists, merely retweeting information from a Russian outlet could get an account classified as “Russia-linked.”
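To see why such lists sweep in unrelated accounts, it helps to look at the rule in miniature. The sketch below is a toy, hypothetical rendering of that one-retweet heuristic (the actual methodology behind these blacklists was never published): any account that has ever retweeted a listed outlet is flagged, so a journalist who quoted a single story is indistinguishable from a coordinated amplifier.

```python
# Toy model of the over-broad "Russia-linked" rule described above.
# Hypothetical code: the real methodology behind these blacklists was
# not made public. One retweet of a seed outlet is enough to be flagged.
SEED_OUTLETS = {"RT_com", "SputnikInt"}  # illustrative outlet handles

def is_flagged(retweeted_accounts: set) -> bool:
    """Flag an account if it has retweeted any seed outlet at least once."""
    return bool(retweeted_accounts & SEED_OUTLETS)

print(is_flagged({"RT_com", "BBCWorld"}))   # True: a single retweet suffices
print(is_flagged({"BBCWorld", "nytimes"}))  # False
```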

Journalist Matt Taibbi sharing the Twitter Files (No. 17) (Photo: Virgil Hawkins)
Not only misinformation and disinformation but also opinions and information that contradict government narratives have been treated as “malinformation” and subjected to censorship. For example, in many observed cases the visibility of tweets by healthcare professionals who disputed the CDC’s position on COVID-19 while presenting objective evidence was limited. A Stanford University project working with the U.S. government asked SNS companies to restrict information that might make recipients hesitant to vaccinate. Regarding the Russia-Ukraine war, the U.S. government asked YouTube to censor posts simply for containing “anti-Ukraine narratives,” regardless of their truth. Meanwhile, the U.S. Department of Defense and the FBI asked Twitter to help disseminate information they themselves had fabricated under the banner of “counter-disinformation,” and Twitter complied.
Think tanks and university research labs involved in creating these blacklists alongside government agencies included the Alliance for Securing Democracy (ASD), which created the Hamilton 68 list, and the Digital Forensic Research Lab (DFRLab), an Atlantic Council program with close ties to NATO. These organizations received funding from U.S. government bodies such as the GEC and had personnel ties to them. Journalists reporting on the Twitter Files have termed these relationships and activities the “censorship-industrial complex.”
What lies behind these problems
As we have seen, counter-disinformation efforts tend to establish or promote particular narratives. Why has counter-disinformation moved in this direction?
First is the issue of securing funding and catering to sponsors. As noted, activities such as fact-checking and censorship are difficult to sustain as businesses, making operational funding a persistent challenge. Grants and commissions from governments and corporations can partially solve this problem. However, the need to keep receiving grants and fees can create an incentive to maintain an information environment that does not stray from sponsors’ preferred narratives.

Bill & Melinda Gates Foundation, United States (Photo: Jack at Wikipedia / Flickr [CC BY-SA 2.0])
For example, Big Tech companies sometimes receive large government contracts that constitute major revenue sources. At the same time, they constantly fear regulation or even being broken up by the government, creating a need to “keep the government happy,” so to speak. For major media outlets as well, governments are important sources of information, making them inclined to side with their own governments.
Deference to sponsors can also extend to large private companies. In the case of COVID-19, for instance, media outlets engaged in fact-checking have strong incentives to side with the pharmaceutical firms that develop and manufacture vaccines, because these firms are major sponsors of the media. The chairman of Reuters, which is active in fact-checking, also sits on Pfizer’s board. The Bill & Melinda Gates Foundation, which has played a major role in countering COVID-19 disinformation, has also invested in several vaccine developers, including Pfizer.
The current information environment itself also poses a problem for counter-disinformation. On both the Russia-Ukraine war and COVID-19, the information environment in Western countries favors certain actors. With respect to the war, a good-versus-evil narrative, in which the U.S. government disseminates accurate information while Russia disseminates disinformation, has taken deep root in the West. Influencers have posted heavily on SNS in support of official narratives, and opposing views have drawn pile-ons.
Dangers in the industry
“Those in power have reasons to want to counter what they consider ‘disinformation’: because they want to make their truth our truth.” So argues writer Stavroula Pabst. Judging from the way the label “disinformation” is currently applied to particular opponents, narratives, and events, we can see that the very concept is often used as a political tool. The fact that the disinformation label is applied even to facts is dangerous; it could make it easier for governments to remove information that is inconvenient to them.

A student-media-produced podcast about fake news, United States (Photo: WCN 24/7 / Flickr [CC BY-NC-ND 2.0])
Given the many other possible measures, the drift toward counter-disinformation centered on fact-checking, information control, and censorship deserves scrutiny. Introducing curricula that build media literacy, digital literacy, and above all critical thinking in educational settings would address more fundamental problems. In other words, the most important countermeasure is not to insert “arbiters of truth” between truth and falsehood, but to foster an environment in which, with freedom of expression ensured, recipients can judge information for themselves. Perhaps such measures have not been fully implemented because, once people acquire critical thinking skills, they may direct those skills at the government as well.
※1 Note: The Canadian government does not refer to its own information releases as fact-checking.
※2 Including the Federal Bureau of Investigation (FBI), National Security Agency (NSA), CIA, the State Department, and the Department of Homeland Security (DHS).
Writer: Virgil Hawkins