In April 2022, Elon Musk, CEO of the electric vehicle maker Tesla and often described as the world’s richest man, announced that he would acquire Twitter “to protect free speech.” But what exactly does “to protect free speech” mean here? It means running Twitter with as few restrictions as possible: removing the worry that users’ posts will be deleted out of our sight, that their content will be hidden from other users, or that the order in which content is displayed will be intentionally manipulated, so that free speech is guaranteed and promoted. Moreover, Twitter is not alone: at the Big Tech companies running other social networking services (※1) and internet services, similar information control takes place, with the order in which posts and sites are shown, or whether they are shown at all, being decided without our knowledge.
Most recently, following Russia’s invasion of Ukraine, attention has focused on information control within Russia and the concealment of the realities of the invasion. People in Russia are said to be unable to learn the truth, but how much truth are those of us outside Russia actually able to know? Are Big Tech companies, which are based in Western countries that proclaim freedom of speech and which manage and operate a large share of the world’s information, limiting online freedom of expression and engaging in information control out of our view? In fact, many incidents have occurred that suggest just that. In this article, we take a closer look at this issue.

Image of Donald Trump’s suspended Twitter account (Photo: Marco Verch Professional Photographer/ Flickr [CC BY 2.0])
What is corporate “information control” online?
The act of restricting others from posting or viewing content online is broadly called online censorship. Censorship is defined as “the suppression of free exchange of ideas and information considered unacceptable or threatening by those in power,” and is most commonly carried out by governments. However, governments are not the only ones engaging in such behavior. Companies that manage and operate information online do the same.
In practice, operators of social media such as Twitter, Facebook, YouTube, and TikTok, as well as search engines like Google, restrict the posting and viewing of information for various reasons. The targets vary: content deemed harmful, such as hate speech or incitement to violence; content considered false or misleading; and content disadvantageous to the company itself or to its home government. The problem here is that the standards for information control are left to the discretion of the companies. Protecting users from harmful content, misinformation, and disinformation can be a noble cause, but it is the companies that decide what is dangerous and what is true, and their judgments do not necessarily match those of other companies or individuals. Put more bluntly, what is convenient for the company may be treated as “true,” while what is inconvenient may be deemed dangerous or untrue. Moreover, if certain content does not align with, or runs counter to, the views or policies of powerful entities such as the U.S. government or the governments closely connected with each company, that content may be deemed dangerous or false and made subject to information control.
In this article, we will delve into information control by companies that manage information online. We will first introduce, with concrete examples, three representative methods of online censorship: (1) labeling, which attaches warnings that question the credibility of information being disseminated; (2) downranking, which reduces the number of times posts are displayed so they are less likely to be seen; and (3) deletion/suspension, which removes accounts or posts entirely. We will then discuss noteworthy cases that do not fit neatly into these three categories.

Google search results related to the invasion of Ukraine, photographed June 9, 2022 (Photo: Takumi Kuriyama)
Labeling
Labeling refers to attaching a warning label to posts on specific content or topics when administrators judge the information to be of low credibility or potentially harmful, thereby cautioning other users about it.
For example, starting in May 2020 during the COVID-19 pandemic, Twitter began labeling posts it judged to contain “false content,” such as claims that “vaccines or masks are ineffective.” It evaluates whether a tweet contains misleading content or unverified claims and, when it does, adds links to “trusted external sources” on the topic. In addition, when a tweet contradicts COVID-19 guidance from official bodies such as the World Health Organization (WHO) or from public health experts, a warning is attached to the tweet. Users who accumulated warnings were also subject to penalties (※2) that escalated with the number of warnings.
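To make the escalation concrete, here is a minimal sketch, not Twitter’s actual implementation, that maps an accumulated warning count to the penalties described in note ※2; the function name and return strings are purely illustrative.

```python
def penalty_for_warnings(warning_count: int) -> str:
    """Illustrative only: maps accumulated warnings to the penalties
    described in note ※2. Not Twitter's actual code or policy engine."""
    if warning_count >= 5:
        return "permanent suspension"
    if warning_count == 4:
        return "account locked for 7 days"
    if warning_count >= 2:  # two or three warnings
        return "account locked for 12 hours"
    return "label only, no lock"  # assumption: a single warning carries no lock
```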
There are also tools that attach labels not to individual social media posts but to news-publishing sites themselves, encouraging restrictions on their information. One such example is the credibility rating tool provided by NewsGuard. Based on its own criteria, it assesses whether news sites are credible, and if a site falls below certain standards, it displays a red signal within the browser to indicate low credibility. In practice, MintPress, a news site that frequently criticizes U.S. foreign policy, was rated as having low credibility and given a red signal by NewsGuard. MintPress issued a detailed rebuttal, arguing that the evaluation was arbitrary and unsupported.
NewsGuard says that in recent years it has aimed to have its system installed on devices from major companies such as Microsoft and Google, and ultimately to have it included by default, and running automatically, on all internet devices sold in the United States. At this stage, NewsGuard has already partnered with Microsoft, and the NewsGuard app has been added as a built-in feature of the Microsoft Edge browser. Microsoft is also actively working to introduce NewsGuard’s system to public libraries and schools across the country; it may not be long before the system is in place at all such institutions.

NewsGuard’s website, which evaluates the credibility of news sites by its own judgment — photographed June 9, 2022 (Photo: Takumi Kuriyama)
Downranking
Downranking refers to using an algorithm (※3) to intentionally reduce the number of times certain content, such as posts or websites, appears in other users’ social media feeds or search results, thereby preventing the information from spreading. On search engines like Google, this means deciding which websites appear at the top for a given search term and intentionally relegating some content to lower ranks; on social media, it means reducing the number of times certain posts are shown. Either way, it is a measure that cuts views so that information does not spread widely.
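As a purely illustrative sketch of the idea, not Google’s or any platform’s actual ranking code, downranking can be pictured as multiplying an item’s relevance score by a demotion factor before results are sorted; the source names, scores, and factor below are hypothetical.

```python
# Illustrative sketch of downranking: demote selected sources before ranking.
# All sources, scores, and the demotion factor are hypothetical examples.
def rank_results(results, demoted_sources, demotion_factor=0.2):
    """results: list of (source, relevance_score) pairs.
    Returns sources ordered by score, with demoted sources pushed down."""
    def effective_score(item):
        source, score = item
        if source in demoted_sources:
            score *= demotion_factor  # demoted items surface far less often
        return score
    return [source for source, _ in sorted(results, key=effective_score, reverse=True)]

results = [("site-a.example", 0.91), ("site-b.example", 0.88), ("site-c.example", 0.40)]
print(rank_results(results, demoted_sources={"site-a.example"}))
# ['site-b.example', 'site-c.example', 'site-a.example']
```

The point of the sketch is that demoted content is not deleted; it is simply pushed so far down that far fewer people ever see it.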
For instance, Facebook has used downranking to control information and restrict the display of news articles from media outlets. Originally, the content shown on Facebook’s News Feed, which functions like a homepage, was selected and adjusted by a mechanism that surfaces content users might find interesting based on their browsing history. In 2018, however, the company changed policy and used its algorithm to reduce the display of news articles. Furthermore, in 2021 it decided to cut back the display of politically themed posts and began testing this in four countries.
There was also a 2017 incident in which traffic to the websites of multiple media outlets dropped unnaturally. This is attributed to an update carried out by Google in April of that year under the name Project Owl. Google’s explanation was that it was a new search tool to stop the spread of fake news on the internet. In reality, however, traffic to left-wing and socialist alternative media (※4), which are unlikely to be purveyors of fake news, declined significantly, to varying degrees. Specifically, between April and July 2017, traffic to 13 websites fell by about 20–67%, including a roughly 67% drop for the World Socialist Web Site (WSWS), about 36% for Democracy Now, and about 30% for WikiLeaks.

Democracy Now’s website, which saw a precipitous drop in views after being downranked by Google — photographed June 9, 2022 (Photo: Takumi Kuriyama)
Deletion/suspension
Deletion/suspension refers to temporarily or permanently removing content or forcibly suspending a user’s account. A clear example is Facebook prohibiting and removing posts suggesting COVID-19 might have originated from a lab in Wuhan, China.
Decisions to delete or suspend can also be highly political. For example, just before the 2020 U.S. presidential election, Twitter suspended the New York Post’s account for about two weeks over a story exposing information about Hunter Biden, son of then-leading candidate Joe Biden. The article reported that an email discovered on a discarded computer indicated that Hunter Biden, a board member of the Ukrainian energy firm Burisma, had introduced other executives to Joe Biden, who was then vice president. In other words, the story raised suspicions that political and business interests were being intertwined through family connections. Twitter faced accusations of suppressing information unfavorable to a candidate just before the election. Twitter not only suspended the New York Post’s account but also barred all users from tweeting the story or sharing it via direct message. Users who attempted to tweet it were shown warnings questioning the safety of the link, and ultimately clicking through from Twitter to read the article was blocked.
There are also cases in which all content from a media organization is deleted wholesale on social media. For example, following Russia’s 2022 invasion of Ukraine, YouTube removed all channels belonging to RT, Russia’s state broadcaster. YouTube stated that the reason for removing the videos was that they were spreading disinformation (※5) about the invasion of Ukraine. However, an enormous number of videos unrelated to the invasion were also deleted without a specific explanation. A journalist whose six years of videos were removed from the channel said the program had never discussed Ukraine—or even Russia at all.

Front page of the New York Post after its Twitter account, once temporarily suspended, was restored — photographed June 9, 2022 (Photo: Takumi Kuriyama)
Other barriers to the flow of information
We have outlined the main types and methods of online information control, but there are other significant methods and elements worth noting that do not fall neatly into those categories.
First, some media outlets have had their means of financing restricted. Many independent media outlets and journalists use PayPal, a cashless payment service, to receive subscriptions and donations from readers. In May 2022, however, PayPal temporarily suspended the accounts of multiple media outlets and journalists for “violations of policy,” and even froze the balances in those accounts. What the affected outlets had in common was that, in various ways, they dissented from the U.S. government’s official position and stance on Russia’s invasion of Ukraine. PayPal has acted this way before: in December 2010, when WikiLeaks was in the spotlight for exposing classified information from numerous institutions including the U.S. government, the U.S. State Department asked PayPal to cut off donations to WikiLeaks, and PayPal complied.
One reason online services such as social media appear to align their information control with U.S. government views may lie in who staffs their operations. For example, TikTok, a social media platform now used by 1.2 billion people worldwide and wielding immense influence, has hired at least 10 employees from the Albright Stonebridge Group (ASG) (※6), a global business firm that has recently become a major supplier of personnel to NATO and to key national security and foreign affairs positions in the Biden administration. Among them are individuals who previously took part in NATO “psychological operations,” and they hold important positions related to TikTok’s operations and content management.
TikTok is not alone in this. NewsGuard, discussed earlier, also has many U.S. government-connected figures inside it: former secretaries and directors of the Department of Homeland Security, the CIA, and the NSA, as well as people from the Atlantic Council, a think tank with deep ties to the U.S. government and NATO, serve as advisors. NewsGuard’s publicly named investors likewise include companies with strong ties to the U.S. government. Facebook, too, has many U.S. government-linked individuals within its operations, much like TikTok and NewsGuard.
The information control measures by social media and search engines described thus far have surged in recent years, and moves at the government level are now emerging in the West as well. For example, the U.S. government attempted to create a Disinformation Governance Board. According to the U.S. government, the purpose of this body was to protect the United States from the threat of information manipulation. However, numerous criticisms were raised, including the fact that the appointed inaugural chair herself had a history of spreading disinformation, and the body, widely seen as an outright instrument of information control, was forced to pause its work.
If those in power decide what is true and what is false or misleading, it is easy to imagine that information inconvenient to those in power will be labeled false or misleading. Given the history of the U.S. government fabricating numerous falsehoods in matters of war and foreign affairs, entrusting judgments about truth to the U.S. government is extremely dangerous.
Whose “truth”?
We have discussed various problems of information control, but why are control and censorship problems in the first place? Whether correctness is decided by those in power or automatically by Big Tech’s algorithms, information control is being exercised out of our sight. Precisely because we cannot see it, we may not even realize we are being censored. On top of that, there are cases that can be interpreted as information control conducted for the benefit of state power or Big Tech companies themselves.
It is not uncommon for social media companies to make exceptions in line with their home countries’ policy stances. For example, although Facebook generally bans posts that incite violence, in March 2022 it made an exception allowing posts that call for violence against Russian soldiers invading Ukraine. Likewise, the Azov Battalion, a unit integrated into the Ukrainian military and espousing Nazi ideology, had been designated a hate group since 2019, and expressions of support for it were banned on Facebook; in February 2022, however, Facebook decided to allow exceptions when the context was resistance to Russia’s invasion of Ukraine.
There are also cases where content judged to be disinformation is deleted or blocked, only for it to later be revealed as true. For example, the incident regarding Hunter Biden’s emails mentioned above was suppressed on Twitter right after publication, but the reporting is now considered true. Similarly, the idea that COVID-19 originated from a lab in Wuhan was initially controlled as disinformation, but now there is a growing view that it is not disinformation but a hypothesis worth investigating as one possibility.
Thus, truths can be arbitrarily blocked by those in power or by companies, and even when the suppressed information is unverified, suppressing it can delay the emergence of the actual truth.
So what should we do? Here we can recall the words left nearly a century ago by a justice of the U.S. Supreme Court: the remedy for falsehood is “more speech, not enforced silence.” In other words, rather than suppressing false or mistaken information, we should provide more correct information. It is also important that recipients of information be able to judge its quality for themselves. Strengthening media literacy and critical thinking in education, so that readers and viewers can assess credibility on their own, may be one path forward.
※1 SNS stands for Social Networking Service.
※2 When two or three warnings are issued, the user’s account is suspended for 12 hours; after a fourth warning it is suspended for seven days; and after five warnings the account is permanently suspended.
※3 Here, “algorithm” refers to the automated system that determines the order in which posts are displayed, whether a post is displayed at all, and whether content is restricted or user accounts are deleted.
※4 “Alternative media” refers to media that serve as alternatives to mainstream outlets such as major newspapers and TV networks.
※5 Disinformation is false information intentionally spread to undermine the credibility of a specific state, organization, or individual. A similar term is “misinformation,” which refers to mistaken information spread without intent due to misunderstanding, confusion, or lack of verification.
※6 The Albright Stonebridge Group was founded by former U.S. Secretary of State Madeleine Albright.
Writer: Yudai Sekiguchi

Even before the age of social media, highly influential media such as newspapers and television could arbitrarily manipulate what they reported through collusion with, or deference to, powerful politicians, big companies, and governments. Even if we regard the present case as simply those media having happened to move online, the real problem, I felt, is that anyone with enough money can buy such a company outright. Social media is a full-fledged information medium in its own right, but precisely because its influence is so large, I was reminded how difficult it is to guarantee fairness and transparency.
Until I read this article, I had not noticed that I had unconsciously accepted the many forms of information control that come with life moving online.
For companies, having posts deleted and the like can be a matter of survival for their business, so I felt that we as individuals also need to rethink our awareness of this kind of control.