The Government’s Disturbing Rationale for Banning TikTok

Vague national-security concerns don’t justify shutting down the popular Chinese-owned app.

Last week, a federal court upheld the extraordinary use of government power against TikTok, the social-media platform that an estimated 170 million Americans use to dance, sing, talk about politics, and engage in a lot of other First Amendment–protected expression. A three-judge panel of the U.S. Court of Appeals for the D.C. Circuit unanimously dismissed TikTok’s challenge to a law requiring that the app—currently a subsidiary of the Chinese tech company ByteDance—be sold to new owners by January 19 or be shut down in the United States.

This is a stunning holding in a country proud of its free-speech tradition. Shutting down a whole speech forum is generally associated with repressive countries such as Russia and China. And the decision is all the more remarkable because the court acknowledged that the law was motivated by concerns about what Americans might be convinced to believe by using the app. Although the government had also argued that the law, passed in April, was justified by data-security concerns, the court strongly suggested that the fear that the Chinese government could leverage its power over ByteDance to covertly manipulate content on TikTok in Beijing’s interests would, on its own, be enough to justify the law. In other words, in the land of the First Amendment, the judges showed a surprising amount of deference to the idea that an entire platform can legally be shut down to keep people from holding views the government doesn’t like.

The Department of Justice had argued to the court that potential Chinese influence over which posts are suppressed or amplified made TikTok a national-security threat. The appeals-court opinion by Judge Douglas Ginsburg explicitly references the government’s concern that China could manipulate content on TikTok related to Taiwan, for example. But why should we think this threat so extraordinary that it justifies such an extraordinary measure? Are we supposed to believe that such manipulation would be so effective as to force America to sit on its hands if China invades Taiwan? Ginsburg does not quite say.

[From the November 2021 issue: The largest autocracy on Earth]

In a concurring opinion, Chief Judge Sri Srinivasan described the potential threat more concretely: He mused about officials of the People’s Republic of China “covertly compelling ByteDance to flood the feeds of American users with pro-China propaganda,” “sowing discord in the United States by promoting videos—perhaps even primarily truthful ones—about a hot-button issue having nothing to do with China,” or even using ostensibly anti-Chinese TikTok content to “conjure a justification for actions China would like to take against the United States.” Far from showing a grave national-security risk, these examples illustrate just how amorphous the concern is. If China attacks the United States, it will not be because someone said mean things about it on a social-media app supposedly controlled by Beijing.

These statements, which presume that Beijing could easily reshape users’ political beliefs via an app known largely for its dance videos, reflect a highly unflattering view of the American public. That view is also completely inconsistent with the First Amendment, which presumes that people can think for themselves. If the government thinks people hold the wrong beliefs—about Taiwan, China, or anything else—then the Constitution requires it to try to change their minds, not to censor speech it doesn’t like, unless it can meet the heavy burden of proving that it has no other way to prevent an inevitable, direct, and immediate national-security harm.

Instead of demanding such proof, the court accepted the TikTok law as part of a “well-established practice of placing restrictions on foreign ownership or control” of mass-communications technologies. But most of the precedents cited by the court involve broadcast television and radio, which rely on common airwaves that the government has long regulated closely—to the point of requiring stations to obtain licenses even to operate. Therefore, the Supreme Court has held, broadcasters can even be subject to limited content-based standards that, for example, prohibit obscenity and require opposing political candidates to be given equal time. No such regime governs cable channels, news websites, or social-media apps. The new law doesn’t establish a careful regulatory framework for the mobile-app industry; it singles out TikTok by name and demands the app’s sale or closure. It is, to be clear, a de facto ban. TikTok has credibly argued that a sale would not be commercially or technically feasible—or even legal in ByteDance’s home country, given that China would use export controls to prevent the transfer of the app’s recommendation engine.

[Read: Facebook is a Doomsday Machine]

The appeals court’s unquestioning deference to the government is all the more galling given evidence that Congress targeted TikTok in part because officials do not like what people say on it. Some lawmakers reportedly supported this year’s law because they thought content on the platform was too pro-Palestinian. Donald Trump’s first-term attempt to ban TikTok, which the court relied on as evidence of the “extensive, bipartisan” concerns about the app, may have been motivated in part by teens’ use of the app to prank him. (Trump subsequently dropped his support for banning TikTok—but may be changing his mind again after the court’s decision.)

To be clear, covert manipulation of the online public sphere is a real problem. It’s also the status quo: Social-media platforms wield enormous power over what people are allowed to say and read online, and they exercise that power in opaque ways. Platforms publish content-moderation rulebooks, but there’s no way to know whether those rules are enforced consistently. And the use of algorithmic feeds means that the choices that companies routinely make about which posts to promote and demote are almost completely unobservable from the outside. Some changes—such as Elon Musk’s reworking of X’s algorithm to boost his own posts—are ham-fisted enough that people can infer how a platform is distorting debate, but subtler tweaks are harder to see.

The threat of covert manipulation does not come only from the platforms themselves. Governments around the world eye platforms’ power jealously and often try to leverage it to their own ends. Asking—with varying degrees of politeness—for social-media platforms to remove content is a common habit of governments, including those of India, Turkey, and European countries, as well as American administrations on both sides of the aisle. In many cases, platforms are eager to accommodate. Some set up “trusted flagger” programs for officials to alert them to content they think should be taken down, and have employed executives, including former political operatives, who guide content moderation in ways favorable to ruling parties and politicians and represent those governments’ interests in companies’ internal policy discussions. Days after Mark Zuckerberg dined with President-elect Donald Trump at Mar-a-Lago late last month, one of the Facebook founder’s top executives told reporters that the company previously “overdid it a bit” in moderating content and suggested that it would change its approach. And who knows what impact Musk’s business interests in countries such as China and Brazil, or the Saudi ownership stake in X, are having on his content-moderation choices. The black box of content moderation obscures all of these inputs into platform decision making.

Just last term, however, the Supreme Court sounded a very blasé note about these threats. In one case, it held that platforms’ covert manipulation—sorry, content moderation—of the speech on their sites was protected by the First Amendment. Motivated by concerns that platforms were secretly suppressing conservative viewpoints, Texas and Florida had passed laws to constrain platforms’ content moderation and make it more transparent. But the Supreme Court was clear: Covert manipulation of online spaces is not, in itself, a harm that the First Amendment allows U.S. governments to step in and prevent. “Texas does not like the way those platforms are selecting and moderating content, and wants them to create a different expressive product, communicating different values and priorities. But under the First Amendment, that is a preference Texas may not impose,” Justice Elena Kagan wrote for the majority. This argument rested on the principle, fundamental to First Amendment law, that no matter how much damage private companies appear to be doing to the marketplace of ideas, governmental intervention would be worse.

[Read: America lost the plot with TikTok]

In a second case, the Supreme Court concluded that even when evidence suggested that platforms had changed their approach to content moderation after being pressured by the federal government, plaintiffs seeking redress under the First Amendment would have to show that a specific “discrete instance of content moderation”—for example, taking down a specific post, not just changing general policies—was done directly because of government coercion. The upshot of these cases from last year is that the Supreme Court seems to accept a very large amount of covert manipulation of the modern public sphere—including by governments—as merely part of the modern marketplace of ideas.

Courts should be very wary of letting the government’s amorphous claims of “national security” interests become a loophole in the First Amendment. In a globalized and interconnected world, we need a structural answer to what are likely to be recurring problems caused by foreign-ownership stakes in platforms, other forms of foreign influence, and indeed covert manipulation in general. Industry-wide regulation would create fewer fears that the government is simply picking and choosing which platforms to favor and which to restrict.

There are many pathologies of the modern public sphere, and covert manipulation, including by governments, of what we say and read online is definitely one of them. But to water down First Amendment constraints on the American government because of vaguely articulated fears of future Chinese influence would be to endanger what is supposed to make America different, and stronger, in the first place.