By the time John Stuart Mill wrote On Liberty, he judged that free speech advocates had achieved a clear victory over censors. Whereas earlier eras allowed state officials to restrict the liberty of thought and discussion in order to protect their own interests, Mill’s contemporaries treated the absence of such power as self-evident. However, it was far less clear that there could be any objection to states exercising powers to censor at the behest of their citizens, or to citizens exercising their own moral powers to impose uniformity of thought and action on their fellows. The first chapter of On Liberty famously argued that these forms of censorship, too, ought to be condemned.
Mill may have won the first argument, at least in the liberal West. States are often prohibited (de facto or de jure) from implementing even democratically authorized forms of censorship. But the battle over nonstate (or private) censorship remains undecided. On one side are those who believe that censorship requires states, and that talk of censorship by private parties is a category error. On the other are those, sympathetic with Mill, who hold that protection is needed not merely against state tyranny, but also against the tyranny of the prevailing opinion.
My book, Private Censorship, uses this controversy as a jumping-off point. It explores stifling social norms, censorious employers, narrative-preserving media organizations, social media content moderation, and a concentrated market for internet search. In each of these contexts, I argue, it makes perfect sense to talk about censorship. Not only do groups, employers, media outlets, social media platforms, and search engines have powers to censor, but in many cases, there is evidence that they exercise these powers.
Understanding Censorship
On my account, censorship is the suppression of expressive content on the grounds that it is perceived to be dangerous, threatening to moral, religious, or political orthodoxy, or threatening to the material interests of the suppressing party. Importantly, I mean the notion of dangerous speech to be broad enough to cover everything from misinformation and disinformation to expression that threatens social order, to inciting or violent expression. Censors achieve the suppression of these (and other) kinds of speech in numerous ways, such as by sanctioning those who produce it or by preventing audiences from viewing it.
While it is possible to define censorship as something only states can do, this strikes me as unmotivated. After all, many of the reasons that state censorship concerns us apply also when private parties suppress speech based on their own idiosyncratic understandings of what speech is fit to be heard.
Speech restrictions of this kind can undermine or distort democratic deliberation, impede speaker and listener autonomy, and raise concerns that information in the public interest is being buried so that the powerful retain their power. If part of the job description of the concept of censorship is to pick out content restrictions that raise these kinds of concerns, it seems ad hoc to build in a requirement that only certain kinds of agents can engage in it. And so, I don’t.
It is also noteworthy that I decline to define censorship in moral terms. Put differently, it is no part of my definition of censorship that those who engage in it must be doing something wrong. To my mind, this makes sense. After all, even states can permissibly censor material based on its content, provided their regulations meet heightened standards (in the U.S., strict scrutiny). Censorship is sometimes an appropriate response to genuinely bad or harmful speech, and so it makes sense that it is sometimes permissible. But when are private parties permitted to engage in censorship?
When Is Private Censorship Permissible?
In the book, I argue that censorship is permissible when private parties target speech that threatens harm greater than the harms caused by censorship, and when the censorship is likely to prevent the harm without causing comparable harms as a side effect. However, even when censorship fails to meet these conditions, private parties will often properly enjoy legal rights to act in ways constitutive of censorship.
For instance, freedom of association means that groups and individuals can decide whom to associate with and on what terms. Sometimes, private parties (individuals, social groups, and employers) dissociate from one another in an attempt to deter or discourage speech. Provided they do so for the above reasons (because they view the speech as dangerous or threatening), their behavior will count as censorship. Similarly, persons and private entities exercise their own expressive rights when they name and shame others in response to their perception that those others have engaged in harmful speech.
Perhaps less obviously, editorial independence requires giving editors discretion over what they publish. While editorial independence is crucial for good journalism, the discretion it gives editors and publishers can be abused for censorious ends.
Content moderation is part of the product social media platforms sell: platforms inevitably decide what kinds of speech they want to host. Sometimes these moderation decisions are best understood as efforts to censor dangerous ideas. Likewise, search platforms must make choices about how to rank content, some of which will amount to prioritizing certain kinds of content over others. Such priority rankings can be grounded in concerns about the dangerousness or threatening nature of the content ranked.
Because private parties rightly charged with censorship will often be acting within their rights (even when their censorship is impermissible), we cannot simply apply our well-studied response to state censorship (roughly, prohibition) to cover actions by private parties. Still, the fact that a party has a right to do something does not constitute an argument that they should do it. Rights-holders can act in misguided ways that concern us all. For these reasons, we must articulate context-specific norms for appraising and responding to the way private agents use the discretion their rights afford them.
Impending Reforms
Although I believe this is the right approach, many reforms are premised on rejecting it. Rather than afford social media platforms discretion in setting their content moderation policies and developing principles for assessing the exercise of that discretion, laws passed in Florida and Texas would subject social media platforms to legal duties of non-discrimination in their content moderation efforts. As the authors of an amicus brief supporting these laws rightly note, this will mean barring:
- TikTok from suppressing criticism of the Chinese government
- Meta from allowing pro-Israel speech while disallowing pro-Palestine speech
- Platforms from removing Holocaust denial, anti-LGBTQ posts, or “great replacement” propagandists
Though these examples are only meant to illustrate, it is worth reflecting on the fact that they are a mixed bag. TikTok shouldn’t suppress criticism of the Chinese government. On the other hand, platforms seem to be acting appropriately when they remove Holocaust denialism. If the laws are upheld (the Supreme Court has so far declined to rule on their merits), the good cases and the bad cases must stand or fall together.
Unless users are given considerable control over what they see, forcing platforms to tolerate all of this content will likely make the platforms less enjoyable to use. More than that, such a decision wrongly forces private entities to associate with views they might have good reason to loathe, and arguably forces them to be complicit in the expression of certain ideas (to say nothing of offline harm).
Beyond reforms targeting social media platforms, it is not uncommon to hear proposals to regulate what and how the news media reports (including proposals to revive the arguably counterproductive Fairness Doctrine). With every speech-related firing, there are at least some who yearn to stop employers from sanctioning their employees for things they say. Additionally, proposals to regulate search algorithms (perhaps as public utilities) are gaining significant traction.
While these proposals can sound good in the abstract, I argue in Private Censorship that they are misguided. They respond to a problem their proponents have been largely correct to identify. But while such proposals can seem to promote free speech values (this is often their intended purpose), the appearance is superficial.
They also restrict First Amendment freedoms that are crucial for individuals to get together and pursue their visions of the good life. Without the freedom to form associations and organizations that can exclude on the basis of speech and ideology, and without the ability to exclude in their individual capacities, individuals’ abilities to pursue their conceptions of the good are unduly limited.
Corporations vs. Individuals
One objection to a view like mine is that it affords rights properly belonging to persons to artificial entities like corporations. The strongest version of this objection, as I see it, targets public corporations rather than partnerships or private corporations. Publicly traded corporations not only receive numerous benefits from the government that reduce transaction costs and liability but are often owned by hundreds or thousands of people (i.e., shareholders). Here, it can seem especially implausible to talk about the “expressive and associative interests” of such firms.
In the book, I acknowledge this point by distinguishing between firms that are intimate, expressive, both, or neither. Media organizations and search engines are expressive but (often) not intimate. Small businesses are often intimate but (often) not expressive. I argue that the reasons to give firms discretion over the speech of their members are strongest for firms that are both intimate and expressive, and weakest for firms that are neither.
Of course, there are reasons for thinking that firms that are neither expressive nor intimate have legitimate business interests in regulating, for example, what their employees say off the job. An employee who tweets that no one should buy products from her employer is a clear case. On the other hand, an employer that is neither intimate nor expressive oversteps when it fires an employee merely for expressing disagreement with a position her labor union takes on some matter of public policy.
Whether it is best to respond to this overstepping by affording employees at such firms greater legal rights or through social pressure and boycotts depends on the intrinsic merits of the case and the administrative costs of affording the right. Minimally, I think we need more evidence to determine whether the benefits of protecting employees from misguided firings and chilled speech significantly outweigh the increased costs of dissociation, regulation, and vetting. Employers would likely subject prospective employees to increased scrutiny to ensure that they are ‘safe’ and responsible users of social media, and this too has costs for our expressive environment.
Benefits of Private Filtering
Suppose you don’t buy these arguments. Is there any reason for welcoming a private sphere that places considerable power over expression in the jurisdiction of the firms that operate there? There may be. To illustrate, let me focus on the U.S. context. The First Amendment (rightly in my view) protects a great deal of speech that is harmful. This includes speech protected both de jure (e.g., hate speech, false and lying speech, misleading speech, etc.) and de facto (e.g., defamatory speech that fails to meet legal standards, dangerous speech that fails to meet the constitutional criteria related to incitement, harassing speech that falls short of relevant legal standards, etc.).
For reasons I argue elsewhere, I think it is generally good that we have high standards for when states can restrict speech. But I make those arguments in full awareness of the nastiness it entails tolerating and the costs of such toleration. As a result of this dual awareness, it strikes me as good that we broadly allow private parties to restrict speech in ways that we stop the government from doing.
This creates communities and spaces free from the relevant forms of nastiness while ensuring that people are allowed to speak their minds in the public sphere, on their own property, and in online spaces that they either maintain or that welcome them. The hope is that allowing private entities considerable filtration powers in the private sphere, while leaving information unfiltered in the public sphere, strikes an attractive balance: even ideas thought harmful and offensive get a chance to be heard, while people retain the ability to engage in conversations free of them in private spaces not subject to state oversight.
To say this much is to endorse private speech restrictions on instrumental grounds, even if you don’t believe that private parties should have the right to enact them. Of course, when organizations and individuals abuse their discretion, it concerns us all, and we ought to make our case that they are not exercising their rights well, potentially boycotting them or otherwise withdrawing our support.
Even if we do not succeed in changing their behavior by such means, merely calling attention to it can encourage us to look elsewhere for information that might be wrongly kept from us. Lest this seem overly optimistic, it is worth noting that acts of censorship in an otherwise open media environment often paradoxically draw attention to censored material. Because of this, it is harder than one might think for private parties to genuinely suppress information in the broader environment—even if it is relatively easy for them to keep their spaces free of it.