The following article draws from my recent book, The Life and Death of Freedom of Expression (UTP, 2024). The book began as a second edition of a book that I published in 2000. However, as I began addressing the free speech implications of the internet and social media, I was persuaded that I should view it as a different book.
The arguments in the earlier book about the social character of freedom of expression remain the same (these arguments were subsequently taken up by other writers), but their application in our dramatically changed communication environment is different in several important ways.
The Relationship of Communication
My thinking about freedom of expression begins with a simple observation: Freedom of expression does not simply protect individual liberty from state interference. Rather, it protects the individual’s freedom to communicate with others – to speak to others, and to hear what others have to say. The right of the individual is to participate in an activity that is deeply social in character, that involves socially created languages and the use of collective resources such as the streets and the internet.
Freedom of expression is valuable because human agency and identity emerge in discourse – in the joint activity of creating meaning. In expressing ourselves – in communicating – we give shape to our ideas and feelings. We bring our ideas and feelings to “fuller and clearer consciousness” when we articulate them and put them before ourselves and others. We also come to understand them in light of the reactions of others. At the same time, the views of the listener are reshaped in the process of understanding and reacting to the speaker’s words and locating them within the listener’s existing frames of thought.
While the social character of human agency is seldom mentioned in traditional accounts of the value of freedom of expression, it lies at the foundation of each of them: the democracy-, truth-, and self-realization-based accounts each represent a particular perspective on, or dimension of, the constitution of human agency in community life.
Recognition that individual agency and identity emerge in communicative interaction is crucial to understanding not only the value of expression but also its potential for harm. Our dependence on expression – that our ideas and feelings take shape when given linguistic form — means that words can sometimes be harmful. Expression can threaten, it can harass, and it can undermine self-esteem. Expression can also be deceptive or manipulative. At issue in many of the debates about free speech protection is whether a particular form of expression engages the audience and encourages independent judgment or whether it instead intimidates, harasses, or manipulates the audience.
The Premises of Free Speech
A commitment to freedom of expression means that an individual must be free to speak to others, and to hear what others want to say, without interference from the state. It is said that the answer to bad or erroneous speech is not censorship, but rather more and better speech.
Importantly, the listener, and not the speaker, is seen as responsible (as an independent agent) for any actions he/she takes in response to what he/she hears, including harmful actions, whether those actions occur because he/she agrees or disagrees with the speaker’s message.
In other words, respect for the autonomy of the individual, as either speaker or listener, means that speech is not ordinarily regarded as a cause of harmful action. A speaker does not cause harm simply because he/she persuades the audience of a particular view, and the audience acts on that view in a harmful way.
Underlying the commitment to freedom of expression (and the refusal to treat speech as a cause) is a belief that humans are substantially rational beings capable of evaluating factual and other claims and an assumption that public discourse is open to a wide range of competing views that may be assessed by the audience.
The claim that bad speech should not be censored, but instead answered by better speech, depends on both of these assumptions — the reasonableness of human judgment and the availability of competing perspectives.
A third, but less obvious, assumption underpinning the protection of freedom of expression is that the state has the effective power to either prevent or punish harmful action by the audience. Individuals will sometimes make poor judgments. The community’s willingness to bear the risk of such errors in judgment may depend on the state’s ability to prevent the harmful actions of audience members or at least to hold audience members to account for their actions.
Freedom of expression doctrine has always permitted the restriction of speech that occurs in a form and/or context that discourages independent judgment by the audience or that impedes the audience’s ability to assess the claims made, and the implications of acting on these claims.
Speech may be treated as a cause of audience action when the time and space for independent judgment are compressed or when emotions are running so high that audience members are unable or unlikely to stop and reflect on the claims being made. While the line between conscious appeal or reasoned argument, on the one hand, and manipulation or incitement, on the other, may not be easy to draw (and is indeed a relative matter), it is at least possible to identify some of the circumstances or conditions in which independent judgment is significantly constrained.
The Changing Communication Landscape
What happens, though, when the assumptions underlying the commitment to freedom of expression – about the reasonableness of discourse and the scope of communicative engagement – are eroded or undermined not just in isolated situations but by more systemic changes in public discourse?

In the last part of the twentieth century, two developments in the character and structure of public discourse raised significant challenges for freedom of expression doctrine.
- The first was the rise of lifestyle/commercial advertising, a form of speech that was designed to influence its audience non-cognitively by associating a product with a value or lifestyle. Lifestyle ads make no explicit claims and are generally presented in a context that limits the viewer’s ability to reflect upon their images or associations. Lifestyle, or image-based, advertising over time became the model for other forms of communication, including political speech.
- The second development was the domination of public discourse by a small group of speakers and a limited range of perspectives, resulting from the concentration of media ownership and the high cost of access to the media.
The emergence of the internet, as an important conduit for personal conversation and public discussion, seemed to lessen concerns about media filtering and unequal access to communicative resources. The internet opened public conversation to more voices. It became possible for individuals to bypass the filters of traditional media.
While the internet provides access to a wide range of speakers and viewers, the sheer volume of material that is posted online, without filtering, means that internet users, as a practical matter, are only able to view a tiny portion of what is available. As a consequence, users tend to expose themselves to a relatively narrow range of opinions that reinforce the views they already hold.
Selective access occurs by choice but also by design. The habit of going to sources that confirm one’s existing views (confirmation bias) is reinforced by the algorithms used by search engines such as Google and platforms such as YouTube and Facebook that direct individuals to sites or posts that are similar to those they have visited in the past.
Search engine and social media algorithms are designed to keep users on their platforms and exposed to the platform’s ads. The attention of users is captured by stories that confirm their existing views or play to their biases but also by stories that are sensational in character.
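To make this dynamic concrete, here is a minimal sketch (in Python) of what engagement-driven ranking can look like in the abstract. It is not any platform’s actual system: the scoring function, the weights, and the “sensationalism” measure are all hypothetical. The point is only that a feed optimized for attention will tend to favour material that resembles what the user has already consumed and material that provokes a strong reaction.

```python
# Toy illustration of engagement-driven ranking (all names and weights hypothetical).
from dataclasses import dataclass

@dataclass
class Post:
    topic: str
    sensationalism: float  # 0.0 (dry) to 1.0 (outrage-inducing); an invented score

def engagement_score(post: Post, history: list[str],
                     w_similarity: float = 0.7, w_sensation: float = 0.3) -> float:
    # "Similarity" here is simply the share of the user's history on the same topic.
    similarity = history.count(post.topic) / len(history) if history else 0.0
    return w_similarity * similarity + w_sensation * post.sensationalism

def rank_feed(candidates: list[Post], history: list[str]) -> list[Post]:
    # Posts predicted to hold the user's attention are shown first.
    return sorted(candidates, key=lambda p: engagement_score(p, history), reverse=True)

if __name__ == "__main__":
    history = ["politics", "politics", "sports"]
    candidates = [
        Post("politics", sensationalism=0.9),
        Post("science", sensationalism=0.2),
        Post("sports", sensationalism=0.5),
    ]
    for post in rank_feed(candidates, history):
        print(post.topic, round(engagement_score(post, history), 2))
```

Even this crude model puts the sensational, view-confirming post at the top of the feed; nothing in the ranking asks whether the material is true or whether the user would benefit from seeing something different.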
On platforms such as Facebook, individuals share stories with close friends, social acquaintances, and political allies – a broad group of ‘friends’ who are generally like-minded. When competing positions are formed around particular social groups (linked to ethnicity, religion, class, location), debate between groups ceases to be about persuading others or understanding their views and becomes instead a declaration of group identity or allegiance.
An individual’s beliefs, even ‘beliefs’ about factual matters, are then often based not on judgment or reason but on group membership. This means that even if social media users are not entirely insulated from opposing views, they may be unwilling or unable to engage with those views in a serious way.
This divide is often reinforced by interested corporate and political actors.
The online news/opinion sites that many rely on (particularly on the political right) often provide users with “partisan-confirming” (dis)information and opinion, while also encouraging them to distrust other sources as ‘false news’. And so, even when group members are exposed to the positions and claims of ‘the other side’, they may simply discount them.
A growing number of individuals reject traditional authorities and distrust “experts” and “mainstream” media. There is little common ground in the community on factual matters or the reliability of different sources of information, which has made it difficult to discuss issues and to agree or compromise on public policy.
The breakdown of agreement about sources of information or expertise is a reminder that traditional accounts of free speech often focus on the individual’s direct and personal judgments about ideas and facts, while ignoring her/his judgments about sources and expertise – about whom or what to trust or rely on. In the absence of any agreement about which sources to trust, public discussion of issues, such as global warming or vaccine safety, becomes impossible.
Online Harm and the Limits of Law
The shift to social media as the principal platform for public engagement has added to the ways in which speech can be harmful, while at the same time undermining the effectiveness of traditional legal responses to harmful speech. Forms of speech that in the past may not have been regarded as sufficiently harmful to justify their legal restriction have become more harmful or dangerous in the online world.

Disinformation may not have been a significant problem in a world in which the media sought to filter out false claims. Until recently, then, the legal prohibition of disinformation was limited to particular types of deceit or falsehood, such as false advertising and defamation.
In the online world, however, false or misleading claims spread quickly and widely to individuals who are often not in a position to assess their reliability or the trustworthiness of their source. As a consequence, disinformation has become a much larger and more serious problem for public discourse.
If manipulation in advertising was a concern before the arrival of social media, it is a much greater problem now. The use of data collected by internet platforms and search engines from their users has enabled advertisers, both commercial and political, to target their ads to narrower and narrower groups. Not only do these micro-targeted ads play more effectively to the audience’s biases and fears, they are also often hidden from general view and so escape public scrutiny. The use of micro-targeted advertising during election campaigns, in particular, has raised concerns about the integrity of the election process.
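The basic logic of micro-targeting can be illustrated with a small, purely hypothetical sketch: collected attributes are used to select a narrow audience, and the resulting ad is visible only to that audience. The user records, attribute names, and selection criteria below are invented; real ad platforms are far more elaborate, but the structural point is the same.

```python
# Hypothetical user-data store: demographic and interest attributes per user.
users = [
    {"id": 1, "age": 58, "region": "rural", "interests": {"firearms", "taxes"}},
    {"id": 2, "age": 23, "region": "urban", "interests": {"climate", "housing"}},
    {"id": 3, "age": 61, "region": "rural", "interests": {"taxes", "religion"}},
]

def micro_target(users, min_age, region, required_interest):
    """Return only the users who match every criterion in the (invented) ad buy."""
    return [
        u for u in users
        if u["age"] >= min_age and u["region"] == region and required_interest in u["interests"]
    ]

# This hypothetical ad would be shown to users 1 and 3 and to no one else,
# so whatever it claims never faces the scrutiny of the wider audience.
audience = micro_target(users, min_age=50, region="rural", required_interest="taxes")
print([u["id"] for u in audience])
```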
Legal prohibitions on insult or harassment have generally been confined to very particular contexts such as the workplace, in which the targeted individuals cannot easily avoid direct and personal exposure to denigrating comments. However, in the online world, speech that is insulting or denigrating, although not occurring face-to-face or in a closed environment, can be repetitive, difficult to avoid, widespread, and enduring.
Uncivil communication has, in the past, been tolerated as a necessary cost of protecting free speech, enabling individuals to express strong emotions or to challenge the conventions of public discussion. However, harassing speech has now become so commonplace, so nasty in character, and so difficult to avoid that it threatens to undermine public discourse by intimidating users into silence or driving them from social media platforms.
Hate speech now spreads widely and rapidly through ever larger networks of friends or allies.
Even if we think a particular form of speech is harmful and ought to be regulated, the traditional legal responses seem to be inadequate to the task. Criminal prosecution and civil action are simply too slow and cumbersome to address harmful online speech that is often posted anonymously, and that circulates quickly and widely.
Lacking the capacity to monitor the overwhelming volume of online material, the state has begun to shift responsibility to the platforms themselves, relying, to some extent, on their expertise and infrastructure to filter out unlawful material. The regulation of online content can take several forms, one of which – co-regulation – requires the platforms to establish processes for identifying and removing unlawful material, subject to state oversight.
Because the larger social media platforms must rely on automated systems for reviewing material (given the large volume of material posted daily on their platforms and the speed at which this material can spread), their “decisions” are bound to be imperfect, sometimes catching material that is not unlawful and other times missing material that is. The object of co-regulation, then, is simply to manage systemic risks and to ensure that “due care” is exercised in the creation and application of these processes.
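The trade-off just described can be made concrete with a small, hypothetical sketch of automated review: a classifier assigns each post a probability of being unlawful, and the removal threshold determines which kind of error the system makes more often. The posts, scores, and thresholds below are invented; the point is that no threshold eliminates both over-removal and under-removal, which is why co-regulation focuses on the design and auditing of the process rather than on the correctness of each individual decision.

```python
# Hypothetical posts with an invented classifier score and ground-truth label.
posts = [
    {"text": "heated but lawful political criticism", "unlawful_prob": 0.35, "actually_unlawful": False},
    {"text": "direct threat against a named person",  "unlawful_prob": 0.55, "actually_unlawful": True},
    {"text": "ordinary sports commentary",            "unlawful_prob": 0.05, "actually_unlawful": False},
]

def review(posts, threshold):
    """Remove posts scored at or above the threshold; report both error types."""
    removed, kept = [], []
    for p in posts:
        (removed if p["unlawful_prob"] >= threshold else kept).append(p)
    false_positives = [p for p in removed if not p["actually_unlawful"]]  # lawful speech removed
    false_negatives = [p for p in kept if p["actually_unlawful"]]         # unlawful speech missed
    return removed, false_positives, false_negatives

for threshold in (0.3, 0.6):
    removed, fp, fn = review(posts, threshold)
    print(f"threshold={threshold}: removed={len(removed)}, "
          f"lawful speech removed={len(fp)}, unlawful speech missed={len(fn)}")
```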
The Future of Free Speech
What future does the right to free expression have in this changing communication landscape?
A reliance on ‘more speech’ as the answer to bad speech – to false or deceptive claims – seems inadequate in a communication environment that is increasingly fragmented, in which a significant element of the population is not only receptive to ‘false news’ and conspiracy theories but is also hostile to competing opinions and evidence that contradicts their views, and in which privately owned platforms employ algorithms that give prominence to some posts and perspectives and downplay others.
To view what is going on as simply a crisis in free speech, as a failure to protect free and open discussion, or as excessive or unjustified (state) censorship, is to misunderstand the serious problem before us and to offer solutions that may be counterproductive. The main threat to public discourse is no longer censorship (and state censorship in particular), at least as this is understood in the traditional free speech model, but is instead the barrage of (targeted) disinformation that is undermining our ability to make judgments about truth and right and our willingness to engage with those who hold different views.
There are some imaginable legislative responses to this crisis, but they are far from perfect and, even then, it is difficult to be optimistic about our willingness to implement them. Yet our survival as a democratic political community depends on our ability to address these issues.
We do not need to reach agreement on all important public issues, but we do need to be able to converse with one another on these matters, in a way that recognizes that we are all participants in a common political project.