Online Hate and Free Speech

The line between hate speech and free speech is a thin one, and different countries, platforms and communities have different levels of tolerance. The line is even thinner in digital environments where hateful comments posted lawfully in one country can be read in other countries where they may be deemed unlawful.

Hate in a free speech environment

Many argue that the best response to hate speech isn’t criminalization, but more speech. This approach, frequently referred to as “counterspeech,” is explored in more detail in the section on Responses and solutions. There is evidence, though, that permitting all speech can actually be a barrier to free expression, as people who are targets of harassment begin to censor themselves rather than engaging in counterspeech.[1] As well, it is only possible to counter speech that you’re aware of – which means that while it may be effective in fully public online spaces, it will have little impact on hate groups’ use of digital platforms to organize, or their use of targeted posts to radicalize potential members.[2]

Free speech: A worldview

While free expression is guaranteed by the Canadian Charter of Rights and Freedoms, there is legislation that specifically addresses hate speech (discussed in more detail in the section Online Hate and Canadian Law). A recent survey conducted for the Association of Canadian Studies found that a significant majority of Canadians (73 percent) agree that governments should act to limit hate speech online, and 60 percent don’t see this as an unreasonable limitation on freedom of speech.[3] Similarly, 70 percent of Canadians who participated in the Canadian Radio-television and Telecommunications Commission’s consultation on the US cable channel Fox News supported banning it from Canadian airwaves due to the broadcaster’s history of transphobic and homophobic content.[4] MediaSmarts’ research has found that Canadian youth are even less concerned about the risk of infringing free speech in the name of reducing hate content: just over a quarter (28 percent) believe that it’s more important to preserve the right to free speech than to say something about casual prejudice.[5]

Because so many online platforms are based in the United States – and American voices dominate most English-language conversation on online platforms – it’s also important to note that the American approach has traditionally been somewhat different. While both the Canadian Charter of Rights and the American Bill of Rights have provisions guaranteeing free expression, American lawmakers and courts have generally determined that hate speech is illegal only if it leads to certain forms of direct harm, such as defamation or incitement to riot.[6]

In its overview of responses to hate speech, UNESCO has suggested that our conception of it should go beyond its legal definition, because “the emphasis on the potential of a speech act to lead to violence and cause harm... can lead to a narrow approach that is limited to law and order.” Instead, we might view it in terms of “the respect of human dignity, and empowering the targets of speech acts to demand respect and to be defended, thereby placing them, rather than the state or another actor, at the centre of effective responses.”[7]

Free speech in corporate spaces

Further complicating this issue is the fact that most speech online doesn’t take place in a genuinely public space, but rather on a platform owned and operated by a corporation – more like a shopping mall than a public square.

Historically, different kinds of communications technologies have been held to different standards when it comes to regulating speech. It stands to reason that one-to-one media such as telephone and postal services shouldn’t be held responsible for the content they carry, while one-to-many media such as newspapers and TV stations have traditionally been seen as responsible for the content they broadcast. Networked technologies such as websites and social media stand somewhere in between, giving many “broadcasters” the ability to reach large audiences. Because of this, the Ethical Journalism Network has suggested that possible hate material not be judged by its content in isolation, but by considering five factors:

  1. Is the content likely to incite hatred or violence towards others?
  2. Who is publishing or sharing the content?
    • Are they likely to influence their listeners?
    • How might their position or past history influence our understanding of the motivations behind what they’re saying?
    • Is responding to them going to reduce or increase the spread of what they’re saying?
  3. How far is the content spreading?
    • Is this part of a pattern of behaviour?
  4. Is the content intended to cause harm to others?
    • How does it benefit the speaker or their interests?
  5. Is the target of the content a vulnerable group?[8]

While many online platforms cultivate a sense of being public spaces, there is a clear sense among youth that the balance between the right to free expression and the need to limit hate is different there: over two-thirds (68 percent) of American university students feel that social media platforms should take action against hate speech.[9]

Moderating and deplatforming

Nearly all platforms practise some kind of content moderation, and most of those have some provisions to ban or limit hate speech. In practice, this generally means one or more of five categories of actions:

  1. Content regulation: Removing or relocating content or adding a warning or rebuttal
  2. Account regulation: Terminating, suspending or downgrading users’ accounts
  3. Visibility reductions: De-indexing or gating content
  4. Monetary: Preventing content from being monetizable or fining users
  5. Other: User education and other actions that don’t fit in the first four categories[10]

Moderation is by no means a simple problem, but it is best viewed in terms of trade-offs: between centralized systems that give moderators the power to make decisions and distributed models that rely on users to filter their own experiences, for instance, or between transparent systems, which are seen as clearer and more legitimate, and opaque ones, which are harder for bad actors to take advantage of.[11] These same bad actors continually work to find ways around moderation tools, such as by doodling on banned images or making a composite of multiple images, both of which change the file’s hash and thus avoid detection by moderation algorithms.[12]
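This evasion tactic exploits how exact hash matching works: changing a file by even a single byte produces a completely different digest, so a filter comparing files against a list of known hashes no longer recognizes the altered copy. The following minimal Python sketch illustrates the principle with SHA-256 and stand-in byte strings rather than real image files:

```python
import hashlib

def sha256_hex(data: bytes) -> str:
    """Return the SHA-256 digest of the given bytes as a hex string."""
    return hashlib.sha256(data).hexdigest()

# Stand-ins for image files: the second differs from the first by
# only one byte, like a single doodled pixel on a banned picture.
original = b"...banned image bytes..."
doodled = b"...banned image bytes..!"

print(sha256_hex(original))
print(sha256_hex(doodled))

# The one-byte change yields an entirely different digest, so an
# exact-hash filter will not flag the doodled copy as a match.
print(sha256_hex(original) == sha256_hex(doodled))  # False
```

In practice, platforms respond with perceptual hashing schemes designed to tolerate small alterations, but as the research cited above notes, determined actors find ways around those as well.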

What makes this issue even more challenging is that while harassment and overt statements of hate certainly do harm, hate content that is cloaked in irony, humour and misinformation can be even more damaging because it’s more likely to influence people who are not yet committed to a hateful point of view – especially those, such as young people, who don’t have the background and context to understand why these views are so poisonous.

Deplatforming refers to banning an entire community or blocking discussion of a particular topic or message. While there is some evidence that this leads those communities to regroup in more lenient spaces – where the lack of moderating voices can lead them to become even more radicalized[13] – the evidence suggests that deplatforming hate organizations reduces the spread and engagement of hateful content overall.[14] Deplatforming a single individual, or a small number of hate “super-spreaders,” can also have a significant effect: the deletion of US President Donald Trump’s Twitter account in January of 2021 led to a 25 percent reduction in toxicity of the tweets sent by users who had followed him.[15]

Moreover, each user’s level of toxicity doesn’t remain constant as they move between platforms and spaces. Instead, research suggests that most users adjust their behaviour to match the norms of each community they participate in.[16] This makes it particularly important to empower young people to take part in shaping those norms.

Free speech in the classroom

Like digital media, the classroom is a middle ground that makes it impossible to take an absolutist approach to free speech. As Richard Weissbourd, co-director of Making Caring Common and a senior lecturer at the Harvard Graduate School of Education, writes, in the classroom “two fundamental human and democratic rights... collide. One is the right to free speech... but the other is the right to freedom from discrimination. Do we want to elicit diverse views on immigration when somebody might say that immigrant kids might be deported, and there are immigrant kids in the class?”[17]

Kevin Kumashiro, former dean of the School of Education at the University of San Francisco, points out that our approaches to free speech in the classroom are often based on a number of ideas that can lead to some students being harmed or silenced: that it’s necessary for all points of view to be heard in a democracy, that not being able to freely express hateful opinions is a form of injury, and that the goal of classroom dialogue should be for all participants to “put everything on the table.” Just as in other semi-public spaces, though, Kumashiro points out that a teacher who allows hate speech can silence other students who are the targets of that speech.[18]

MediaSmarts’ guide Complicated Conversations in the Classroom recommends taking the following steps to foster safe and open classroom discussion:

  • Don’t rush it;
  • Encourage open discussion, but draw the line between classroom discussion and political discourse;
  • Set clear and consistent rules, ideally in collaboration with students;
  • Identify which issues you consider “settled” before you start the discussion. (Settled questions are those that either have been conclusively proven or are accepted by society as settled, such as “Should all people receive equal rights under the law?” Active questions are those that are still being discussed, such as “How should we resolve the conflicts between the rights of different groups and people?”); and
  • Respond right away to problematic comments, but don’t let them derail the conversation. Ask for clarification, challenge mistaken beliefs and misleading sources, and try to redirect to an active question.

These steps are discussed in greater detail in the guide.

[1] Lenhart, A., Ybarra, M., Zickhur, K., & Price-Feeney, M. (2016). Online Harassment, Digital Abuse, and Cyberstalking in America (Rep.). New York, NY: Data & Society.

[2] Tufekci, Z. (2018) It’s the (Democracy-Poisoning) Golden Age of Free Speech. Wired.

[3] Scott, M. (2019, January 27). Most Canadians have seen hate speech on social media: Survey. Montreal Gazette.

[4] Karadeglija, A. (2023). “Ban Fox News from TV, CRTC hears as thousands of Canadians write to regulator.” The National Post.

[5] Brisson-Boivin, K. (2019). “Pushing Back Against Hate Online.” MediaSmarts. Ottawa.

[6] McLachlin, B. (2004) Protecting Constitutional Rights: A Comparative View of the United States and Canada. Supreme Court of Canada.

[7] Gagliardone, I., Gal, D., Alves, T., & Martinez, G. (2015). Countering Online Hate Speech (Rep.). Paris: UNESCO.

[8] Hate Speech: A 5 point test for Journalists - Infographics. (n.d.). Ethical Journalism Network.

[9] Knight Foundation. (2018, March 12). 8 ways college student views on free speech are evolving.

[10] Goldman, E. (2021). Content moderation remedies. Mich. Tech. L. Rev., 28, 1.

[11] Jiang, J. A., Nie, P., Brubaker, J. R., & Fiesler, C. (2023). A trade-off-centered framework of content moderation. ACM Transactions on Computer-Human Interaction, 30(1), 1-34.

[12] McDonald, B. (2022) Extremists are Seeping Back into the Mainstream: Algorithmic Detection and Evasion Tactics on Social Media Platforms. GNET.

[13] Horta Ribeiro, M., Jhaver, S., Zannettou, S., Blackburn, J., Stringhini, G., De Cristofaro, E., & West, R. (2021). Do platform migrations compromise content moderation? evidence from r/the_donald and r/incels. Proceedings of the ACM on Human-Computer Interaction, 5(CSCW2), 1-24.

[14] Thomas, D. R., & Wahedi, L. A. (2023). Disrupting hate: The effect of deplatforming hate organizations on their online audience. Proceedings of the National Academy of Sciences, 120(24), e2214080120.

[15] Müller, K., & Schwarz, C. (2022). The Effects of Online Content Moderation: Evidence from President Trump's Account Deletion. Available at SSRN 4296306. (Trump’s deletion was reversed in 2023, but as of this writing he has not yet resumed using his account.)

[16] Rajadesingan, A., Resnick, P., & Budak, C. (2020, May). Quick, community-specific learning: How distinctive toxicity norms are maintained in political subreddits. In Proceedings of the International AAAI Conference on Web and Social Media (Vol. 14, pp. 557-568).

[17] Challenge Ideas, Not People [Video file]. (2017, February 28).

[18] How Hate Speech Complicates Our Understanding of Bullying [Webinar]. (2018, July 24). International Bullying Prevention Association.