It is not always easy to discern when hateful content on the internet crosses the line from offensive to illegal. The line between hate speech and free speech is a thin one, and different countries have different levels of tolerance. It is even thinner in digital environments, where hateful comments posted lawfully in one country can be read in other countries where they may be deemed unlawful.
Hate in a Free Speech Environment
Many argue that the best response to hate speech is not criminalization, but more speech. A classic example of this took place during the 1990s when Canadian Ken McVay, founder of the anti-hate “Nizkor Project”, spent over a decade attempting to engage hate activist and Holocaust denier Ernst Zundel in an online discussion. McVay claimed that the Zundelsite refused “to participate in the interactive forums of the Internet” by avoiding discourse with those who disagreed with its views in favour of spreading hate and recruiting supporters. Today this approach is frequently referred to as “counterspeech,” a strategy explored in more detail in the section on Responses and Solutions. There is evidence, though, that permitting all speech can actually be a barrier to free expression, as people who are targets of harassment begin to censor themselves rather than engaging in counterspeech.[1]
Another issue relating to free speech and the internet is that standards for online speech differ from country to country, which complicates law enforcement in a global medium. For example, a landmark Canadian Human Rights Tribunal decision in 2002 ordered Ernst Zundel to cease and desist publishing hate material on his website. This was an important decision in that it affirmed the right to receive complaints and make decisions about hate material on the internet, as well as in telephone communications. However, because the site was hosted in the United States, the ruling could not be enforced.
Free Speech: A Worldview
While free expression is guaranteed by the Canadian Charter of Rights and Freedoms, there is legislation that specifically addresses hate speech (discussed in more detail in the section Online Hate and Canadian Law). Public opinion in Canada reflects a similar balance: a recent survey conducted for the Association of Canadian Studies found that a significant majority of Canadians (73%) believe that governments should act to limit hate speech online, and 60% do not see this as an unreasonable limit on freedom of speech.[2] MediaSmarts’ research has found that Canadian youth are even less concerned about the risk of infringing free speech in the name of reducing hate content: just over a quarter (28%) believe that it’s more important to preserve the right to free speech than to say something about casual prejudice.[3]
Because so many online platforms are based in the United States – and American voices dominate most English-language conversation on online platforms – it’s also important to note that the American approach has traditionally been somewhat different. The United States Constitution has historically valued individual liberty, and the First Amendment addresses the protection of freedom of speech specifically: “Congress shall make no law respecting an establishment of religion, or prohibiting the free exercise thereof; or abridging the freedom of speech, or of the press; or the right of the people peaceably to assemble, and to petition the Government for a redress of grievances.” Since laws explicitly prohibiting hate speech are unconstitutional in the United States, hate speech is only illegal if it leads to certain forms of direct harm, such as defamation or incitement to riot. Regulating hate speech in the United States is a contentious issue since it is often difficult to prove which hateful words incite or lead to hateful actions – especially with the growing influence of the internet.
In its overview of responses to hate speech, UNESCO has suggested that our conception of it should go beyond its legal definition, because “the emphasis on the potential of a speech act to lead to violence and cause harm… can lead to a narrow approach that is limited to law and order.” Instead, we might view it in terms of “the respect of human dignity, and empowering the targets of speech acts to demand respect and to be defended, thereby placing them, rather than the state or another actor, at the centre of effective responses.”[4]
Free Speech in Corporate Spaces
Further complicating this issue is the fact that most speech online does not take place in a genuinely public space, but rather on a platform owned and operated by a corporation – more like a shopping mall than a public square.
Historically, different kinds of communications technologies have been held to different standards when it comes to regulating speech: it is generally accepted that a one-to-one medium such as a telephone company or postal service should not be held responsible for the content it carries, while one-to-many media such as newspapers and TV stations traditionally have been seen as responsible for the content of what they broadcast. Networked technologies such as websites and social media stand somewhere in between, having many “broadcasters” but also allowing them to reach large audiences. Because of this, the Ethical Journalism Network has suggested that possible hate material not be judged by its content in isolation, but by considering a number of factors:
- Is the content likely to incite hatred or violence towards others?
- Who is publishing or sharing the content?
- Are they likely to influence their listeners?
- How might their position or past history influence our understanding of the motivations behind what they’re saying?
- Is responding to them going to reduce or increase the spread of what they’re saying?
- How far is the content spreading?
- Is this part of a pattern of behaviour?
- Is the content intended to cause harm to others?
- How does it benefit the speaker or their interests?
- Is the target of the content a vulnerable group?[5]
While many online platforms cultivate a sense of being public spaces, there is a clear sense among youth that the balance between the right to free expression and the need to limit hate is different there: over two-thirds (68%) of American university students feel that social media platforms should take action against hate speech.[6]
As Alice Marwick, a Data & Society advisor and professor at the University of North Carolina, points out, “content moderation by private technology companies is not a [U.S.] First Amendment violation; in most cases, it’s just a matter of enforcing pre-existing Terms of Service.” The owners of these platforms, however, have often taken a more absolutist approach to free expression, one which Marwick attributes to a combination of the “hacker ethic” and a fear that taking action on speech will lead to government regulation.[7] While some platforms have more recently taken a more active stance, such as Facebook’s 2019 decision to ban “White nationalist” content as well as outright White supremacy[8], they have traditionally been more reluctant to moderate far-right content than material posted by extremist Muslim groups.[9]
What makes this issue even more challenging is that while harassment and overt statements of hate certainly do harm, hate content that is cloaked in irony, humour, and misinformation can be even more damaging because it is more likely to influence people who are not yet committed to that point of view – especially those, such as young people, who don’t have the background and context to understand why these views are so poisonous.
Free Speech in the Classroom
Like digital media, the classroom is a middle ground that makes it impossible to take an absolutist approach to free speech. As Richard Weissbourd, co-director of Making Caring Common and a senior lecturer at the Harvard Graduate School of Education, writes, in the classroom “two fundamental human and democratic rights… collide. One is the right to free speech… but the other is the right to freedom from discrimination. Do we want to elicit diverse views on immigration when somebody might say that immigrant kids might be deported, and there are immigrant kids in the class?”[10]
Kevin Kumashiro, former dean of the School of Education at the University of San Francisco, points out that our approaches to free speech in the classroom are often based on a number of ideas that can lead to some students being harmed or silenced: that it’s necessary for all points of view to be heard in a democracy, that not being able to freely express hateful opinions is a form of injury, and that the goal of classroom dialogue should be for all participants to “put everything on the table.” Just as in other semi-public spaces, though, Kumashiro points out that a teacher who allows hate speech can silence other students who are the targets of that speech.[11]
A better approach, according to Richard Weissbourd, is to “create clear classroom rules and norms around these things… that we should assume other people’s good intentions, that we shouldn’t engage in stereotypes of any kind, that we should challenge ideas and not people.”[12]
As with household rules, these will be more effective if students are involved in making them and understand how they reflect the shared values of the classroom.
Other ways that teachers can work towards balancing free expression and the right to be free from hate in the classroom include:
- discussing, modelling and promoting empathy;
- encouraging students to push back against discriminatory or derogatory speech that doesn’t “cross the line” you’ve established in the classroom (for example, a student questioning the contributions of non-White people to world history); and
- proactively talking about how words can be hurtful before an incident occurs.[13]
[1] Lenhart, A., Ybarra, M., Zickuhr, K., & Price-Feeney, M. (2016). Online Harassment, Digital Abuse, and Cyberstalking in America (Rep.). New York, NY: Data & Society. Retrieved from https://www.datasociety.net/pubs/oh/Online_Harassment_2016.pdf
[2] Scott, M. (2019, January 27). Most Canadians have seen hate speech on social media: Survey. Montreal Gazette. Retrieved April 24, 2019, from https://montrealgazette.com/news/local-news/hate-speech-targets-muslims
[3] Brisson-Boivin, K. (2019). Pushing Back Against Hate Online. Ottawa: MediaSmarts.
[4] Gagliardone, I., Gal, D., Alves, T., & Martinez, G. (2015). Countering Online Hate Speech (Rep.). Paris: UNESCO.
[5] Hate Speech: A 5 point test for Journalists - Infographics. (n.d.). Retrieved from https://ethicaljournalismnetwork.org/resources/infographics
[6] Knight Foundation. (2018, March 12). 8 ways college student views on free speech are evolving. Retrieved from https://medium.com/informed-and-engaged/8-ways-college-student-views-on-free-speech-are-evolving-963334babe40
[7] Marwick, A. (2017, January 05). Are There Limits to Online Free Speech? Retrieved from https://points.datasociety.net/are-there-limits-to-online-free-speech-14dbb7069aec
[8] Stack, L. (2019, March 27). Facebook Announces New Policy to Ban White Nationalist Content. The New York Times. Retrieved April 24, 2019, from https://www.nytimes.com/2019/03/27/business/facebook-white-nationalist-supremacist.html
[9] Marsi, F. (2019, March 26). How the far right is weaponising irony to spread anti-Muslim hatred. Retrieved from https://www.thenational.ae/world/europe/how-the-far-right-is-weaponising-irony-to-spread-anti-muslim-hatred-1.841430
[10] Challenge Ideas, Not People [Video file]. (2017, February 28). Retrieved April 17, 2019, from https://www.youtube.com/watch?v=tY3MHPXsV7o
[11] How Hate Speech Complicates Our Understanding of Bullying [Webinar]. (2018, July 24). International Bullying Prevention Association.
[12] Challenge Ideas, Not People [Video file]. (2017, February 28). Retrieved April 17, 2019, from https://www.youtube.com/watch?v=tY3MHPXsV7o
[13] Shafer, L. (2018, May 18). Safe Space vs Free Speech? Retrieved April 24, 2019, from https://www.gse.harvard.edu/news/uk/16/05/safe-space-vs-free-speech