One of the defining qualities of the digital age is that nothing exists in a vacuum: both content and users regularly move between platforms. Different online spaces have their own communities and social norms – or, in the case of the largest platforms such as Facebook or Twitter, many self-segregated and overlapping communities – but because of this movement, each platform’s norms are also shaped by those of other platforms. Research suggests that a community’s norms can be strongly influenced by a committed minority as small as ten percent of its members, so this continuity between networks means that small, deeply committed communities – such as online hate groups – can have a significant effect on the values of much larger platforms.
Most definitions of hate focus on the ways in which entire groups of people are viewed as the ‘Other’. The U.S.-based Tolerance.org says that “prejudices are formed by a complex psychological process that begins with attachment to a close circle of acquaintances, or an ‘in-group’, such as a family. Prejudice is often aimed at ‘out-groups’ – groups that are not included in the ‘in-group’ – on the basis of certain shared characteristics.”
Canadian communications scholar Karim H. Karim points out that the Other is one of a number of human archetypes common to all cultures. When people transfer their fears and hatred to the ‘Other’, the targeted group becomes, in their eyes, less than human. Denying the humanity of victims makes it much easier to justify acts of violence and degradation. Raymond A. Franklin, author of The Hate Directory, an early catalogue of hate websites, defines hate groups as those which “advocate violence against, separation from, defamation of, deception about, or hostility towards Others based on race, religion, ethnicity, gender, or sexual orientation”.
Although it is difficult to determine precise numbers of hate sites on the internet, those that are known to anti-hate organizations are cause for concern. The Simon Wiesenthal Center’s Digital Terrorism and Hate Project tracks hateful websites, blogs, social networking pages and videos on video-sharing sites like YouTube, in order to help identify such threats. The Center’s Rick Eaton explains why networked technology is a boon for hate groups: “25 years ago an organizer would have to stand on street corners handing out literature, cajole acquaintances and others to get them interested, not to mention constant phone calls to keep people interested [to] get them to rallies and so on. Now they can easily post items to blogs and social media, send out mass emails, create discussion forums. In addition to the ease of communication, the internet has for some time provided extremists with a sense of community, that they are not alone in their beliefs.” The result has been an explosion of hate online: “for many years we could track sites in the dozens or even hundreds; now it is impossible to find them all, much less keep track of them.”
What is clear is that Canadians, and young Canadians in particular, do encounter hate speech online: a 2019 survey conducted for the Association for Canadian Studies found that 60 percent of Canadian adults had seen hate speech on social media; MediaSmarts’ study Young Canadians Pushing Back Against Hate Online found that a similar number of Canadian youth had witnessed prejudice online, while just under half had engaged in prejudiced speech online. Young people also comprise the largest group of perpetrators of hate crimes offline in Canada. Given the particular vulnerability of youth, it is vitally important to engage them as early as possible in discussions about hate, and more specifically in discussions about online cultures of hate and hateful content, so that they can recognize it when they encounter it and be empowered to push back against it.
This section explores the ways in which various forms of hate are promoted on the internet and the ways in which young people may be targeted by, or exposed to, hate. It examines the line between hate speech and free speech, provides an overview of relevant legislation and voluntary industry codes, examines the contexts in which hate occurs and explores solutions and responses. Included are key articles along with the latest reports and surveys on these issues.
Xie, J., Sreenivasan, S., Korniss, G., Zhang, W., Lim, C., & Szymanski, B. K. (2011). Social consensus through the influence of committed minorities. Physical Review E, 84(1). doi:10.1103/physreve.84.011130
Tolerance.org. (2011). Test yourself for hidden bias. Teaching Tolerance. Retrieved April 17, 2019, from http://www.tolerance.org/activity/test-yourself-hidden-bias
Karim, K. H. (2003). Islamic Peril: Media and Global Violence (Updated ed.). Montréal: Black Rose Books.
Franklin, R. (2010). The Hate Directory. Retrieved July 14, 2011, from www.hatedirectory.com/hatedir.pdf
Phillips, R. (2016). Who is watching the hate? Tracking hate groups online and beyond. Independent Lens. Retrieved April 17, 2019, from http://www.pbs.org/independentlens/blog/who-is-watching-the-hate-tracking-hate-groups-online-and-beyond/
Scott, M. (2019, January 27). Most Canadians have seen hate speech on social media: Survey. Montreal Gazette. Retrieved April 24, 2019, from https://montrealgazette.com/news/local-news/hate-speech-targets-muslims
Brisson-Boivin, K. (2019). Pushing Back Against Hate Online. Ottawa: MediaSmarts.
Leber, B. (2017). Police-reported hate crime in Canada, 2015. Juristat. Statistics Canada. Retrieved from https://www150.statcan.gc.ca/n1/pub/85-002-x/2017001/article/14832-eng.htm