Responses and Solutions in the Classroom

There are two main strategies for addressing online hate and cultures of hatred in the classroom: teaching youth to recognize and deconstruct it, and empowering them to intervene by answering back to it.[1]

Digital Literacy Education

While many schools and households rely on filtering software to protect youth from exposure to hate material, these programs are not a complete solution: hate often comes in subtle forms, such as cloaked sites, that filters do not pick up.[2] Teaching youth to think critically about all of the media they consume prepares them to recognize both overt and cloaked hate. Teaching them about the techniques hate groups use to make their arguments – and the common elements of their ideologies – can also alert them to “red flags” that show a source is trying to manipulate them or provide biased information.
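
To see why keyword-based filters fall short, consider a minimal sketch in Python. The blocklist and sample passages below are hypothetical illustrations, not drawn from any real filtering product; the point is that a filter matching flagged phrases catches overt slogans but passes pseudo-academic language carrying the same message.

```python
# Minimal sketch of why keyword blocklists miss "cloaked" hate.
# The blocklist and sample passages are hypothetical illustrations,
# not drawn from any real filtering product.

BLOCKLIST = {"white power", "kill all"}  # stand-ins for flagged phrases


def is_blocked(text: str) -> bool:
    """Flag text only if it contains an exact blocklisted phrase."""
    lowered = text.lower()
    return any(phrase in lowered for phrase in BLOCKLIST)


overt = "White power forever!"
cloaked = ("Suppressed peer-reviewed studies prove that some groups "
           "are naturally less capable of self-government.")

print(is_blocked(overt))    # True: the overt slogan is caught
print(is_blocked(cloaked))  # False: the same ideology in academic dress passes
```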

Young people need to understand that the internet has no gatekeepers, and they must learn to distinguish between biased, prejudicial material and fair, accurate information. Many authors have called for digital literacy skills development as an essential piece of any comprehensive approach to combating online hate.[3] These skills enable young people to critically deconstruct the images produced by hateful media, and they provide an effective way of understanding multiple perspectives, in turn reducing racism, sexism and homophobia. Teachers can also draw on digital media literacy to show students “how the alt-right takes advantage of a 24-hour thirst for headlines and garners mainstream media coverage for memes, conspiracy theories and misinformation campaigns.”[4]

Another crucial digital literacy skill for combating hate is knowing how to verify sources.[5] Hate groups put considerable effort into making their sites look legitimate by including many of the markers that youth use to determine credibility[6]: a dot-org web address, quotes and citations from other sources (even if these are distorted, misquoted, fabricated, or simply borrowed from other hate groups), claims of expertise, and an appealing, professional design.

Students also need to learn that the algorithms that generate search results, "trending topics" and recommendations of which video to watch next do not necessarily weight results by accuracy or reliability. Hate and conspiracy groups often attempt to manipulate these algorithms to their advantage, and while some platforms have taken steps to improve results, these changes are usually made in response to a single incident rather than by altering the algorithms to downrank hate content in general. Young people need to learn not to put too much trust in search engine rankings, to examine the URL and snippet of each search result before clicking, and to actively search for videos rather than watching what's "up next."
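
As a rough illustration of how this weighting works, the sketch below shows a hypothetical engagement-only ranking function in Python. The fields and weights are invented for the example, not any platform's real formula; the point is that nothing in the score measures accuracy, so content engineered to provoke clicks and shares outranks a careful source.

```python
# Illustrative sketch: a recommendation score built only from engagement.
# Fields and weights are hypothetical, not any platform's actual formula;
# the point is that accuracy never enters the calculation.

from dataclasses import dataclass


@dataclass
class Video:
    title: str
    clicks: int
    watch_minutes: float
    shares: int
    accurate: bool  # known to fact-checkers, but unused by the ranker


def engagement_score(v: Video) -> float:
    """Rank purely on engagement signals; accuracy plays no part."""
    return 1.0 * v.clicks + 0.5 * v.watch_minutes + 2.0 * v.shares


videos = [
    Video("Careful explainer", clicks=120, watch_minutes=300.0, shares=10, accurate=True),
    Video("SHOCKING conspiracy EXPOSED", clicks=900, watch_minutes=1500.0, shares=200, accurate=False),
]

for v in sorted(videos, key=engagement_score, reverse=True):
    print(f"{engagement_score(v):8.1f}  {v.title}")
# The inaccurate but engaging video prints first.
```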

As platforms have made it harder to manipulate algorithms, hate groups have turned to exploiting "data voids" – "search terms for which the available relevant data is limited, non-existent, or deeply problematic." To take advantage of these, hate groups "encourage users to search for a topic for which the motivated manipulator knows that only one point of view will be represented."[7]

To counter this, we should teach students to search based on the general topic rather than for a specific phrase or term they have encountered. For instance, rather than searching for a specific claim about immigration (the results for which will likely be weighted towards sources making that claim), searching with a more general phrase such as "immigration statistics" will likely produce better results. A toy example of the difference is sketched below.
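
This toy search index in Python illustrates the point. All queries and domains are hypothetical (".example" stands in for real sites): the coined phrase sits in a data void covered only by the sources that seeded it, while the general topic draws on a much broader pool.

```python
# Toy illustration of a "data void": a coined phrase is covered only
# by the sources that seeded it, while a general topic has diverse
# coverage. All queries and domains here are hypothetical.

TOY_INDEX = {
    # Niche phrase seeded by manipulators: one point of view only.
    "immigrant crime wave cover-up": [
        "fringe-blog.example",
        "hate-forum.example",
    ],
    # General topic: broad coverage dilutes the manipulative sources.
    "immigration statistics": [
        "statistics-agency.example",
        "news-outlet.example",
        "university.example",
        "fringe-blog.example",
    ],
}


def search(query: str) -> list[str]:
    """Return the sources 'indexed' for a query."""
    return TOY_INDEX.get(query, [])


for query in TOY_INDEX:
    print(query, "->", search(query))
# The coined phrase surfaces only manipulative sources; the general
# query mixes them with mainstream and institutional ones.
```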

It's also important to double-check sources of information before relying on them: a search for the cloaked hate site National Policy Institute, for example, leads to a Wikipedia article identifying it as "a white supremacist think tank" that "lobbies for white supremacists and the alt-right." We also need to teach students the facts about topics where hate groups attempt to create doubt, such as the Holocaust or the history of slavery, so that there are no data voids for hate groups to exploit by telling young people to "do the research and make up your own mind."

It's also essential to make young people aware that some people online are trying to manipulate them. We need to make sure students know about the cloaked sites that hate groups use to spread their message, and help them recognize the techniques these groups use – an approach that has been found to make people less likely to believe hate-based arguments and to see hate groups as less credible.[8]

For teachers, this means becoming more familiar with these sites themselves. Five have been identified as the ones students most often cite as sources: National Policy Institute, Radix Journal, American Renaissance, Taki’s Magazine and Voat, a forum site similar to Reddit.[9] Teachers can make students aware that these sites are unreliable, and use screen captures from them to show both the techniques they use to seem legitimate and the ways in which they give away their messages of hate. As Jennifer Rich, Executive Director of Education for Genocide Watch, puts it, "teachers need to help students learn to recognize credible sources and not fall victim to alt-right sites that put forth propaganda. In order to combat the darkness in the world and on the web, teachers must have the knowledge and courage to teach about it directly."[10]

When discussing hate content, as with any kind of misinformation, it’s important not to amplify the message. One effective way to do this is to give students what’s been called a “truth sandwich”: start with the truth (for example, “the Holocaust happened”), then present the lie (“some people try to say that it didn’t happen, to raise doubts about it”) and then explain why it’s a lie, while repeating the truth (“they lie about the Holocaust not happening because they don’t want people to feel sympathy for the Jews and other people who died in it.”)

As well as cloaked hate sites, teachers should be aware of – and ready to discuss – the ways hate groups spread their message through videos and memes: "Alt-right rhetoric often cloaks its meaning behind pop-culture references and inside jokes. But if teachers learn to recognize these red flags, they can recognize students who are at risk—and step in."[11]

Helping youth recognize the markers of hate content – in particular, messages that "Other" and dehumanize groups – can also make them less likely to be persuaded by it.[12] Studies have found that reflecting on how one's own group can't be viewed monolithically makes people less likely to dehumanize others.[13] Othering and dehumanization play an important role in the radicalization process because it is only by believing that a single, monolithic "they" are against you that politically motivated violence can be seen as justified. While hate material aimed at the lower levels of the radicalization pyramid often obscures this element, teaching students about it can help them recognize it even when it is couched in more reasonable-sounding language.

Combating Hate Online

As well as teaching youth how to recognize hate online, it's important to empower youth to respond to it. This is important because how we respond to hate or prejudice has a powerful effect on perceptions of what is normal and acceptable in society; this effect is stronger online, where the loudest voices are often taken as the majority.[14] As a result, when we don't respond to hate and prejudice online we contribute to the sense that hateful sentiments or prejudiced beliefs are a part of the social norms of the community. "The majority of us are in favour of curbing hate speech but we’re very exposed to it so we don’t know what to do about it," according to Jack Jedwab, president of the Association for Canadian Studies. The result "is to make people more indifferent to these things because it renders them more banal. We become desensitized to it."[15]

MediaSmarts' own research supports this: six in ten Canadian youth say that they would be more likely to push back against hate online if they had seen someone else do it.[16]

It's important not just to encourage youth to respond to hate, but to train them in how to do so safely and effectively: half of Canadian youth say they don't speak out because they don't know what to say or do.[17] Research has found that such training not only makes young people more likely to confront prejudice, but that empowered students also encourage their friends and peers to act.[18]

Experts on responding to hate have identified a few actions that are most likely to help without doing harm:

  1. Be a good witness by recording the incident, in case you or others want to report it.
  2. Be an ally to anyone being targeted by hate speech. As with cyberbullying, letting people know you're on their side can be powerful in reducing the impact of hate.
  3. Speak out without escalating things.[19]

Speaking out against hate – sometimes called counterspeech – is one of the most powerful but also riskiest responses. The Dangerous Speech Project has found that some types of counterspeech are more likely to be effective than others at either changing a person's behaviour, communicating that hate is not part of the social norms of the community, or both:[20]

  • Telling the speaker that what they are saying does harm. In an online environment, it can be hard to remember that actions have consequences, so it can be valuable to remind those engaging in low-level hate or prejudiced speech that it does have an effect, even if they're just "joking around." MediaSmarts' research found that one of the factors that would make youth most likely to intervene when they witness prejudice online is if someone they know tells them they were hurt by it.[21]
  • Deflecting the conversation. This is not a long-term solution, but it can be effective in defusing a situation where hate speech is escalating.
  • Appealing to shared identity or shared values. Research has found that people are more likely to respond to counterspeech if they feel that the person they're speaking to shares an important part of their identity.
  • Using humour. Though it's important not to make light of hate speech, absurdist humour can reduce the credibility of a group that wants to be seen as powerful and dangerous, and can make an argument more persuasive to others in the community.

The Dangerous Speech Project notes that these strategies are less likely to change the mind of someone who is already deeply committed to hate, but even then they communicate to less committed people in the audience that hate is not the norm.[22] This is essential because MediaSmarts' research found that youth are more likely to intervene if they are confident that their friends and other users feel the same way they do.[23]

The report also notes some strategies that are unlikely to work well, and can even make things worse:

  • Insults, which can escalate a situation and also make the person hold more strongly to their original position, as well as making other community members less likely to speak up.
  • Debunking. As with other forms of misinformation, it's not enough to just point out that someone is saying something factually wrong: you have to replace it with a more compelling truth. As well, it's important to realize that online hate is often based on conspiracy thinking, in which any counter-argument is simply more evidence that "they" want to suppress the truth.[24] In a public forum, though, you may choose to refute hate-based arguments if only so they don't go unchallenged.  
  • Harassment and threats, which (aside from being morally wrong) are likely to build sympathy for the target.             

Reporting Hate

Another means of fighting hate is to report it to the service or site that hosts the content. Political scientist P.W. Singer has advocated a “public health” approach to online hate: “creating firebreaks to misinformation and spreads of attacks that target their customers” and “deplatforming” proven “superspreaders” of harassment.[25]

“When you encounter groups that you find to be in violation of platform policy or that are very toxic, it is useful for you to report them,” according to Kat Lo, a researcher on online communities.[26] Evan Balgord, executive director of the Canadian Anti-Hate Network, points out that a user’s report can add important context that an algorithm or a paid moderator might not recognize, such as in-jokes or coded words or phrases.[27]

While it is important to encourage youth to report hate, there are limitations to this approach. Many internet service providers, for instance, will remove clearly hateful content hosted on their servers once they’re made aware of it, but most are reluctant to remove merely suspect content without official direction from a law enforcement agency.

As well, while "cloaked" hate sites remain a significant threat because students stumble upon them while doing research, more hardcore hate content is increasingly found on social networks and video-sharing sites. Most of these forbid hate material under their Terms of Service and provide mechanisms for reporting it, such as the ability to “report” a page or profile on Facebook or to “flag” a video on YouTube. Because of the huge volume of content on these sites, they rely on users to alert them to hate material. Each site also has its own standard for what will be removed: Facebook, for instance, has removed pages that call for violence but has refused to take down pages associated with hate groups or Holocaust denial material.

Youth shouldn't assume, though, that their report made no difference if they don't get a response. Zahra Billoo, executive director of the San Francisco Bay Area branch of the Council on American-Islamic Relations, suggests that “If the platform doesn’t take action, you can make multiple reports. If a single user is reported on multiple times, or if multiple users report a single user, public awareness can move a platform to take action if one report wouldn’t. These are companies that respond to customers.”[28]

Consumer activism can be another effective way of responding to hate online. Nearly every major online platform relies on advertising for revenue, and pressure on advertisers – and the resulting pressure on platforms from advertisers – is credited with, among other things, the removal of the conspiracy theorist Alex Jones from every major platform[29], which significantly reduced his ability to reach mainstream audiences.[30] Youth should be made aware of the power they have as consumers, and taught that if they see hate content on an online platform they can often make a real difference by complaining to companies that advertise there.

There are also steps that platforms can take to make it easier for users to respond to hate. MediaSmarts’ research has found that young people are more likely to respond to prejudice online if:

  • the site or app they’re using has clear and easy-to-use tools for reporting
  • there are clear rules about what is and isn’t acceptable
  • they know the website or app has punished users for unacceptable behaviour
  • they think most other users agree with them
  • they can report anonymously[31]

Online Hate in the Classroom

Devorah Heitner, founder of Raising Digital Natives, argues that we have a responsibility to address hate in the classroom. “When images and video documenting discriminatory behavior or hateful speech circulate in a community, kids need proactive guidance and support from educators. These conversations are difficult. But I would urge educators to open the discussion.”[32]

Students will often become defensive or try to divert the conversation if they feel that they are being blamed for racism or other forms of hate: they may claim that hate is no longer a problem, or that it is confined to specific places or overtly hateful groups. For that reason, it's important for teachers to be willing to engage in dialogue on difficult topics and to create an environment where no students feel blamed, patronized or condescended to.[33] Moreover, discussing these topics in class can allow students to see that hateful or prejudiced views that appear normalized in online spaces are not held by the majority of their peers.

Resources

Teachers can prepare for these difficult conversations by consulting MediaSmarts’ website section on Online Hate and Free Speech, our professional development workshop Facing Online Hate, and our mini-lesson Unpacking Privilege.

Outcomes for media literacy, which are included in all curricula across the country, can also provide an opportunity for deconstructing the methods and messaging of hate groups. Key concepts of digital media literacy – that audiences negotiate meaning, and that media products are constructions that may contain bias and ideological messages – are particularly germane to this topic. Teachers can consult MediaSmarts’ Digital and Media Literacy Outcomes by Province and Territory to see how our materials on online hate fit curriculum expectations.

Media Literacy Week, a joint initiative of MediaSmarts and the Canadian Teachers’ Federation, often encourages educators to tackle issues such as hate and online cultures of cruelty in their classrooms through annual themes that have included media stereotyping and representation, cyberbullying, ethical use of technology and digital citizenship.

Transnational initiatives related to hate and the internet include Safer Internet Day, organized by Insafe each February to promote safer and more responsible use of online technology and mobile phones amongst children and youth. The U.S. organization Teaching Tolerance offers an Anti-Bias Framework to provide teachers from kindergarten to Grade 12 with benchmarks and scenarios for promoting tolerance and countering hate.

[1] Gagliardone, I., Gal, D., Alves, T., & Martinez, G. (2015). Countering Online Hate Speech (Rep.). Paris: UNESCO.
[2] Cole, S. (2018, October 18). The iPhone's New Parental Controls Block Searches for Sex Ed, Allow Violence and Racism. Retrieved from https://motherboard.vice.com/en_us/article/8xj3bx/new-iphone-parental-controls-block-searches-for-sex-education
[3] Daniels, J. (2008). Race, Civil Rights, and Hate Speech in the Digital Era. In Learning Race and Ethnicity: Youth and Digital Media (pp. 129-154). Cambridge, MA: MIT Press; RCMP-GRC. (2011). Youth Online and at Risk: Radicalization Facilitated by the Internet. Ottawa: RCMP-GRC National Security Criminal Investigations Program.
[4] Collins, C. (2017). “What is the ‘Alt-Right’?” Teaching Tolerance 1.57.
[5] Hussain, G., & Saltman, E.M. (2014). Jihad Trending: A Comprehensive Analysis of Online Extremism and How to Counter It (Rep.). London: Quilliam Foundation.
[6] Flanagin, A.J., Metzger, M.J., et al. (2010). Kids and Credibility: An Empirical Examination of Youth, Digital Media Use and Information Credibility. Cambridge, MA: MIT Press. http://mitpress.mit.edu/books/full_pdfs/Kids_and_Credibility.pdf
[7] Golebiewski, M., & Boyd, D. (2018). Data Voids: Where Missing Data Can Easily Be Exploited (Rep.). Data & Society.
[8] Braddock, K. (2018, August 21). The efficacy of communicative inoculation as counter-radicalization: Experimental evidence. Lecture presented at VOX-Pol Conference, Amsterdam.
[9] Rich, J. (2019, January 23). Schools must equip students to navigate alt-right websites that push fake news. Retrieved from https://theconversation.com/schools-must-equip-students-to-navigate-alt-right-websites-that-push-fake-news-97166
[10] Rich, J. (2019, January 23). Schools must equip students to navigate alt-right websites that push fake news. Retrieved from https://theconversation.com/schools-must-equip-students-to-navigate-alt-right-websites-that-push-fake-news-97166
[11] Collins, C. (2017). “What is the ‘Alt-Right’?” Teaching Tolerance 1.57.
[12] Hussain, G., & Saltman, E.M. (2014). Jihad Trending: A Comprehensive Analysis of Online Extremism and How to Counter It (Rep.). London: Quilliam Foundation.
[13] Resnick, B. (2017, March 07). The dark psychology of dehumanization, explained. Retrieved from https://www.vox.com/science-and-health/2017/3/7/14456154/dehumanization-psychology-explained
[14] Lerman, K., Yan, X., & Wu, X. (2016). The "Majority Illusion" in Social Networks. Plos One, 11(2). doi:10.1371/journal.pone.0147617
[15] Scott, M. (2019, January 27). Most Canadians have seen hate speech on social media: Survey. Montreal Gazette. Retrieved from https://montrealgazette.com/news/local-news/hate-speech-targets-muslims
[16] Brisson-Boivin, K. (2019). Pushing Back Against Hate Online (Rep.). Ottawa: MediaSmarts.
[17] Brisson-Boivin, K. (2019). Pushing Back Against Hate Online (Rep.). Ottawa: MediaSmarts.
[18] Paluck, E. L. (2011). Peer pressure against prejudice: A high school field experiment examining social network change. Journal of Experimental Social Psychology, 47(2), 350-358. doi:10.1016/j.jesp.2010.11.017
[19] Ablow, G. (2016, December 12). Talking Back to Hate Speech, Explained. Retrieved from https://billmoyers.com/story/talking-back-hate-speech-explained/
[20] Benesch, S., Ruths, D., Dillon, K. P., Saleem, H. M., & Wright, L. (2016). Considerations For Successful Counterspeech (Rep.). Dangerous Speech Project.
[21] Brisson-Boivin, K. (2019). Pushing Back Against Hate Online (Rep.). Ottawa: MediaSmarts.
[22] Benesch, S., Ruths, D., Dillon, K. P., Saleem, H. M., & Wright, L. (2016). Considerations For Successful Counterspeech (Rep.). Dangerous Speech Project.
[23] Brisson-Boivin, K. (2019). Pushing Back Against Hate Online (Rep.). Ottawa: MediaSmarts.
[24] Bartlett, J., & Miller, C. (2010). The Power of Unreason: Conspiracy Theories, Extremism and Counter-Terrorism (Rep.). London: Demos.
[25] Amend, A. (2018, February 10). Silicon Valley's Year in Hate. Retrieved from https://www.splcenter.org/fighting-hate/intelligence-report/2018/silicon-valleys-year-hate
[26] Chen, R. (2019, January 23). Social Media Is Broken, But You Should Still Report Hate. Retrieved from https://motherboard.vice.com/en_us/article/d3mzqx/social-media-is-broken-but-you-should-still-report-hate
[27] Chen, R. (2019, January 23). Social Media Is Broken, But You Should Still Report Hate. Retrieved from https://motherboard.vice.com/en_us/article/d3mzqx/social-media-is-broken-but-you-should-still-report-hate
[28] Fishbein, R. (2019, January 17). How to Identify and Report Hate Speech on Social Media. Retrieved from https://lifehacker.com/how-to-identify-and-report-hate-speech-on-social-media-1831018803
[29] Linton, C. (2018, March 04). Advertisers ask YouTube to pull ads from Alex Jones' channels. Retrieved from https://www.cbsnews.com/news/youtube-alex-jones-info-wars-channels-advertisers/
[30] Nicas, J. (2018, September 4). Alex Jones Said Bans Would Strengthen Him. He Was Wrong. The New York Times. Retrieved from https://www.nytimes.com/2018/09/04/technology/alex-jones-infowars-bans-traffic.html
[31] Brisson-Boivin, K. (2019). Pushing Back Against Hate Online (Rep.). Ottawa: MediaSmarts.
[32] Berkowicz, J., & Myers, A. (2016, December 02). Responding to Hate Speech and Bullying in the Digital Age. Retrieved from http://blogs.edweek.org/edweek/leadership_360/2016/12/responding_to_hate_speech_and_bullying_in_the_digital_age.html
[33] Johnson, J. R., Rich, M., & Cargile, A. C. (2008). “Why Are You Shoving This Stuff Down Our Throats?”: Preparing Intercultural Educators to Challenge Performances of White Racism. Journal of International and Intercultural Communication, 1(2), 113-135. doi:10.1080/17513050801891952