Responses and Solutions in the Classroom

There are two main strategies for addressing online hate and cultures of hatred in the classroom: teaching youth to recognize and deconstruct hate, and empowering them to intervene by answering back to it.[1]

Digital media literacy education

While many schools and households rely on filtering software to protect youth from exposure to hate material, these programs aren’t a complete solution: hate often comes in subtle forms, such as cloaked sites, that filters don’t pick up.[2] Teaching youth to think critically about all of the media they consume prepares them to recognize both overt and cloaked hate. Teaching young people about the techniques hate groups use to make their arguments – and the common elements of their ideologies – can alert them to “red flags” showing that a source is trying to manipulate them or to provide biased information.

Young people need to understand that the internet has no gatekeepers, and they must learn to distinguish between biased, prejudicial material and fair, accurate information. Many authors[3] have identified digital media literacy skills development as an essential piece of any comprehensive approach to combating online hate. These skills enable young people to critically deconstruct the images produced by hateful media and provide an effective way of understanding multiple perspectives, in turn reducing racism, sexism and homophobia. Teachers can also draw on digital media literacy to show students “how the alt-right takes advantage of a 24-hour thirst for headlines and garners mainstream media coverage for memes, conspiracy theories and misinformation campaigns.”[4]

Another crucial digital media literacy skill for combating hate is knowing how to verify sources.[5] Hate groups put considerable effort into making their sites look legitimate by including many of the markers that youth use to determine credibility[6]: a dot-org web address, quotes and citations from other sources (even if these are distorted, misquoted, made up or simply taken from other hate groups), claims of expertise, and an appealing and professional design.

Students also need to learn that the algorithms that generate search results, "trending topics" and recommendations of which video to watch next don’t necessarily weight results by accuracy or reliability. Hate and conspiracy groups often attempt to manipulate these algorithms to their advantage, and while some platforms have taken steps to improve results, these steps are usually taken in response to a single incident rather than by changing the algorithms to downrank hate content. Young people need to learn to examine the URL and snippet of each search result before clicking, to actively search for videos rather than watching what's "up next," and to avoid placing too much weight on search engine rankings.

As platforms have made it harder to manipulate algorithms, hate groups have turned to exploiting "data voids" – "search terms for which the available relevant data is limited, non-existent or deeply problematic." To take advantage of these, hate groups "encourage users to search for a topic for which the motivated manipulator knows that only one point of view will be represented."[7]

To avoid falling into data voids, we should teach students to search based on the general topic rather than for a specific phrase or term they have encountered. For instance, rather than searching for a specific claim about immigration (the results of which will likely be weighted towards sources making that claim), searching with a more general phrase such as "immigration statistics" will likely produce better results.

It's also important to double-check sources of information before relying on them. Doing a search on the cloaked hate site National Policy Institute, for example, leads to a Wikipedia article identifying it as "a white supremacist think tank" that "lobbies for white supremacists and the alt-right." We also need to teach students the facts about topics where hate groups attempt to create doubt, such as the Holocaust or the history of slavery, so that they don't have data voids that hate groups can exploit by telling them to "do the research and make up your own mind."

For more information on these digital media literacy topics, see MediaSmarts’ Break the Fake resource.

Another important general media literacy skill is recognizing bad-faith or dishonest arguments. These often appear in places like cloaked hate sites or in legitimate news sources that unknowingly repeat or amplify hate content. They can be very persuasive because they don’t rely on false information, but on selecting or presenting information in misleading ways.

Bad-faith arguments fall into three broad categories. The first are arguments that are dishonest about the facts, such as presenting only the facts that support your argument and leaving out those that don’t. The second are arguments that are dishonest about the issue, such as suggesting that an issue where there’s expert consensus is still under debate. Finally, there are arguments that misrepresent the author’s true position or their stake in the issue. (The MediaSmarts lesson Thinking About Hate explores these arguments in more detail.)

Along with general digital media literacy skills, it’s also important to specifically teach what’s called extremism-related media literacy.[8] For instance, we need to make young people aware that some people online are trying to manipulate them. We also need to make sure students know about the cloaked hate sites these groups use to spread their message, and help them to recognize the techniques they use – an approach that has been found both to make people less likely to believe hate-based arguments and to make hate groups seem less credible.[9]

As Jennifer Rich, Executive Director of Education for Genocide Watch, puts it, “teachers need to help students learn to recognize credible sources and not fall victim to alt-right sites that put forth propaganda. In order to combat the darkness in the world and on the web, teachers must have the knowledge and courage to teach about it directly.”[10]

Much of the hate content that young people encounter, however, doesn’t try to make rational arguments of any kind. It comes in the form of propaganda: memes, mocking videos, slogans and insinuations that defy critical analysis but provoke a powerful emotional reaction. Not all propaganda is hate propaganda, of course, and not all propaganda is necessarily bad, but because it persuades emotionally rather than through rational argument, we need to learn to recognize it and understand how it works.

When discussing hate content, as with any kind of misinformation, it’s important not to amplify the message. One effective way to avoid this is to present it as what’s called a “prebunking,” which has three elements: first, a warning that you’re likely to encounter claims that try to fool or mislead you (this is essential to prevent the claim from taking hold when it’s introduced); next, an introduction to the claim; and, finally, a refutation showing that it’s wrong. To prebunk misleading claims about “Irish slaves,” for instance (which are often used to minimize the magnitude and severity of Black slavery in history),[11] a teacher might use the following:

  • Warning: “Some people spread a myth about the Irish in early America.”
  • Claim: “They argue that Irish people were enslaved in the same way that Black people were.”
  • Refutation: “But unlike Black slaves, indentured servants were free after their term and their children were free even if they were born during the indenture.”

Helping youth recognize the markers of hate content – in particular, messages that "Other" and dehumanize groups – can also make them less likely to be persuaded by it.[12] Studies have found that reflecting on how one's own group can't be viewed monolithically makes people less likely to dehumanize others.[13] Othering and dehumanization play an important role in the radicalization process because it is only by believing that a single, monolithic "they" are against you that politically-motivated violence can be seen as justified. While hate material aimed at the lower levels of the radicalization pyramid often obscures this element, teaching students about it can help them to recognize it even when it is couched in more reasonable-sounding language.

When students understand both propaganda and the ideologies of hate, they are prepared to recognize identity propaganda, which aims to essentialize the differences between the in-group and out-groups, to Other the out-groups and to support the legitimacy of the in-group’s position.[14] This is why it’s important to teach students not just general critical thinking skills but also how to recognize the distinctive ideologies of hate behind such propaganda.

For classroom resources on extremism-related media literacy, see MediaSmarts’ Facing Online Hate lesson series.

Combating hate online

As well as teaching youth how to recognize hate online, it's important to empower them to respond to it, because how we respond to hate or prejudice has a powerful effect on perceptions of what is normal and acceptable in society. This effect is stronger online, where the loudest voices are often taken to be the majority.[15] As a result, when we don't respond to hate and prejudice online, we contribute to the sense that hateful sentiments or prejudiced beliefs are part of the community's social norms. "The majority of us are in favour of curbing hate speech but we’re very exposed to it so we don’t know what to do about it," according to Jack Jedwab, president of the Association for Canadian Studies. The result "is to make people more indifferent to these things because it renders them more banal. We become desensitized to it."[16]

MediaSmarts' own research supports this: six in 10 Canadian youth say that they would be more likely to push back against hate online if they’d seen someone else do it.[17]

It's important not just to encourage youth to respond to hate, but to train them in how to do so safely and effectively: half of Canadian youth say they don't speak out because they don't know what to say or do.[18] Research has found that this kind of training can not only make young people more likely to confront prejudice, but that empowered students also encourage their friends and peers to act.[19]

Experts on responding to hate have identified a few actions that are most likely to help without doing harm:

  • Be a good witness by recording the incident, in case you or others want to report it.
  • Be an ally to anyone being targeted by hate speech. As with cyberbullying, letting people know you're on their side can be powerful in reducing the impact of hate.
  • Speak out without escalating things.[20]

Speaking out against hate – sometimes called counterspeech – is one of the most powerful but also riskiest responses. The Dangerous Speech Project has found that some types of counterspeech are more likely to be effective than others at either changing a person's behaviour, communicating that hate isn’t part of the social norms of the community, or both:[21]

  • Telling the speaker that what they’re saying does harm. In an online environment, it can be hard to remember that actions have consequences, so it can be valuable to remind those engaging in low-level hate or prejudiced speech that it does have an effect, even if they're just "joking around." MediaSmarts' research found that one of the factors that would make youth most likely to intervene when they witness prejudice online is if someone they know tells them they were hurt by it.[22]
  • Deflecting the conversation. This isn’t a long-term solution, but it can be effective in defusing a situation where hate speech is escalating.
  • Appealing to shared identity or shared values. Research has found that people are more likely to respond to counterspeech if they feel that the person they're speaking to shares an important part of their identity.
  • Using humour. Though it's important not to make light of hate speech, absurdist humour can reduce the credibility of a group that wants to be seen as powerful and dangerous, and can make an argument more persuasive to others in the community.

The Dangerous Speech Project notes that these strategies are less likely to change the mind of someone who is already deeply committed to hate, but even then they communicate to less committed people in the audience that hate is not the norm.[23] This is essential because MediaSmarts' research found that youth are more likely to intervene if they are confident that their friends and other users feel the same way they do.[24] It’s also important to remember that the purpose of counterspeech is often not to persuade the person you are responding to, but your shared audience.[25]

Other research has identified the following best practices when engaging in counterspeech:

  • Do not use insults, hateful tropes or language;
  • Focus on logical arguments;
  • Ask others to provide evidence for their claims;
  • Say that you will report threatening or harassing conduct to police or the platform where it happens; and
  • Encourage others to engage in counterspeech.[26]

For best practices on responding to mis- or disinformation, see MediaSmarts’ Check First, Share After program.

MediaSmarts’ My Voice is Louder Than Hate lesson series is designed to expose the “majority illusion” that can make prejudice seem normal online and aims to empower youth to be the “noisy 10 percent” that sets their communities’ values. Based on our research into young Canadians’ experiences pushing back against hate online, it focuses on building students’ empathy and efficacy – first, by showing them that online hate hurts everyone who witnesses it, and second, by providing them with a range of options to respond and opportunities to practice them.

Reporting hate

Another means of fighting hate is to report it to the service or site that hosts the content. Political scientist P.W. Singer has advocated a “public health” approach to online hate: “creating firebreaks to misinformation and spreads of attacks that target their customers” and “deplatforming” proven “superspreaders” of harassment.[27]

“When you encounter groups that you find to be in violation of platform policy or that are very toxic, it is useful for you to report them,” according to Kat Lo, a researcher on online communities.[28] Evan Balgord, executive director of the Canadian Anti-Hate Network, points out that a user’s report can add important context that an algorithm or a paid moderator might not recognize, such as in-jokes or coded words or phrases.[29]

Youth shouldn't assume, though, that their report made no difference if they don't get a response. Zahra Billoo, executive director of the San Francisco Bay Area branch of the Council on American-Islamic Relations, suggests that “If the platform doesn’t take action, you can make multiple reports. If a single user is reported on multiple times, or if multiple users report a single user, public awareness can move a platform to take action if one report wouldn’t. These are companies that respond to customers.”[30]

Consumer activism can be another effective way of responding to hate online. Nearly every major online platform relies on advertising for revenue, so pressure on advertisers – and the resulting pressure advertisers put on platforms – can be powerful: it is credited with, among other things, removing the conspiracy theorist Alex Jones from every major platform,[31] which significantly reduced his ability to reach mainstream audiences.[32] Youth should be aware of the power they have as consumers and be taught that if they see hate content on an online platform, they can often make a real difference by complaining to companies that advertise there.

There are also steps that platforms can take to make it easier for users to respond to hate. MediaSmarts’ research has found that young people are more likely to respond to prejudice online if:

  • the site or app they’re using has clear and easy-to-use tools for reporting
  • there are clear rules about what is and isn’t acceptable
  • they know the website or app has punished users for unacceptable behaviour
  • they think most other users agree with them
  • they can report anonymously.[33]

Online hate in the classroom

Devorah Heitner, founder of Raising Digital Natives, argues that we have a responsibility to address hate in the classroom: “when images and video documenting discriminatory behavior or hateful speech circulate in a community, kids need proactive guidance and support from educators. These conversations are difficult. But I would urge educators to open the discussion.”[34]

Students will often become defensive or try to divert the conversation if they feel they’re being blamed for racism or other forms of hate. They may claim that hate is no longer a problem, or that it’s isolated to specific places or to overtly hateful groups. For that reason, it's important for teachers to be willing to engage in dialogue on difficult topics and to create an environment where no student feels blamed, patronized or condescended to.[35] Moreover, discussing these topics in class can allow students to see that hateful or prejudiced views that appear normalized in online spaces aren’t held by the majority of their peers.

As well, teachers should try to identify hate messages that can be prebunked in their courses – such as the Irish slaves myth discussed above or similar myths about the Middle Ages (in history classes), “scientific racism” (in science or biology classes) or, in mathematics, ways in which statistics can be misused and manipulated to promote fear of other groups.

Content warnings and dealing with potentially harmful content

Teachers who want to address hate material in class may, rightly, be concerned about the emotional impact it can have on their students. While the current state of scholarship suggests that content warnings used on their own provide no benefit and may reinforce harm[36] or lead to slightly more negative experiences,[37] some studies – along with research done in related fields, such as content moderation and extremism research – have identified ways in which content warnings are likely to provide a positive benefit:

  • It’s not enough to warn or alert participants: alternative materials and activities that do not contain the potentially harmful content must be provided. These alternatives must be made plainly available, and it must be made clear that participants won’t be penalized in any way for choosing them.[38]
  • Participants should also be allowed to leave the activity at any time, provided with resources for mental health support and encouraged to turn to them if they feel any distress. These should be provided both in print and electronically, if possible.[39]
  • Content warnings should be specific, describing the potentially harmful content as clearly as possible without including any harmful material.[40] They should be provided far enough before the harmful material that participants are able to make an informed decision.[41]
  • If it isn’t possible or desirable to allow participants to opt out (for instance, if the material is a required part of the curriculum), do not provide a content warning,[42] but do follow the other best practices below.[43]
Whether or not you allow students to opt out of an activity, if you’re presenting material that students may find emotionally harmful (either as part of a lesson on online hate or related topics, such as media stereotyping, or because you’re using course materials that contain such content), you can follow these best practices:
  • Before presenting potentially harmful material, facilitators should establish the norms and values under which the discussion of the material will take place. If feasible, these should be negotiated through discussion with the group.[44] (The Complicated Conversations in the Classroom guide has tips on how to do this.)
  • When presenting content that may be harmful to some students but is also considered to be of artistic or cultural importance, or that some or most participants may view positively overall, acknowledge the tension between these elements and express respect for participants’ tastes and opinions. Make clear that it’s possible to enjoy a work that has problematic aspects without apologizing for those elements, and foreground participants’ personal reactions to the work.[45] (See the MediaSmarts lesson Calling in Versus Calling Out for a fuller exploration of this idea.)
  • When content with a high likelihood of harm (such as hate content) is presented, it should be done in a way that visually distinguishes it from other content, such as a watermark or a particular text colour. Reproduction of potentially harmful content should be limited to the briefest and most relevant parts.[46] Visual material that is potentially harmful should be given a graphic treatment that reduces its impact, such as presenting it with an overlay or in grayscale.[47] Written material that is potentially harmful, such as slurs, should be blurred or starred.[48]
  • Clearly disclaim and denounce the potentially harmful content. Any claims made within it should be explicitly flagged as false and countered as soon – and as close to the content – as possible.[49]
  • Check in with participants throughout the session and consider breaking up the activity or lesson (such as by scheduling other activities in between parts of it) to give participants opportunities to reflect, decompress or practice self-care.[50]
  • Teachers should conduct a debrief with participants at the end of the activity.[51] The debrief should give participants an opportunity for closure, help them address the emotional intensity of the activity and remind them of the available support resources.[52]

Resources

Teachers can prepare for these difficult conversations by consulting MediaSmarts’ guide Complicated Conversations in the Classroom as well as our article Can Media Literacy Backfire?

[1] Gagliardone, I., Gal, D., Alves, T., & Martinez, G. (2015). Countering Online Hate Speech (Rep.). Paris: UNESCO.

[2] Cole, S. (2018, October 18). The iPhone's New Parental Controls Block Searches for Sex Ed, Allow Violence and Racism. Retrieved from https://motherboard.vice.com/en_us/article/8xj3bx/new-iphone-parental-controls-block-searches-for-sex-education

[3] Daniels, J. (2008). Race, Civil Rights, and Hate Speech in the Digital Era. In Learning Race and Ethnicity: Youth and Digital Media (pp. 129-154). Cambridge, MA: MIT Press; RCMP-GRC. (2011). Youth Online and at Risk: Radicalization Facilitated by the Internet. Ottawa: RCMP-GRC National Security Criminal Investigations Program.

[4] Collins, C. (2017). “What is the ‘Alt-Right’?” Teaching Tolerance, 57.

[5] Hussain, G., & Saltman, E.M. (2014). Jihad Trending: A Comprehensive Analysis of Online Extremism and How to Counter It (Rep.). London: Quilliam Foundation.

[6] Flanagin, A. J., Metzger, M., et al. (2010). Kids and Credibility: An Empirical Examination of Youth, Digital Media Use and Information Credibility. Cambridge, MA: MIT Press. https://direct.mit.edu/books/oa-monograph/3184/Kids-and-CredibilityAn-Empirical-Examination-of

[7] Golebiewski, M., & Boyd, D. (2018). Data Voids: Where Missing Data Can Easily Be Exploited (Rep.). Data & Society.

[8] Nienierza, A., Reinemann, C., Fawzi, N., Riesmeyer, C., & Neumann, K. (2021). Too dark to see? Explaining adolescents’ contact with online extremism and their ability to recognize it. Information, Communication & Society, 24(9), 1229-1246.

[9] Braddock, K. (2018, August 21). The efficacy of communicative inoculation as counter-radicalization: Experimental evidence. Lecture presented at VOX-Pol Conference, Amsterdam.

[10] Rich, J. (2019, January 23). Schools must equip students to navigate alt-right websites that push fake news. Retrieved from https://theconversation.com/schools-must-equip-students-to-navigate-alt-right-websites-that-push-fake-news-97166

[11] Hogan, L. (2016). Two years of the ‘Irish slaves’ myth: Racism, reductionism and the tradition of diminishing the transatlantic slave trade. Open Democracy.

[12] Hussain, G., & Saltman E.M. (2014) Jihad Trending: A Comprehensive Analysis of Online Extremism and How to Counter It (Rep.) London: Quilliam Foundation.

[13] Resnick, B. (2017, March 07). The dark psychology of dehumanization, explained. Retrieved from https://www.vox.com/science-and-health/2017/3/7/14456154/dehumanization-psychology-explained

[14] Reddi, M., Kuo, R., & Kreiss, D. (2021). Identity propaganda: Racial narratives and disinformation. New Media & Society, 14614448211029293

[15] Lerman, K., Yan, X., & Wu, X. (2016). The "Majority Illusion" in Social Networks. PLOS ONE, 11(2). doi:10.1371/journal.pone.0147617

[16] Scott, M. (2019, January 27). Most Canadians have seen hate speech on social media: Survey. Montreal Gazette. Retrieved from https://montrealgazette.com/news/local-news/hate-speech-targets-muslims

[17] Brisson-Boivin, K. (2019). Pushing Back Against Hate Online (Rep.). Ottawa: MediaSmarts.

[18] Brisson-Boivin, K. (2019). Pushing Back Against Hate Online (Rep.). Ottawa: MediaSmarts.

[19] Paluck, E. L. (2011). Peer pressure against prejudice: A high school field experiment examining social network change. Journal of Experimental Social Psychology, 47(2), 350-358. doi:10.1016/j.jesp.2010.11.017

[20] Ablow, G. (2016, December 12). Talking Back to Hate Speech, Explained. Retrieved from https://billmoyers.com/story/talking-back-hate-speech-explained/

[21] Benesch, S., Ruths, D., Dillon, K. P., Saleem, H. M., & Wright, L. (2016). Considerations For Successful Counterspeech (Rep.). Dangerous Speech Project.

[22] Brisson-Boivin, K. (2019). Pushing Back Against Hate Online (Rep.). Ottawa: MediaSmarts.

[23] Benesch, S., Ruths, D., Dillon, K. P., Saleem, H. M., & Wright, L. (2016). Considerations For Successful Counterspeech (Rep.). Dangerous Speech Project.

[24] Brisson-Boivin, K. (2019). Pushing Back Against Hate Online (Rep.). Ottawa: MediaSmarts.

[25] Buerger, C. (2021). Counterspeech: A literature review. Available at SSRN 4066882.

[26] Williams, M. (2019). Hatred Behind the Screens: A Report on the Rise of Online Hate Speech (Rep.).

[27] Amend, A. (2018, February 10). Silicon Valley's Year in Hate. Retrieved from https://www.splcenter.org/fighting-hate/intelligence-report/2018/silicon-valleys-year-hate

[28] Chen, R. (2019, January 23). Social Media Is Broken, But You Should Still Report Hate. Retrieved from https://motherboard.vice.com/en_us/article/d3mzqx/social-media-is-broken-but-you-should-still-report-hate

[29] Chen, R. (2019, January 23). Social Media Is Broken, But You Should Still Report Hate. Retrieved from https://motherboard.vice.com/en_us/article/d3mzqx/social-media-is-broken-but-you-should-still-report-hate

[30] Fishbein, R. (2019, January 17). How to Identify and Report Hate Speech on Social Media. Retrieved from https://lifehacker.com/how-to-identify-and-report-hate-speech-on-social-media-1831018803

[31] Linton, C. (2018, March 04). Advertisers ask YouTube to pull ads from Alex Jones' channels. Retrieved from https://www.cbsnews.com/news/youtube-alex-jones-info-wars-channels-advertisers/

[32] Nicas, J. (2018, September 4). Alex Jones Said Bans Would Strengthen Him. He Was Wrong. The New York Times. Retrieved from https://www.nytimes.com/2018/09/04/technology/alex-jones-infowars-bans-traffic.html

[33] Brisson-Boivin, K. (2019). Pushing Back Against Hate Online (Rep.). Ottawa: MediaSmarts.

[34] Berkowicz, J., & Myers, A. (2016, December 02). Responding to Hate Speech and Bullying in the Digital Age. Retrieved from http://blogs.edweek.org/edweek/leadership_360/2016/12/responding_to_hate_speech_and_bullying_in_the_digital_age.html

[35] Johnson, J. R., Rich, M., & Cargile, A. C. (2008). “Why Are You Shoving This Stuff Down Our Throats?”: Preparing Intercultural Educators to Challenge Performances of White Racism. Journal of International and Intercultural Communication, 1(2), 113-135. doi:10.1080/17513050801891952

[36] Bridgland, V. M., & Takarangi, M. K. (2021). Danger! Negative memories ahead: The effect of warnings on reactions to and recall of negative memories. Memory, 29(3), 319-329.

[37] Bridgland, V., Jones, P. J., & Bellet, B. W. (2022). A meta-analysis of the effects of trigger warnings, content warnings, and content notes.

[38] Dickman-Burnett, V. L., & Geaman, M. (2019). Untangling the trigger-warning debate. Journal of Thought, 53(3/4), 35-52.

[39] Vanner, C., & Almanssori, S. (2021). ‘The whole truth’: student perspectives on how Canadian teachers should teach about gender-based violence. Pedagogy, Culture & Society, 1-20.

[40] Dickman-Burnett, V. L., & Geaman, M. (2019). Untangling the trigger-warning debate. Journal of Thought, 53(3/4), 35-52.

[41] Derczynski, L., Kirk, H. R., Birhane, A., & Vidgen, B. (2022). Handling and presenting harmful text. arXiv preprint arXiv:2204.14256.

[42] Bridgland, V., Jones, P. J., & Bellet, B. W. (2022). A meta-analysis of the effects of trigger warnings, content warnings, and content notes.

[43] Derczynski, L., Kirk, H. R., Birhane, A., & Vidgen, B. (2022). Handling and presenting harmful text. arXiv preprint arXiv:2204.14256.

[44] Dickman-Burnett, V. L., & Geaman, M. (2019). Untangling the trigger-warning debate. Journal of Thought, 53(3/4), 35-52.

[45] Hartford, K. L. (2016). Beyond the trigger warning: Teaching operas that depict sexual violence. Journal of Music History Pedagogy, 7(1), 19-34.

[46] Derczynski, L., Kirk, H. R., Birhane, A., & Vidgen, B. (2022). Handling and presenting harmful text. arXiv preprint arXiv:2204.14256.

[47] Karunakaran, S., & Ramakrishan, R. (2019, October). Testing stylistic interventions to reduce emotional impact of content moderation workers. In Proceedings of the AAAI Conference on Human Computation and Crowdsourcing (Vol. 7, pp. 50-58).

[48] Derczynski, L., Kirk, H. R., Birhane, A., & Vidgen, B. (2022). Handling and presenting harmful text. arXiv preprint arXiv:2204.14256.

[49] Derczynski, L., Kirk, H. R., Birhane, A., & Vidgen, B. (2022). Handling and presenting harmful text. arXiv preprint arXiv:2204.14256.

[50] Dickman-Burnett, V. L., & Geaman, M. (2019). Untangling the trigger-warning debate. Journal of Thought, 53(3/4), 35-52.

[51] Derczynski, L., Kirk, H. R., Birhane, A., & Vidgen, B. (2022). Handling and presenting harmful text. arXiv preprint arXiv:2204.14256.