As Subrahmanyam indicates, it is important to implement policies that control harmful, negative and distracting uses of electronic media while preserving the ability of this technology to contribute significantly to education and social development.
Countering Internet-based hate crimes was also one of the goals of Canada’s Action Plan Against Racism, a 2005 collaboration of the federal government departments of Canadian Heritage, Citizenship and Immigration Canada, Human Resources and Skills Development Canada, and Justice Canada. As a result of this initiative, training materials have been created for front-line victim services workers and other initiatives have been proposed. In her report for the Department of Justice, “Combating Hate on the Internet: Current Canadian Efforts and the Recommendations of Non-Governmental Organizations to Improve Upon Them,” Andrea Slane identifies a number of strategies suggested by various stakeholder groups for improving Canada’s ability to address online hate: tip lines for reporting online hate sites, court-ordered bans or blocks of offensive material, voluntary blocking, improvements to the Canadian Human Rights Commission’s Internet-based hate propaganda complaints process, improvements for law enforcement, and funding for community groups and education.
Some police forces maintain units or task forces that specialize in dealing with hate. The Ontario Provincial Police has a Provincial Antiterrorism and Hate Crimes Unit, and British Columbia has a specialized Hate Crime Team with two full-time police hate crime specialists. As well, some municipalities such as Calgary, Ottawa, Toronto, Vancouver and Winnipeg have their own dedicated hate crime units. Where no such unit exists, a complaint can be made to the local police. (It is advisable to attach a copy of the offending material to the letter of complaint.) Any hate found on a Canadian website is subject to Canadian legislation, although these regulations do not govern content hosted on servers outside of Canada. Tools such as www.whois.net and FlagFox, an add-on for the Firefox Web browser, allow users to identify the ISP and home country of any website.
A number of sites exist to monitor and document illegal material on the Internet. Some notable examples are Franklin’s Hate Directory; B’nai Brith Canada’s League for Human Rights, which hosts a hate hotline; and the Simon Wiesenthal Center, which has identified thousands of offensive websites. The U.S.-based Simon Wiesenthal Center and the Anti-Defamation League both recommend that Canadian Internet users alert them to any online hate, so they can try to get the offending material removed.
The Alberta Hate and Bias Crime and Incidents Committee was appointed to determine the status of hate and bias crimes within Alberta. In a 2007 report the Committee noted that while a number of non-governmental organizations have initiatives to fight and report incidents of both on- and offline hate, many of these groups lack the resources to spread education and awareness effectively. The Committee lists B’nai Brith Canada’s League for Human Rights, the Canadian Jewish Congress, Friends of the Simon Wiesenthal Center, the Centre for Research-Action on Race Relations, and the Canadian Race Relations Foundation as all having educational programs that teach human rights protection for all Canadians. Further initiatives are urged, however, such as the creation of police hate crime squads in provinces where they have not been established and the development of public awareness programs on hate crime.
Reporting Online Hate
Another means of fighting hate is to report it to the service or site that hosts the content. Many Internet service providers, for instance, will remove hate content hosted on their servers once they are made aware of it. In Canada, the Canadian Association of Internet Providers directs users to Cybertip.ca, a tipline for reporting the online sexual exploitation of children. However, ISPs do not have the authority to determine what material is illegal, and most are therefore reluctant to remove suspect content from their servers without official direction from a law enforcement agency. Moreover, while ISPs may choose to remove hate material, most hate groups are hosted on servers in the United States, where there are almost no legal limitations on hate speech, so there is very little that can be done to make those hosts remove it.
Many sites that house user-generated content do have formal mechanisms for reporting hate content, such as the ability to “report” a page or profile on Facebook or to “flag” a video on YouTube. Because of the huge volume of content on these sites, they rely on users to alert them to hate material. Each of these sites has its own standard for what will be removed: Facebook, for instance, has removed pages that call for violence but has refused to take down pages associated with hate groups or Holocaust denial material.
File a Complaint with the Canadian Human Rights Commission
The Commission’s 2002 decision forcing Holocaust denier Ernst Zundel to “cease and desist” from publishing hate material on his website was a landmark. It affirmed that the Commission does have the right to receive complaints and make decisions about hate material on the Internet, as well as in telephone communications. If the Commission decides that material falls within the legal definition of hate, it appoints an investigator to research the matter and determine whether the case should be pursued. Such a case may be brought before a Human Rights Tribunal, which has the legal authority to subpoena witnesses.
Combating Online Hate
Digital media such as blogs, wikis and text messaging have generated new forums for participation in hate cultures, particularly among youth, and present unique problems for responding to online hate.
While many schools and households rely on filtering software to protect youth from exposure to hate material, these programs are not a complete solution: hate often comes in subtle forms, such as cloaked sites, that filters do not pick up. The most effective long-range strategy for helping young people is to give them plenty of information about online hate, as well as the critical thinking skills to decode messages of hate and read between the lines. Young people need to understand that the Internet has no gatekeepers and must learn to distinguish between biased, prejudicial material and fair, accurate information.
Digital literacy skills development has been called for by many authors as an essential piece of any comprehensive approach to combating online hate. These skills enable young people to critically deconstruct images produced by hateful media and provide an effective way of understanding multiple perspectives, in turn reducing racism, sexism and homophobia.
One of the most crucial critical digital literacy skills for combating hate is knowing how to verify sources. Hate groups put considerable effort into making their sites look legitimate by including many of the markers that youth use to determine credibility: a dot-org Web address, quotes and citations from other sources (even if those are distorted, misquoted, made up or simply drawn from other hate groups), claims of expertise (nearly every author on hate sites is a “doctor,” and many groups call themselves “institutes”), and an appealing and professional design.
Youth also place a great deal of importance on search engine rankings, usually clicking on the first search result without checking other hits. Unfortunately, the algorithms used by search engines are not necessarily designed to provide the most reliable information: for example, for many years the top hit on Google for the term “Jew” was the anti-Semitic group Jewwatch. (It’s now typically number two, thanks to the increased popularity of Wikipedia.)
We can teach youth a wide variety of skills for verifying sources, such as doing a link search to find out who links to a site or a Whois search to show who owns a domain. Even doing a Google search on a source can reveal whether it’s considered to be reliable: for instance, when we do a search for “National Policy Institute,” the second hit after the site itself is the Southern Poverty Law Center, which identifies it as a leading source of “academic racism.”
Teaching youth to think critically about all the media they consume prepares them to recognize both overt and cloaked hate. Teaching young people about the techniques hate groups use to make their arguments – and the common elements of their ideologies – can help alert them to “red flags” showing that a source is trying to manipulate them or provide biased information.
Online Hate In The Curriculum
In British Columbia, Alberta, Saskatchewan, Manitoba, Ontario, Nunavut, the Northwest Territories and Yukon there are no specific provincial/territorial curricular requirements or provisions directly related to hateful media or online hate, although online hate has been addressed at the ministry level in Ontario as part of the memorandum Developing and Implementing Equity and Inclusive Education Policies in Ontario Schools. However, curricular requirements in many of these provinces/territories (particularly Ontario and the western provinces) do stress general media literacy and allow for flexibility in how media literacy skills are taught. Broad phrasing allows issues relating to online hate to be integrated into lessons with ease, even if online hate education is not a mandated aspect of the curriculum.
Outcomes for media literacy, which are included in all curricula across the country, can provide an opportunity for deconstructing the methods and messaging of hate groups. Key concepts for media literacy that address how audiences negotiate meaning and how media products are constructions that may contain bias and ideological messages are particularly germane to this topic.
Development of authentication skills – an essential part of assessing online content – is also included across the curriculum.
Curricular requirements for New Brunswick, Nova Scotia, Prince Edward Island, and Newfoundland and Labrador refer to online hate more directly. These include evaluating advertising, employing critical thinking skills to analyze media and confronting issues of power, control and distribution.
In Prince Edward Island, educator Maureen Kerr has developed an after-school program that uses social media to teach children about privacy, safety and other critical online issues. ‘CoolTools’ for the Classroom is based on critical skills required for 21st century learning, including collaboration, creating, communicating and connecting.
The Alberta Hate Crimes Committee, founded in 2001, works to bring together law enforcement, government, non-profit, community, and university members in order to strategically address hate crimes and prejudice in Alberta. In Yukon, the Yukon Education Student Network provides links to some hate and media information.
Existing non-governmental initiatives to address online hate and promote digital literacy are not uniform across Canada: websites are often unprofessional and under-maintained and many previous organizations (such as Youth Reach Out Against Racism) no longer exist. There are no significant active non-governmental hate media education initiatives in Newfoundland and Labrador, Nova Scotia, New Brunswick, Saskatchewan, Manitoba, Northwest Territories, or Nunavut, although there are some promising non-governmental initiatives in other provinces regarding online hate and education.
Canada’s national Media Literacy Week initiative, co-partnered by MediaSmarts and the Canadian Teachers’ Federation, often encourages educators to tackle issues such as hate and online cultures of cruelty in their classrooms through annual themes that have included media stereotyping and representation, cyberbullying, ethical use of technology and digital citizenship.
Transnational initiatives related to hate and the Internet include Safer Internet Day, organized by Insafe each February to promote safer and more responsible use of online technology and mobile phones amongst children and youth.
 Fong, A., Hui, S., & Lee, P. (2005). An Intelligent Categorization Engine for Bilingual Web Content Filtering. IEEE Transactions on Multimedia, 7(6), 1183-1190.
 Guichard, A. (2009). Hate Crime in Cyberspace: the Challenges of Substantive Criminal Law. Information & Communications Technology Law, 18(2), 201-234.
 Hope, A. (2006). School Internet Use, Youth and Risk: a Socio-Cultural Study of the Relation Between Staff Views of Online Danger and Students’ Ages in UK Schools. British Educational Research Journal, 32(2), 307-329.
 Subrahmanyam, K. (2008). Online Communication and Adolescent Relationships. Future of Children, 18(1), 119-146.
 Slane, A. (2007). Combating Hate on the Internet: Current Canadian Efforts and the Recommendations of Non-Governmental Organizations to Improve upon Them. Ottawa: Department of Justice Canada.
Daniels, J. (2008). Race, Civil Rights, and Hate Speech in the Digital Era. In Learning Race and Ethnicity: Youth and Digital Media (pp. 129-154). Cambridge, MA: MIT Press.
RCMP-GRC. (2011). Youth Online and at Risk: Radicalization Facilitated by the Internet. Ottawa: RCMP-GRC National Security Criminal Investigations Program.
Flanagin, A. J., Metzger, M. J., et al. (2010). Kids and Credibility: An Empirical Examination of Youth, Digital Media Use and Information Credibility. Cambridge, MA: MIT Press. http://mitpress.mit.edu/books/full_pdfs/Kids_and_Credibility.pdf
Diversity and Media Toolbox
The Diversity and Media Toolbox is a comprehensive suite of resources that explores issues relating to stereotyping, bias and hate in mainstream media and on the Internet. The program includes professional development tutorials, lesson plans, interactive student modules and background articles.