Instant Messaging (IM) is to young people what email is to their parents’ generation: the preferred way to communicate online, with the added advantage that IM conversations take place in real time.

According to Project Teen Canada, 54 per cent of teenagers use cell phones daily. For parents, cell phones are an easy and practical way to stay connected to and keep tabs on their kids while giving them independence. But for young people, cell phones are much more than a tool for chatting with mom or dad – they’re an essential part of their social lives.

Teens and preteens are at the heart of the social Internet, interacting with others through chat, instant messaging, social networking sites, virtual worlds and online multi-player games. It is inevitable that at an age when young people are starting to explore their sexuality offline, they will do so online in these interactive environments as well.

When most people think about sexual risk and harm on the Internet, sexual predators come to mind. Because of its sensational nature, the spectre of unscrupulous adults preying upon and sexually exploiting kids online gets a lot of media attention. Although this does happen, sensational headlines do not help us understand the nature and true extent of the problem or how to deal with it effectively.

As adults, we want to foster resilience in young people, starting when they’re young. This can be done by teaching them how to handle harassing messages or requests that make them feel uncomfortable – on the Internet or in the schoolyard – and, as they get older, by teaching them how to spot and respond to emotional manipulation. The good news is that most teens are effectively handling online requests from strangers – the bigger challenge is helping them handle sexual advances from people they know.

The Responding to Online Hate guide assists law enforcement personnel, community groups and educators in recognizing and countering hateful content on the Internet – especially as it pertains to youth.

Fong [1], Guichard [2] and Hope [3], among others, have pointed out that current protocols for dealing with online hate have proven inadequate at managing hateful content and providing educational opportunities, largely because they have failed to adequately capture the broad scope and complicated, disputed nature of online hate. Criminal legislation and formal policies have had limited success in addressing the complex issues related to crime in an online context and hate crime in general.

Traditional government responses to online hate have been to police cyberspace as an extension of the state’s territory, ignoring the online/offline divide.

The Internet has been rightly hailed as a groundbreaking interactive marketplace of ideas where anyone with the right hardware and software can set up a cyber-stall. It has become an essential means for people to access information and services, but the downside of this unparalleled information exchange is that, alongside its many valuable resources, the Net also offers a host of offensive materials – including hateful content – that attempts to inflame public opinion against certain groups of people.

It is not always easy to discern when hateful content on the Internet crosses the line from being offensive to illegal. The line between hate speech and free speech is a thin one, and different countries have different levels of tolerance. The line is even thinner in digital environments where hateful comments posted lawfully in one country can be read in other countries where they may be deemed unlawful.