Reporting Platforms: Young Canadians Evaluate Efforts to Counter Disinformation


MediaSmarts facilitated online focus groups with youth ages 16 to 29 to examine and assess reporting processes on popular apps (Instagram, TikTok, Twitter, and YouTube). More specifically, we wanted to understand how young Canadians feel about current efforts to counter misinformation and disinformation online and what solutions they have regarding the problems and concerns they experience while navigating online information ecosystems and communities.

Overall, youth expressed a lack of trust and confidence in the ability of platforms to counter misinformation and disinformation and to keep them informed and safe online. Participants emphasized that users are not provided with meaningful opportunities to engage with platforms to prevent and address misinformation and disinformation, and they felt platforms must make design changes that prioritize values of accuracy, transparency, trust, responsibility, and safety.

This project, funded by the Department of Canadian Heritage, continues MediaSmarts' tradition of youth-engaged research by positioning young Canadians as experts to be actively involved in assessing current approaches and designing new policies, interventions, and tools to mitigate the spread of misinformation and disinformation in online spaces.

Reporting and Evaluating Platforms – What We Heard:

Participants highlighted how frequently they encounter misinformation and disinformation on social media apps and other platforms. Most indicated they had come across misleading or false information online within the couple of weeks before their focus group session.

  • "I feel like I see it all the time, and I have to be careful, especially on TikTok." (16- to 18-year-old)
  • "There's a lot of misinformation on Twitter… in the comments section… people posting things that are very misleading." (16- to 18-year-old)
  • "WhatsApp is a huge source of misinformation among family." (23- to 29-year-old)

When we asked what participants do when they see misinformation or disinformation, they shared several strategies. For example, some youth indicated that they tend to ignore the posts or content – either because it takes up too much time and energy to report or because they do not want to draw more attention to the post or the creator who shared it:

  • "Generally, I ignore it… unless it's something that is harmful." (16- to 18-year-old)
  • "Engagement breeds more engagement and saps your time and energy." (23- to 29-year-old) 

Other participants said they sometimes attempt to verify the information because they want to know for sure whether it is 'propaganda' or 'fake':

  • "I always have to look things up after to make sure they're actually true or not." (16- to 18-year-old)
  • "I try to dig out the truth and know what's going on. I believe some disinformation is trying to find out the truth or know what's really happening, so I can't just ignore the information. If it's in my control, I try to find the truth because maybe there's a reason." (23- to 29-year-old)

Some youth indicated a preference for a more relational approach to verifying posts. Older participants said they are more likely to engage with misleading or false posts if they originate from someone close to them:

  • "I try to let [my family] know it's misinformation. Give them a counterpoint to what they send." (23- to 29-year-old)
  • "If it is shared by people I personally know, I talk to them and explain that it's false." (23- to 29-year-old)
  • "For someone in my personal circle, I'll talk to them in real life, not on platforms." (23- to 29-year-old)

A few participants said that they report misinformation and disinformation, especially when it includes inappropriate content or scams, and while they see value in doing so, they are uncertain about the impact. Most explained that it is important for platforms to take on this role of moderating or flagging content because of the potential harmful impacts of misleading and false information and the need to ‘protect users.’

  • "It's important for platforms to flag the information to stop the spread." (16- to 18-year-old)
  • "In my opinion, it is important for platforms to counter misinformation and disinformation because of how misleading this information has been to people that are not sufficiently informed." (16- to 18-year-old)
  • "Anybody can be impacted [by misinformation and disinformation], but it's especially important for vulnerable populations. Like, the elderly may be susceptible to thinking something is real information… Not only elders, but especially children. Young children use platforms where misinformation spreads fast – teaching them how to navigate this is important." (23- to 29-year-old)

Finally, while youth expressed a desire to see platforms play a more prominent role in countering misinformation and disinformation, they were doubtful that platforms would do so, and some participants questioned how well platforms were flagging information and removing the right content. In response, some emphasized wanting to have ‘actual people’ monitoring, moderating, and flagging content on social media platforms.

  • "Yes, but it's also important to understand how algorithms are affecting the flagging. Who is deciding what is misinformation or disinformation for each user?" (23- to 29-year-old)
  • "If I had these billion-dollar companies, it would be worthwhile to hire people to make sure the content is safe and appropriate before they post it." (16- to 18-year-old)
  • "Actual people should review content – put it in the hands of people." (19- to 22-year-old)

During the evaluation activity, we asked youth if they were familiar with current reporting tools and processes on four platforms: Instagram, TikTok, Twitter, and YouTube. Responses varied across platforms and age groups. While some participants were aware of platform tools for flagging misinformation and disinformation, others were not.

We also asked whether they think existing tools clearly define misinformation and disinformation, if the reporting process is easy to navigate, and whether the approach provides users with a meaningful process for combatting misinformation and disinformation on the platform. Opinions were split here as well, and varied by platform. For example, participants who evaluated TikTok agreed that the platform gives users clear definitions of misinformation and disinformation; however, those who evaluated Instagram did not feel the same. Many participants who evaluated YouTube said that the reporting process is easy enough to navigate but did not think that it is a meaningful way to combat false or misleading content on this platform. While all platforms inform users about what will happen after they report content (albeit to varying degrees and with varying detail), participants usually wanted more information about how reports are assessed and how platforms make decisions.

At the end of each focus group session, we asked participants to reflect on whether they think online platforms are doing enough to counter misinformation and disinformation and if they trust platforms to keep them informed and safe online. The consensus was ‘no.’

  • "I would hope that they [platforms] can do a bit more in the future so that I can trust them for safety online." (16- to 18-year-old)
  • "Platforms could do better." (19- to 22-year-old)
  • "I do not believe that platforms are doing enough, and I think that most social media users are aware of how much misinformation can be spread on the internet, but not how to decipher exactly what is misinformation because of the inadequate screening processes." (19- to 22-year-old)

Youth expressed concerns about the values platforms prioritize. In particular, they emphasized that platforms are motivated by usage and engagement, which negatively affects whether a platform will moderate content (and, if so, how well).

  • "Keeping people on the platform is more important to them." (19- to 22-year-old)
  • "I think online platforms will always be more interested in the profit of my data than my actual humanity, so no to both [trust and safety]." (23- to 29-year-old)
  • "It’s against social media’s brand. They want people to scroll through as much as possible to keep them engaged. So, they won’t add extra steps." (23- to 29-year-old)

They also emphasized concerns about how much of the work of identifying and flagging or reporting misinformation and disinformation falls on users:

  • “It really depends on the users to report, and I feel like that can be the biggest problem with online platforms." (16- to 18-year-old)

It's a Vibe: Impacts of Platform Design on Combatting Misinformation and Disinformation

During the focus groups, participants spoke about how a platform's aesthetics, atmosphere, feeling, and values — what they generally referred to as the vibe of the platform — contributed to whether the platform provided meaningful opportunities for countering misinformation and disinformation. Participants emphasized that the vibe of these platforms encourages passive engagement, whereas authenticating and verifying information requires active, critical engagement. This, they explained, felt contrary to a vibe ultimately designed to serve corporate business interests.

On the one hand, participants recognized that users have an important role in combatting misinformation and disinformation, especially because they worried that platforms could ‘get it wrong.’ On the other hand, they saw a critical need for platforms to build more meaningful tools and processes into the user experience, or the vibe, to nudge users towards verification.


In the final part of the evaluation activity and during our closing discussions, participants offered insightful and valuable suggestions for change. Young Canadians shared recommendations specific to the platform they evaluated and general recommendations for countering misinformation and disinformation online, which we organized into four categories: (1) accessibility, awareness, and accuracy; (2) safety and responsibility; (3) trust; and (4) transparency.

For more information, please see the full recommendations available here.