Reporting Platforms: Young Canadians Evaluate Efforts to Counter Disinformation
MediaSmarts facilitated online focus groups with youth ages 16 to 29 to examine and assess reporting processes on popular apps (Instagram, TikTok, Twitter, and YouTube). More specifically, we wanted to understand how young Canadians feel about current efforts to counter misinformation and disinformation online and what solutions they propose for the problems and concerns they encounter while navigating online information ecosystems and communities.
Overall, youth expressed a lack of trust and confidence in the ability of platforms to counter misinformation and disinformation and to keep them informed and safe online. Participants emphasized that users are not provided with meaningful opportunities to engage with platforms to prevent and address misinformation and disinformation, and they felt platforms must make design changes that prioritize values of accuracy, transparency, trust, responsibility, and safety.
Key Findings and Recommendations:
This project, funded by the Department of Canadian Heritage, continues MediaSmarts' tradition of youth-engaged research by positioning young Canadians as experts to be actively involved in assessing current approaches and designing new policies, interventions, and tools to mitigate the spread of misinformation and disinformation in online spaces.
Reporting and Evaluating Platforms – What We Heard:
Participants highlighted how frequently they encounter misinformation and disinformation on various social media apps and other platforms. Most indicated they had come across misleading or false information online within a couple of weeks of the focus group sessions.
When we asked what participants do when they see misinformation or disinformation, they shared several strategies. For example, some youth indicated that they tend to ignore the posts or content – either because it takes up too much time and energy to report or because they do not want to draw more attention to the post or the creator who shared it.
Other participants said they sometimes attempt to verify the information because they want to know for sure whether it is 'propaganda' or 'fake'.
Some youth indicated a preference for a more relational approach to verifying posts. Older participants said they are more likely to engage with a misleading or false post if it originates from someone close to them.
A few participants said that they report misinformation and disinformation, especially when it includes inappropriate content or scams, and while they see value in doing so, they are uncertain about the impact. Most explained that it is important for platforms to take on this role of moderating or flagging content because of the potential harmful impacts of misleading and false information and the need to ‘protect users.’
Finally, while youth expressed a desire to see platforms play a more prominent role in countering misinformation and disinformation, they were doubtful that platforms would do so, and some participants questioned how well platforms were flagging information and removing the right content. In response, some emphasized wanting to have ‘actual people’ monitoring, moderating, and flagging content on social media platforms.
During the evaluation activity, we asked youth if they were familiar with current reporting tools and processes on four platforms: Instagram, TikTok, Twitter, and YouTube. Responses varied across platforms and age groups. While some participants were aware of platform tools for flagging misinformation and disinformation, others were not.
We also asked whether they think existing tools clearly define misinformation and disinformation, if the reporting process is easy to navigate, and whether the approach provides users with a meaningful process for combatting misinformation and disinformation on the platform. Opinions were split here as well, and varied by platform. For example, participants who evaluated TikTok agreed that the platform gives users clear definitions of misinformation and disinformation; however, those who evaluated Instagram did not feel the same. Many participants who evaluated YouTube said that the reporting process is easy enough to navigate but did not think that it is a meaningful way to combat false or misleading content on this platform. While all platforms inform users about what will happen after they report content (albeit to varying degrees and with varying detail), participants usually wanted more information about how reports are assessed and how platforms make decisions.
At the end of each focus group session, we asked participants to reflect on whether they think online platforms are doing enough to counter misinformation and disinformation and if they trust platforms to keep them informed and safe online. The consensus was ‘no.’
Youth expressed concerns about the values platforms prioritize. In particular, they emphasized that platforms are motivated by usage and engagement, which negatively affects whether a platform will moderate content (and, if so, how well).
They also emphasized concerns about how much of the work of identifying and flagging or reporting misinformation and disinformation falls on users.
It's a Vibe: Impacts of Platform Design on Combatting Misinformation and Disinformation
During the focus groups, participants spoke about how a platform's aesthetics, atmosphere, feeling, and values – what they generally referred to as the vibe of the platform – contributed to whether it provided meaningful opportunities for countering misinformation and disinformation. Participants emphasized that the vibe of these platforms encourages passive engagement, whereas authenticating and verifying information requires active, critical engagement. This, they explained, felt contrary to a vibe ultimately designed to serve corporate business interests.
On the one hand, participants recognized that users have an important role in combatting misinformation and disinformation, especially because they worried that platforms could 'get it wrong.' On the other hand, they saw a critical need for platforms to build more meaningful tools and processes into the user experience – the vibe – to nudge users towards verification.
Recommendations:
In the final part of the evaluation activity and during our closing discussions, participants offered insightful and valuable suggestions for change. Young Canadians shared recommendations specific to the platform they evaluated and general recommendations for countering misinformation and disinformation online, which we organized into four categories: (1) accessibility, awareness, and accuracy; (2) safety and responsibility; (3) trust; and (4) transparency.
For more information, please see the full recommendations available here.