The Basics of Verifying Information - Introduction
The misinformation landscape
The digital age presents us with unprecedented challenges in finding information and making sure that it’s true. Where our first problem used to be getting access to information, the harder task today is filtering what we need from what we don’t. In fact, creating and distributing information is now so easy that we can no longer assume that sources have anything to lose by spreading content that’s false or misleading. In essence, today we all have to be our own librarians, researchers and fact-checkers.
“Telling people to be skeptical is only the starting point. The harder question is how to decide what to trust. If we can’t rely on the content itself, the trustworthiness of the source becomes much more important.”[2]
We can distinguish between three kinds of misinformation, to which we are vulnerable in different ways:[3]
- Falsehoods, in which both the claim and the evidence to support it are false. We are only likely to believe these if one of our shortcuts (particularly identity or surrogate thinking) supports them.
- Truthy falsehoods, in which false evidence is used to support a claim that is or seems to be true. These can lead to rationalization, in which the goal of our reasoning is “bolstering the initial intuitive response.” This occurs “not because one fails to deliberate, but rather that they deliberate in a way that does not facilitate accuracy.”[4] Not surprisingly, then, we are particularly likely to agree with these when they support our partisan identity. Even when people learn that the evidence is false, they tend to view these as more acceptable than the other forms.[5] (This is why people often believe that parody news stories are true and may insist they are “basically true.”)
- Palters, in which true evidence is used to support a false claim. These can be the most difficult to recognize because they are often reasoned arguments based on facts that are, on their face, true, but are misleading or misrepresentative – using statistics that have been cherry-picked or lack important context, for example.
How big a problem is misinformation?
In 2024, about a third of Canadians (32 percent) said they saw information about news or current events they thought was false at least a few times a week. However, half as many (17 percent) said they’d seen information they believed was true and later learned was false at least a few times a week. (These figures hadn’t changed significantly in the previous five years.)[6]
False and misleading information can do significant harm when it is widely spread, when it is on an important topic such as health or politics and when it reaches people who are already inclined to believe it.[7]
While Canadians consider misinformation to be one of the most significant threats to our health and way of life,[8] and almost half say it is becoming more difficult to distinguish between true and false information,[9] just one in five meets the “information hygiene” goals of engaging with opinions they disagree with, regularly reading news from different sources and verifying information before sharing it.[10] Young people, in particular, turn heavily to social media for information and use unreliable cues, such as other users’ comments, to judge whether or not to believe what they see.[11] Young Canadians worry that “the abundance of misinformation and disinformation can overpower their limited access to free, well-sourced quality information … [and] lead to apathy or further ideological entrenchment for youth.”[12]
Misinformation can also be highly engaging: one 2024 study found that while links to misinformation made up just two percent of the links shared by influential Canadian voices, they received 18 percent of the total engagement (shares, Likes, replies, etc.).[13] This engaging quality is both a cause and a consequence of recommendation algorithms, which generally favour content that will provoke strong reactions in order to keep users coming back.[14]
Content made with generative AI – including chatbots, fabricated news articles and websites,[15] impostor social media content, faked voice calls and deepfakes – is a major new development in the misinformation landscape.[16] That’s because these tools actually make text, images, audio or video in response to a user’s prompts, making it possible to mass-produce customized media works at effectively no cost.[17] As a result, platforms such as Facebook have been flooded by low-quality, AI-generated content.[18]
Studies have shown that both AI-generated misinformation[19] and “cyborg” content, created first by AI and then edited by a human, can be highly persuasive.[20] Unfortunately, “as generative AI becomes better and better, the days of looking for tell-tale signs to spot a fake are nearly over.”[21]
An increasing number of people use AI chatbots to get information and answer questions,[22] and search engines like Google have begun integrating “AI overviews” into search results. While these aren’t inherently deceptive like deepfakes, they are prone to errors and “hallucinations”;[23] some of the mistakes that have been observed come from the AI’s inability to recognize parody, while others draw on the prejudiced content and conspiracy theories common on the internet.
As of 2024, two in 10 Canadians are aware of having encountered a deepfake online in the past year.[24] While Canadians are generally skeptical of AI-generated content,[25] there is evidence that this skepticism can lead people to doubt real content,[26] as well as allowing bad actors to employ what’s called “the liar’s dividend”: the ability to claim that anything that makes you look bad, or that you don’t want to believe, is a deepfake.
“When a lot of people think about AI, they think, ‘Oh, it’s going to fool people into believing stuff that’s not true.’ But what it’s really doing is giving people permission to not believe stuff that is true. Because they can say, ‘Oh, that’s an AI-generated image. AI can generate anything now: video, audio, the entire war zone re-created.’ They will use it as an excuse … If you have people’s Spidey sense tingling all the time, they’ll just distrust everything.” – Eliot Higgins, founder of the fact-checking organization Bellingcat[27]
Why do people believe misinformation?
We’re more likely to accept misinformation that falls upon “fertile ground” by reinforcing our current identity and beliefs. Nazi propaganda, for example, influenced audiences more strongly in parts of Germany where antisemitism had been historically high and less so where it had been historically low,[28] while people who already have sexist views are more likely to accept conspiracy theories about feminism.[29]
Of course, we aren’t only influenced by things we see ourselves. Second-order beliefs – what we believe others believe – have a strong influence on what we think and are willing to say.[30] In a media environment, this can result in pluralistic ignorance, in which we falsely believe that fewer people share our opinions or beliefs than is actually true.[31] This may be particularly true online: seeing the same claim multiple times – even if it is only a single original post being shared multiple times by other people – makes us more likely to believe it, a bias known as the illusory truth effect.[32]
Teaching authentication
We can break critical thinking down into a number of aspects:
- The motivation to be accurate;
- Specific skills such as reasoning, problem-solving and decision-making;
- Metacognition, or thinking and reflecting on our own thinking.[33]
All three of these, of course, support and interact with one another, but this division helps us understand the different ways we can teach critical thinking.
There are also a number of ways to make ourselves more resilient to misinformation:
- Fact-checking by experts;
- Implementing affordances and defaults such as accuracy nudges;
- Inoculation against both specific false or misleading claims and disinformation techniques, such as bad-faith arguments;
- Fostering identity management through techniques such as perspective-taking; and
- Improving our thinking, both in general and specifically with regards to online information.[34]
MediaSmarts’ research has found that search and authentication rank first among the digital literacy skills students want to learn,[35] and other studies have found students would like to start learning them as early as possible.[36] People with higher levels of digital media literacy are better at recognizing misinformation,[37] though not all programs are equally effective. Some, for instance, improve participants’ ability to recognize accurate information but not to identify misinformation,[38] or vice-versa. Unfortunately, many students report learning strategies that are unhelpful or counterproductive, such as avoiding Wikipedia entirely or giving more weight to sites with “.org” addresses.[39]
Making people more aware of misinformation seems to make them more likely to try to verify what they see online,[40] but awareness is only the first step. Understanding emotional persuasion,[41] thinking critically, understanding the news and media industries, and knowing how to search for and verify information are essential skills to master if we want to end up with relevant and reliable information.[42] Finland, which is internationally recognized as a leader in fighting mis- and disinformation, embeds this instruction both within a broader context of digital media literacy and across its curriculum.[43]
Rather than focusing solely on either verification or debunking, the most effective programs teach participants the discernment to tell true content from false[44] (for instance, by including both true and false examples)[45] and also promote the idea of finding an answer that is “good enough” rather than expecting people to find an unambiguous truth.[46]
Programs should also include practical elements, such as an authentic opportunity for participants to apply the knowledge and practice the skills they’ve learned immediately after the session.[47] Finally, it’s important that programs leave people feeling more confident about their ability to discern reliable from unreliable content online, rather than making them more alarmed about the issue.[48] Teenagers, in particular, respond well to programs framed as preparing them to act as a resource for less media-savvy members of their families.[49]
[1] Pew Research Center. (2022) Climate Change Remains Top Global Threat Across 19-Country Survey.
[2] Kapoor, S., & Narayanan, A. (2023) How to Prepare for the Deluge of Generative AI on Social Media. Knight First Amendment Institute. https://knightcolumbia.org/content/how-to-prepare-for-the-deluge-of-generative-ai-on-social-media
[3] Langdon, J. A., Helgason, B. A., Qiu, J., & Effron, D. A. (2024). “It’s not literally true, but you get the gist:” how nuanced understandings of truth encourage people to condone and spread misinformation. Current Opinion in Psychology, 101788.
[4] Pennycook, G. (2023). A framework for understanding reasoning errors: From fake news to climate change and beyond. In Advances in experimental social psychology (Vol. 67, pp. 131-208). Academic Press.
[5] Langdon, J. A., Helgason, B. A., Qiu, J., & Effron, D. A. (2024). “It’s not literally true, but you get the gist:” how nuanced understandings of truth encourage people to condone and spread misinformation. Current Opinion in Psychology, 101788.
[6] Lockhart, A., Laghaei, M., & Andrey, S. (2024) Survey of Online Harms in Canada 2024. The Dais.
[7] Tay, L.Q., Lewandowsky, S., Hurlstone, M.J. et al. Thinking clearly about misinformation. Commun Psychol 2, 4 (2024). https://doi.org/10.1038/s44271-023-00054-5
[8] McCarten, J. (2022) Bogus online information, climate change top Canadian fears in latest Pew survey. Associated Press.
[9] Statistics Canada. (2023) Concerns with misinformation online. The Daily.
[10] Heer, T., Heath, C., Girling, K., & Bugg, E. (2021) Misinformation in Canada: Research and Policy Options. Evidence for Democracy.
[11] Hassoun, A., Beacock, I., Consolvo, S., Goldberg, B., Kelley, P. G., & Russell, D. M. (2023, April). Practicing Information Sensibility: How Gen Z Engages with Online Information. In Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems (pp. 1-17).
[12] Canadian Youth Assembly on Digital Rights and Safety. (2023) “Canadian Youth Assembly on Digital Rights and Safety: Recommendations to promote the safety, well-being and flourishing of Canadian youth online.” Montreal, Centre for Media, Technology and Democracy.
[13] Canadian Digital Media Research Network. (2024) Canadian Information Ecosystem Situation Report.
[14] Milli, S., Belli, L., & Hardt, M. (2021, March). From optimizing engagement to measuring value. In Proceedings of the 2021 ACM conference on fairness, accountability, and transparency (pp. 714-722).
[15] Knibbs, K. (2025) How a Small Iowa Newspaper’s Website Became an AI-Generated Clickbait Factory. Wired.
[16] Papadopoulos, S., Papadopoulou, O., Alaphilippe, A. & Giglietto F. (2024) Generative AI and Disinformation Report. Information Technologies Institute.
[17] Woodbury, R. (2024) Weapons of Mass Production. Digital Native.
[18] DiResta, R., & Goldstein, J. A. (2024). How Spammers and Scammers Leverage AI-Generated Images on Facebook for Audience Growth. arXiv preprint arXiv:2403.12838.
[19] Spitale, G., Biller-Andorno, N., & Germani, F. (2023). AI model GPT-3 (dis) informs us better than humans. Science Advances, 9(26), eadh1850.
[20] Goldstein, J. A., Chao, J., Grossman, S., Stamos, A., & Tomz, M. (2024). How persuasive is AI-generated propaganda?. PNAS nexus, 3(2), page034.
[21] Hern, A. (2024) ‘Time is running out’: can a future of undetectable deepfakes be avoided? The Guardian.
[22] Barshay, J. (2024) Teens are looking to AI for information and answers, two surveys show. The Hechinger Report.
[23] Angwin, J., Nelson A. & Palta R. (2024) Seeking Reliable Election Information? Don’t Trust AI. Proof News.
[24] The Strategic Counsel. (2024) Trends in Internet Use and Attitudes: Findings from a Survey of Canadian Internet Users. CIRA.
[25] O’Neill, N. (2023) When it comes to AI search results, Canadians are skeptical: survey. CTV News.
[26] Ternovski, J., Kalla, J., & Aronow, P. (2022). The negative consequences of informing voters about deepfakes: evidence from two survey experiments. Journal of Online Trust and Safety, 1(2).
[27] Quoted in Subramanian, S. (2024) How to Lead an Army of Digital Sleuths in the Age of AI. Wired.
[28] Adena, M., Enikolopov, R., Petrova, M., Santarosa, V., & Zhuravskaya, E. (2015). Radio and the Rise of the Nazis in Prewar Germany. The Quarterly Journal of Economics, 130(4), 1885-1939.
[29] Jolley, D., Mari, S., Schrader, T., & Cookson, D. (2024). Sexism and feminist conspiracy beliefs: hostile sexism moderates the link between feminist conspiracy beliefs and rape myth acceptance. Violence against women, 10778012241234892.
[30] Mildenberger, M., & Tingley, D. (2019). Beliefs about climate beliefs: the importance of second-order opinions for climate politics. British Journal of Political Science, 49(4), 1279-1307.
[31] Mildenberger, M., & Tingley, D. (2019). Beliefs about climate beliefs: the importance of second-order opinions for climate politics. British Journal of Political Science, 49(4), 1279-1307.
[32] Udry, J., & Barber, S. J. (2023). The illusory truth effect: A review of how repetition increases belief in misinformation. Current Opinion in Psychology, 101736.
[33] Saiz, C., & Rivas, S. F. (2011). Evaluation of the ARDESOS programs: An initiative to improve critical thinking skills. Journal of the Scholarship of Teaching and Learning, 34-51.
[34] Ziemer, C. T., & Rothmund, T. (2024). Psychological underpinnings of misinformation countermeasures: A systematic scoping review. Journal of Media Psychology: Theories, Methods, and Applications.
[35] MediaSmarts. (2023). “Young Canadians in a Wireless World, Phase IV: Digital Media Literacy and Digital Citizenship.” MediaSmarts. Ottawa.
[36] Besharat-Mann, R. (2024). Can I trust this information? Using adolescent narratives to uncover online information seeking processes. Journal of Media Literacy Education, 16(1), 1-18.
[37] Steinfeld, N. (2023). How Do Users Examine Online Messages to Determine If They Are Credible? An Eye-Tracking Study of Digital Literacy, Visual Attention to Metadata, and Success in Misinformation Identification. Social Media+ Society, 9(3), 20563051231196871.
[38] Hoes, E. (2024). The Effect of a Real-World, Long-Term Media Literacy Intervention: A Difference-in-Differences Approach.
[39] Besharat-Mann, R. (2024). Can I trust this information? Using adolescent narratives to uncover online information seeking processes. Journal of Media Literacy Education, 16(1), 1-18.
[40] Statistics Canada. (2023) Concerns with misinformation online. The Daily.
[41] Scheibenzuber, C. (2023). Media literacy education against fake news (Doctoral dissertation, lmu).
[42] Edwards, L., Stoilova, M., Anstead, N., Fry, A., El-Halaby, G., & Smith, M. (2021). Rapid evidence assessment on online misinformation and media literacy: Final report for OFCOM.
[43] Benke, E., & Spring M. (2022) US midterm elections: Does Finland have the answer to fake news? BBC News.
[44] Guay, B., Berinsky, A. J., Pennycook, G., & Rand, D. (2023). How to think about whether misinformation interventions work. Nature Human Behaviour, 7(8), 1231-1233.
[45] Altay, S., De Angelis, A., & Hoes, E. (2023). Beyond Skepticism: Framing Media Literacy Tips to Promote Reliable Information.
[46] Hassoun, A., Beacock, I., Consolvo, S., Goldberg, B., Kelley, P. G., & Russell, D. M. (2023, April). Practicing Information Sensibility: How Gen Z Engages with Online Information. In Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems (pp. 1-17).
[47] Capewell, G., Maertens, R., Remshard, M., van der Linden, S., Compton, J., Lewandowsky, S., & Roozenbeek, J. (2023). Misinformation interventions decay rapidly without an immediate posttest. Journal of Applied Social Psychology.
[48] Bateman, J., & Jackson D. (2024) Countering Disinformation Effectively: An Evidence-Based Policy Guide. Carnegie Endowment for International Peace.
[49] Orosz, G., Faragó, L., Paskuj, B., Rakovics, Z., Sam-Mine, D., Audemard, G., ... & Krekó, P. (2024). Softly empowering a prosocial expert in the family: lasting effects of a counter-misinformation intervention in an informational autocracy. Scientific Reports, 14(1), 11763.