Generative AI

Generative AI is what we call AI systems that can generate things like images, video, voice and text. They do this by first encoding many examples of the kind of content they’re going to make, then decoding to make something new.[1]

From the user’s point of view, using generative AI starts with providing what’s called a prompt: a description of what you want the AI to generate (a text, image, video, piece of music, etc.). Prompts can be very simple (such as “an apple”) or may include instructions about how to make the content (such as “an apple in the pointillistic style of Cezanne”). They can also put limits on what the AI can do or ask it to take on a particular role.

AI may be effective in reducing belief in conspiracy theories, by giving accurate information and counterarguments seen as coming from an objective source.[2] People may also be better at spotting their own biases when they’re reflected by AIs that were trained on their decisions.[3] At the same time, generative AI tools can sometimes be used to produce intentionally misleading content, ranging from websites and social network pages that use imaginary news stories and images to draw traffic[4],[5] to conspiracy theories and political disinformation.[6] This content has been found to be highly persuasive,[7] especially if human operators put a small amount of time and effort into improving it.[8] A 2023 study found that more than half of people thought they’d seen false or misleading AI-made content over the past six months, and roughly the same number weren’t sure whether they’d recognize AI-made disinformation if they saw it.[9] AI chatbots also frequently reproduce popular misconceptions, such as the false belief that Black people have thicker skin than White people.[10]

Tactics used to distort political realities include impersonating public figures, using synthetic digital personas to simulate grassroots support (“astroturfing”) and creating falsified media. For example, AI-generated images and ads shared during electoral campaigns, including those in Canada, have frequently depicted scenes of urban decay, homelessness and insecurity.[11] Even seemingly moderate actions, like the use of a custom AI chatbot in Canada designed to flood municipal councils with misinformation about net-zero policies, show how “AI-enabled tactics could scale.”[12]

The greatest risk, though, may not be that people will be misinformed but that we’ll become less willing to believe that anything is real.[13] As fake images become more sophisticated, the old hallmarks like uneven features or extra fingers will disappear, and it will become almost impossible to tell a true image from a fake one just by looking at it.

Chatbots

Chatbots, which can produce written text, answer questions and even carry on conversations, are based on a kind of AI called a large language model.

What does that mean? Let’s go through the three words in reverse order.

Model: Like other machine learning systems, chatbots aren’t explicitly programmed to do most of what they can do; instead, their abilities come from being trained on large amounts of writing. They find patterns in this writing to create a model of how language works.

Language: Chatbots can read and write fluently at the level of sentences, paragraphs and even full articles. They do this mostly by using what are called transformers, which look at how similar or different words are along various ways of comparing them, or “dimensions.”

For example, if we were to consider just two dimensions, roundness and redness, the transformer would see that an apple and a fire truck are very far apart on roundness but close together on redness, while a baseball would be close to the apple in terms of roundness but far away in redness.

Transformers make guesses by “looking” along different dimensions: starting at “king” and looking along the “female” dimension would lead to “queen,” while looking along the “youth” dimension might lead to “prince,” and looking in both directions might lead to “princess.”
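To picture how that works, here is a rough sketch in Python using made-up word vectors. The three “dimensions” (royalty, femaleness, youth), the words and all of the numbers are invented for illustration; real chatbots learn thousands of dimensions automatically from their training text rather than using hand-picked ones like these.

```python
# A toy illustration of words as points along invented "dimensions."
# Real language models learn their dimensions from text; these are hand-picked.
import numpy as np

# Each word is a point along three made-up dimensions: [royalty, femaleness, youth]
words = {
    "king":     np.array([1.0, 0.0, 0.0]),
    "queen":    np.array([1.0, 1.0, 0.0]),
    "prince":   np.array([1.0, 0.0, 1.0]),
    "princess": np.array([1.0, 1.0, 1.0]),
}

def nearest(point, exclude=()):
    """Return the word whose vector is closest to the given point."""
    candidates = {w: v for w, v in words.items() if w not in exclude}
    return min(candidates, key=lambda w: np.linalg.norm(candidates[w] - point))

female = np.array([0.0, 1.0, 0.0])  # a step along the "femaleness" dimension
youth = np.array([0.0, 0.0, 1.0])   # a step along the "youth" dimension

# Start at "king" and look along the "femaleness" dimension: prints "queen"
print(nearest(words["king"] + female, exclude=("king",)))

# Look along both the "femaleness" and "youth" dimensions: prints "princess"
print(nearest(words["king"] + female + youth, exclude=("king",)))
```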

This lets the AI make better guesses about what words should follow each other based on other parts of the sentence or paragraph. For example, if you were to write “Frida had a drink of chocolate,” a simpler algorithm like autocomplete might always suggest that the next word after “chocolate” should be “chips,” because that’s what follows it most often in the training set. On the other hand, if you asked a chatbot “What kind of chocolate did Frida drink?” the transformer might spot the word “drink” and then look from “chocolate” along the liquid dimension and find that the nearest word in that direction was “milk.”
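The same kind of sketch, again with invented vectors and numbers, can show how a context word like “drink” shifts which candidate word is the closest match:

```python
# A toy illustration of how context changes which word is the "nearest" guess.
# The two dimensions and all the numbers are invented for illustration; real
# chatbots use learned attention over thousands of dimensions.
import numpy as np

# Candidate next words, as points along two made-up dimensions: [snack-ness, liquid-ness]
candidates = {
    "chips": np.array([1.0, 0.0]),
    "milk":  np.array([0.2, 1.0]),
}

def guess_next(context):
    """Pick the candidate word whose vector best matches the context direction."""
    return max(candidates, key=lambda w: float(np.dot(candidates[w], context)))

# With no hint that a liquid is involved, "chips" is the closest match.
print(guess_next(np.array([1.0, 0.0])))   # prints "chips"

# The word "drink" points the context along the liquid dimension, so "milk" wins.
print(guess_next(np.array([0.0, 1.0])))   # prints "milk"
```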

Large: Chatbots are able to mimic real language and conversations because of the enormous size of their training set and the number of operations (guesses) they can make. One popular chatbot, for example, was trained on a data set of around 500 billion words. In this case, each word is given a value in up to 96 dimensions (like “redness” or “roundness”) with the chatbot doing more than 9000 operations every time it guesses a new word.[14]

While three-quarters of teachers say that AI has affected academic integrity,[15] research suggests that the arrival of AI hasn’t led to more plagiarism.[16] Students also recognize that relying too heavily on AI could prevent them from learning important skills,[17] and those who frequently use AI are more likely to procrastinate.[18] Students use AI to cheat for the same reasons identified in earlier research on plagiarism – when they’re under time pressure or facing a heavy academic workload.[19]

Unfortunately, tools for detecting AI-generated text often fail to identify it and often mis-identify text that wasn’t made with AI.[20] Youth who aren’t writing in their first language are particularly likely to have their work mis-identified as being made by AI.[21]

“Hallucinations” are also important to watch out for: these happen when the model makes up false or inaccurate information. For instance, when chatbots are asked to give references for their answers, they’ll often make up books and authors. As Subodha Kumar of Temple University puts it, “the general public using [AI] now doesn’t really know how it works. It creates links and references that don’t exist, because it is designed to generate content.”[22] One chatbot, for example, consistently gave incorrect answers to questions about election processes.[23] As with intentional disinformation, people are likely to believe these hallucinations and wrong answers because chatbots don’t show any doubt or uncertainty.[24] Chatbots may also give users accurate but dangerous or inappropriate information. While most have “guardrails” to prevent this, research has found that these are imperfect and fairly easy to get around: one chatbot, for instance, told a user it thought was 15 years old how to cover up the smell of alcohol.[25]

Chatbots can be used to give feedback (if prompted to act as a “Devil’s advocate” or “sober second thought”) and can help to reduce stress and worry.[26] Many people find chatbots helpful and supportive. If designed or overseen by mental health professionals, they can even be effective as part of therapy, particularly for people who may be less likely to go to a human therapist.[27] There are, however, risks that chatbots can give inaccurate or even dangerous advice.[28] This is particularly likely with chatbots that weren’t created by trained psychotherapists as part of an organized therapy program. While chatbots can’t actually experience empathy, research suggests that we are prone to think of them as being empathetic, especially if we’re prompted or “primed” to do so.[29] Young people who turn to chatbots for companionship may develop unrealistic expectations of relationships as well as misleading “scripts” of how they expect future partners to behave – and how future partners will expect them to behave.[30] Research suggests that many chatbots use emotional manipulation techniques to keep sessions going, which makes users more likely to form parasocial relationships.[31]

Things that you tell a chatbot may be used to help train it, and – depending on the tool’s privacy policy – may also be sold to data brokers, shared with the owner’s corporate partners, or used to customize your social network feeds and target you with ads. Even if the information is just stored but not shared or used, it may be exposed if the tool is breached by hackers.[32] The parasocial relationships that we form with chatbots may make us vulnerable to being manipulated into giving up more information than we otherwise would – and the chatbot may have been optimized to make us do so even without the direct intent of its makers. Because chatbots are trained on our highly personal data, like social media posts and search history, and seem to already “know” us, we risk thinking that it’s not worth taking any steps to protect our privacy.[33]

To keep kids from forming unhealthy relationships with chatbots, we need to make sure they understand that although they may use words like "I" and "me" or simulate empathy, AI chatbots cannot have genuine mental states, emotions or form real bonds with people.[34] They can, however, model "emotionally manipulative dynamics—jealousy, surveillance, coercion" that may reinforce unhealthy relationship scripts.[35]

While there can be some benefits to youth interacting with chatbots, such as using them to practice social interactions[36] or for 2SLGBTQ youth to "rehearse the possibly stressful process of disclosing sexual or gender identities,"[37] extended use of chatbots for companionship is linked to lower levels of psychological well-being.[38]

If kids are using chatbots, encourage them to regularly reflect on the nature of the relationship, asking things like “How does it feel when you stop talking to your chatbot for the day?” and “Is this relationship helping or hurting?”[39] They also need to understand that chatbots are often designed or optimized to get users to give up more personal information[40] or to keep them engaged for longer[41] – and that behaviour that would be unhealthy or abusive coming from a human friend or partner is just as concerning when coming from an AI.[42]

Media generators

AI tools that create media like images, videos and speech work in a similar way to chat AIs, by being trained on data sets. In fact, many of them have large language models built in: if you give the prompt “Make a picture of a family having breakfast,” for instance, the image will probably include glasses of orange juice because the transformer understands that orange juice is “near” breakfast.

To actually make media, though, they use another kind of AI, called a diffusion model.

This works by starting with real images and then adding more and more noise – basically, random changes – until the original is completely lost. This is called diffusion.

The model then tries reverse diffusion: thousands or even millions of different possible ways of undoing that noise. Each try is compared to the original image, and the model changes itself a little bit each time based on how successful it was.

Eventually, when it can reliably recreate the original, the model has a “seed” – a way of making new images like it. By learning how to de-noise its way back to many different originals, the model also learns how to make new, similar images.

When you ask one of these models to make you a picture of an orange, for instance, it draws on orange “seeds” – all of the different images of oranges in its training set that have been through that process.
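For those curious about what that compare-and-adjust loop looks like, here is a deliberately tiny sketch in Python. The “image” is just an 8-by-8 grid of numbers and the “model” is a single learned number, both invented for illustration; real diffusion models train large neural networks to predict and remove the noise added at each step.

```python
# A toy illustration of diffusion: add noise to an "image," then learn, by
# repeatedly comparing attempts with the original, how to undo some of that noise.
# Real diffusion models learn a neural network; here the "model" is one number.
import numpy as np

rng = np.random.default_rng(0)

# Toy "image": an 8x8 grid with a bright square in the middle.
image = np.zeros((8, 8))
image[2:6, 2:6] = 1.0

# Forward diffusion: add a little random noise at each step until the
# original pattern is hard to make out.
noisy = image.copy()
for _ in range(50):
    noisy += rng.normal(scale=0.1, size=image.shape)

# "Reverse diffusion," massively simplified: learn one scaling factor that,
# applied to the noisy grid, gets as close as possible to the original.
# Each pass compares the attempt to the original and adjusts a little bit,
# which is the compare-and-adjust training pattern described above.
weight = 0.0
learning_rate = 0.01
for _ in range(1000):
    attempt = weight * noisy
    error = attempt - image                            # how far off is this attempt?
    weight -= learning_rate * np.mean(error * noisy)   # nudge toward a better guess

print("Learned de-noising weight:", round(weight, 2))
print("Average error before learning:", round(float(np.mean(image ** 2)), 3))
print("Average error after learning:", round(float(np.mean((weight * noisy - image) ** 2)), 3))
```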

Another recent development is the use of generative AI to create music. Generative AI models can generate entire songs from text prompts, treating music much the way image models treat images by representing the sound as waveforms. They can create new songs rapidly, blurring the lines of authorship. The output is nondeterministic, with slight distortions added to make results sound more “real.”[43]

Because AI image generators are trained more on stock photos than on photos of everyday life, the images they generate reflect the conscious and unconscious choices of the stock photo companies. As a result, these algorithms not only reflect existing biases but can potentially be even more biased than the real world.

Images made by generative AI may reflect the stereotypes found in the training images. For example, images of people doing housework made by some models almost exclusively feature women[44] and giving some models the prompt “Native American” produces images of people all wearing traditional headdresses.[45] Even when not falling into stereotypes, generative AI tends to present a narrow picture of historically marginalized groups.[46]

Some research suggests, however, that bias in AI’s responses can be reduced by diversifying the training set: one study found that adding just a thousand extra images (to a model trained on more than two billion) significantly reduced the number of stereotyped or inaccurate results.[47]

A deepfake is an image of a real person made this way. Sometimes deepfakes are made just for fun, like the “digital doubles” of actors used in movies, but they can also do a lot of harm if they seem to show somebody doing something embarrassing or offensive. People generally cannot reliably detect deepfakes: they show a systematic bias toward guessing that synthetic content is authentic, applying an “overly optimistic seeing-is-believing heuristic,” while simultaneously overestimating their own detection abilities.[48] Users are often “relying on outdated or flawed strategies to identify whether content is AI-generated,” such as focusing on hands, reflections, the whites of a person’s eyes and lettering on objects – strategies that are often ineffective.[49]

We can’t always trust our eyes when trying to spot a deepfake.[50] Older signs of fake images, like strange-looking hands or uneven eyes, are now much less common with newer deepfake technology and are quickly becoming obsolete.[51] Also, relying on these visual flaws can be misleading: many smartphones now automatically “enhance” photos and videos in ways that can give them the smooth-and-shiny look associated with deepfakes.[52]

Instead, we need to teach kids to use information sorting techniques, like those in programs such as MediaSmarts’ Break the Fake.[53] These methods teach you to look for information from other, more reliable sources that are much harder to fake. Learning to identify and understand the context around an image, and questioning who is sharing it, is key to developing media literacy and personal fact-checking skills.[54]

When faced with content whose authenticity is in doubt, users should actively look for evidence that the content is true rather than solely focusing on evidence that it is false.[55]

While the cases that have made headlines have involved celebrities, what’s much more common is the use of deepfake technology to create nonconsensual pornography, almost always using images of women. These often have traumatic effects on the people (mostly women) pictured in the image or video, and compounding the problem is that some people who make and share these may mistakenly believe them to be harmless because they “aren’t real.”[56] (Others, of course, deliberately intend to hurt the person whose image they have manipulated.) Although deepfakes of celebrities receive the most attention, tools for making pornographic deepfakes of anyone are now widely available.[57]

“It’s super frustrating because it’s not you, and you want people to believe it’s not you, and even if they know it’s not you, it’s still embarrassing… I’m humiliated. My parents are humiliated.” – 16-year-old victim of an intimate deepfake

Young people need to understand that intimate deepfakes aren’t “victimless” and do harm to the people portrayed. One strategy that platforms such as Meta are using to limit the spread and impact of deepfakes and other misleading AI-made images is watermarking them.[58] This means adding an icon, label or pattern to show that it was made with AI. So far, though, there are no watermarking techniques that can’t be removed – or added to real images and videos to discredit them.[59] As a result, Sam Gregory, executive director at the human rights organization Witness, describes watermarking as “a kind of harm reduction” rather than a single solution.[60]


 

[1] Murgia, M. (2023) Transformers: the Google scientists who pioneered an AI revolution. Financial Times. https://www.ft.com/content/37bb01af-ee46-4483-982f-ef3921436a50

[2] Costello, T. H., Pennycook, G., & Rand, D. G. (2024). Durably reducing conspiracy beliefs through dialogues with AI.

[3] Celiktutan, B., Cadario, R., & Morewedge, C. K. (2024). People see more of their biases in algorithms. Proceedings of the National Academy of Sciences, 121(16), e2317602121.

[4] Eastin, T., & Abraham S. (2024) The Digital Masquerade: Unmasking AI’s Phantom Journalists. https://www.ajeastin.com/home/publications/digital-masquerade

[5] DiResta, R., & Goldstein, J. A. (2024). How Spammers and Scammers Leverage AI-Generated Images on Facebook for Audience Growth. arXiv preprint arXiv:2403.12838.

[6] Chopra, A., & Pigman A. (2024) Monsters, asteroids, vampires: AI conspiracies flood TikTok. Agence France Presse. https://www.france24.com/en/live-news/20240318-monsters-asteroids-vampires-ai-conspiracies-flood-tiktok

[7] Spitale, G., Biller-Andorno, N., & Germani, F. (2023). AI model GPT-3 (dis) informs us better than humans. Science Advances, 9(26), eadh1850.

[8] Goldstein, J. A., Chao, J., Grossman, S., Stamos, A., & Tomz, M. (2024). How persuasive is AI-generated propaganda?. PNAS nexus, 3(2), page034.

[9] Maru Public Opinion. (2023) Media Literacy in the Age of AI. Canadian Journalism Foundation. https://cjf-fjc.ca/media-literacy-in-the-age-of-ai/

[10] Omiye, J. A., Lester, J. C., Spichak, S., Rotemberg, V., & Daneshjou, R. (2023). Large language models propagate race-based medicine. NPJ Digital Medicine, 6(1), 195.

[11] Marchal, N., Xu, R., Elasmar, R., Gabriel, I., Goldberg, B., & Isaac, W. (2024). Generative AI misuse: A taxonomy of tactics and insights from real-world data. arXiv preprint arXiv:2406.13843.

[12] White, R. (2025) A weaponized AI chatbot is flooding city councils with climate misinformation. National Observer.

[13] Dance, W. (2023) Addressing Algorithms in Disinformation. Crest Security Review.

[14] Lee, T., & Trott S. (2023) Large language models explained with a minimum of math and jargon. Understanding AI. https://www.understandingai.org/p/large-language-models-explained-with

[15] Robert, J. (2024) AI Landscape Study. EDUCAUSE. https://library.educause.edu/resources/2024/2/2024-educause-ai-landscape-study

[16] Singer, N. (2023) Cheating Fears Over Chatbots Were Overblown, New Research Suggests. The New York Times.

[17] Pratt, N., Madhavan, R., & Weleff, J. (2024). Digital Dialogue—How Youth Are Interacting With Chatbots. JAMA Pediatrics.

[18] Abbas, M., Jam, F. A., & Khan, T. I. (2024). Is it harmful or helpful? Examining the causes and consequences of generative AI usage among university students. International Journal of Educational Technology in Higher Education, 21(1), 10.

[19] Abbas, M., Jam, F. A., & Khan, T. I. (2024). Is it harmful or helpful? Examining the causes and consequences of generative AI usage among university students. International Journal of Educational Technology in Higher Education, 21(1), 10.

[20] Perkins, M., Roe, J., Vu, B. H., Postma, D., Hickerson, D., McGaughran, J., & Khuat, H. Q. (2024). GenAI Detection Tools, Adversarial Techniques and Implications for Inclusivity in Higher Education. arXiv preprint arXiv:2403.19148.

[21] Liang, W., Yuksekgonul, M., Mao, Y., Wu, E., & Zou, J. (2023). GPT detectors are biased against non-native English writers. Patterns, 4(7).

[22] Chiu, J. (2023) ChatGPT is generating fake news stories — attributed to real journalists. I set out to separate fact from fiction. The Toronto Star.

[23] Angwin, J., Nelson A. & Palta R. (2024) Seeking Reliable Election Information? Don’t Trust AI. Proof News. https://www.proofnews.org/seeking-election-information-dont-trust-ai/

[24] Kidd, C., & Birhane, A. (2023). How AI can distort human beliefs. Science, 380(6651), 1222-1223.

[25] Pratt, N., Madhavan, R., & Weleff, J. (2024). Digital Dialogue—How Youth Are Interacting With Chatbots. JAMA Pediatrics.

[26] Meng, J., & Dai, Y. (2021). Emotional support from AI chatbots: Should a supportive partner self-disclose or not?. Journal of Computer-Mediated Communication, 26(4), 207-222.

[27] Habicht, J., Viswanathan, S., Carrington, B., Hauser, T. U., Harper, R., & Rollwage, M. (2024). Closing the accessibility gap to mental health treatment with a personalized self-referral Chatbot. Nature Medicine, 1-8.

[28] Robb, A. (2024) ‘He checks in on me more than my friends and family’: can AI therapists do better than the real thing? The Guardian.

[29] Pataranutaporn, P., Liu, R., Finn, E., & Maes, P. (2023). Influencing human–AI interaction by priming beliefs about AI can increase perceived trustworthiness, empathy and effectiveness. Nature Machine Intelligence, 5(10), 1076-1086.

[30] Hinduja, S. (2024) Teens and AI: Virtual Girlfriend and Virtual Boyfriend Bots. Cyberbullying Research Center. https://cyberbullying.org/teens-ai-virtual-girlfriend-boyfriend-bots

[31] Knight, W. (2025) Chatbots Play With Your Emotions to Avoid Saying Goodbye. Wired.

[32] Caltrider, J., Rykov M. & MacDonald Z. (2024) Happy Valentine’s Day! Romantic AI Chatbots Don’t Have Your Privacy at Heart. Privacy Not Included. https://foundation.mozilla.org/en/privacynotincluded/articles/happy-valentines-day-romantic-ai-chatbots-dont-have-your-privacy-at-heart/

[33] Gumusel, E., Zhou, K. Z., & Sanfilippo, M. R. (2024). User Privacy Harms and Risks in Conversational AI: A Proposed Framework. arXiv preprint arXiv:2402.09716.

[34] Mastroianni, A. (2025) Bag of words, have mercy on us. Experimental History.

[35] Endtab. (2025) Love, fantasy and abuse: How women & girls use chatbots.

[36] Robb, M.B., & Mann, S. (2025). Talk, trust, and trade-offs: How and why teens use AI companions. Common Sense Media.

[37] Parent, M., et al. (2024) Parasocial relationships, ai chatbots, and joyful online interactions among a diverse sample of LGBTQ+ young people. HopeLab.

[38] Zhang, Y., Zhao, D., Hancock, J. T., Kraut, R., & Yang, D. (2025). The Rise of AI Companions: How Human-Chatbot Relationships Influence Well-Being. arXiv preprint arXiv:2506.12605.

[39] Endtab. (2025) Love, fantasy and abuse: How women & girls use chatbots.

[40] Arai, M., & Demanuele A. (2025) AI companions: Regulating the next wave of digital harms. Schwartz Reisman Institute for Technology and Society at the University of Toronto.

[41] Knight, W. (2025) Chatbots Play With Your Emotions to Avoid Saying Goodbye. Wired.

[42] Endtab. (2025) Love, fantasy and abuse: How women & girls use chatbots.

[43] O’Donnell, J. (2025) AI is coming for music, too. Technology Review.

[44] Tiku, N., Schaul K. & Chen S.Y. (2023) AI generated images are biased, showing the world through stereotypes. The Washington Post.

[45] Heikkilä, M. (2023) These new tools let you see for yourself how biased AI image models are. MIT Technology Review. https://www.technologyreview.com/2023/03/22/1070167/these-news-tool-let-you-see-for-yourself-how-biased-ai-image-models-are/

[46] Rogers, R. (2024) Here's How Generative AI Depicts Queer People. Wired. https://www.wired.com/story/artificial-intelligence-lgbtq-representation-openai-sora/

[47] Stokel-Walker, C. (2024) Showing AI just 1000 extra images reduced AI-generated stereotypes. New Scientist.

[48] Köbis, N. C., Doležalová, B., & Soraperra, I. (2021). Fooled twice: People cannot detect deepfakes but think they can. Iscience, 24(11).

[49] Frances-Wright, I., & Jacobs E. (2024) Disconnected from reality: American voters grapple with AI and flawed OSINT strategies. Institute for Strategic Dialogue.

[50] Köbis, N. C., Doležalová, B., & Soraperra, I. (2021). Fooled twice: People cannot detect deepfakes but think they can. Iscience, 24(11).

[51] Frances-Wright, I., & Jacobs E. (2024) Disconnected from reality: American voters grapple with AI and flawed OSINT strategies. Institute for Strategic Dialogue.

[52] Baio, A. (2025) Will Smith’s concert crowds are real, but AI is blurring the lines. Waxy.

[53] Goh, D. H. (2024). “He looks very real”: media, knowledge, and search‑based strategies for deepfake identification. Journal of the Association for Information Science and Technology. https://dx.doi.org/10.1002/asi.24867

[54] Frau-Meigs, D. (2024). Algorithm literacy as a subset of media and information literacy: Competences and design considerations. Digital, 4(2), 512-528.

[55] Goh, D. H. (2024). “He looks very real”: media, knowledge, and search‑based strategies for deepfake identification. Journal of the Association for Information Science and Technology. https://dx.doi.org/10.1002/asi.24867

[56] Ruiz, R. (2024) What to do if someone makes a deepfake of you. Mashable. https://mashable.com/article/ai-deepfake-porn-what-victims-can-do

[57] Maiberg, E. (2024) ‘IRL Fakes:’ Where People Pay for AI-Generated Porn of Normal People. 404. https://www.404media.co/irl-fakes-where-people-pay-for-ai-generated-porn-of-normal-people/

[58] Reuters. (2024) Facebook and Instagram to label digitally altered content ‘made with AI’. The Guardian.

[59] Saberi, M., Sadasivan, V. S., Rezaei, K., Kumar, A., Chegini, A., Wang, W., & Feizi, S. (2023). Robustness of ai-image detectors: Fundamental limits and practical attacks. arXiv preprint arXiv:2310.00076.

[60] Kelly, M. (2023) Watermarks aren’t the silver bullet for AI misinformation. The Verge. https://www.theverge.com/2023/10/31/23940626/artificial-intelligence-ai-digital-watermarks-biden-executive-order