Understanding AI and helping youth make the most of it 

By Melissa Racine, Media Education Specialist, and Tricia Grant, Director of Marketing and Communications at MediaSmarts 

Everywhere we turn, we’re hearing about artificial intelligence (AI). We already know AI is all around us – algorithms are suggesting what to watch and tools like ChatGPT and Midjourney are being used to generate the content we’re seeing.

But how many of us actually understand what algorithms even are? And if you’re a parent, guardian or teacher, are you prepared to teach youth how to use AI responsibly?

What do we mean when we talk about AI at a basic level, and what does it have to do with digital media literacy? 

Let’s start by breaking down a few key terms:

[Image: lines of computer code]

Algorithms: In computer science terms, algorithms are like sets of instructions for a computer (or app or website). They tell the computer what steps to follow to complete a task. Machine learning algorithms, including those that power AI tools, don’t have their rules written out step by step by programmers; instead, they are “trained” on data so that they find patterns and develop their own strategies to achieve whatever purpose they were made for.

Examples of machine learning algorithms include the recommendation algorithms that drive what you see on Netflix, the search results generated by Google, or the filters that decide what you see in your social media feeds. 
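For readers who want to see the difference in practice, here’s a deliberately tiny Python sketch (all names and numbers are invented for illustration). The first function is a traditional algorithm whose rule a programmer wrote by hand; the second derives its rule from example data instead:

```python
# A traditional algorithm: the programmer writes the rule explicitly.
def recommend_by_rule(hours_watched):
    # Hand-written rule: heavy watchers get documentaries, others get comedies.
    return "documentary" if hours_watched > 10 else "comedy"

# A (very simplified) machine learning approach: the rule comes from data.
def learn_threshold(examples):
    # examples: list of (hours_watched, liked_documentaries) pairs.
    doc_hours = [hours for hours, liked in examples if liked]
    com_hours = [hours for hours, liked in examples if not liked]
    # "Training": put the cutoff halfway between the two groups' averages.
    return (sum(doc_hours) / len(doc_hours) + sum(com_hours) / len(com_hours)) / 2

threshold = learn_threshold([(2, False), (3, False), (12, True), (15, True)])

def recommend_learned(hours_watched):
    return "documentary" if hours_watched > threshold else "comedy"

print(recommend_by_rule(8))   # "comedy" – fixed by the programmer
print(recommend_learned(8))   # "comedy" here, but it depends on the training data
```

Change the training data and the learned rule changes with it – no programmer has to rewrite anything. That, in miniature, is the shift from hand-coded algorithms to machine learning.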

AI: There’s no clear point at which machine learning algorithms become AI, or artificial intelligence, but some common traits are that they seem complicated or creative enough to do things we usually think only humans can do, and that they can learn and make data-driven decisions independently.

Generative AI: Generative AI is a type of AI that allows a computer to create its own content, such as text or images, based on patterns it has learned. For example, ChatGPT can generate responses to text inputs, mimicking a conversation, while image creators like Midjourney or DALL-E can make pictures. Other generative AI models that are currently available or in development can create human-sounding voices or realistic videos.
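Tools like ChatGPT are vastly more complex, but a toy text generator makes the basic idea concrete: learn which words tend to follow which in some training text, then produce new text from those patterns. This Python sketch illustrates the concept only – it is not how ChatGPT actually works internally:

```python
import random

# "Training": count which word follows which in a tiny sample text.
words = "the cat sat on the mat the cat ran on the rug".split()
patterns = {}
for current, following in zip(words, words[1:]):
    patterns.setdefault(current, []).append(following)

# "Generation": start with a word and repeatedly pick a plausible next word.
random.seed(1)  # fixed seed so this toy example is reproducible
word = "the"
output = [word]
for _ in range(7):
    word = random.choice(patterns.get(word, words))  # fall back if word was last
    output.append(word)

print(" ".join(output))  # e.g. "the cat sat on the mat the cat"
```

The output looks vaguely like the training text because it is built entirely from its patterns – which is also why generative AI reproduces whatever biases its training data contains.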

It’s important to learn how these AI technologies work so that we can feel empowered to use them responsibly and help kids use them too. Once we know how AI works, we can also recognize the biases and limitations that may be present, and we’re better able to navigate digital spaces with awareness and critical thinking.

While AI can be useful, it's essential to use it responsibly. Just like any tool, it can be used for both positive and negative purposes. As parents or teachers, it's important to guide children in using AI safely and ethically, making sure they understand its potential benefits and risks. 

What are some basic things we should know about how algorithms work? 

As mentioned above, algorithms are the programming instructions that guide a computer or program’s operations. It follows, then, that AI algorithms are instructions that guide a computer to learn from data and perform tasks on its own.

There are many different types of AI algorithms. Some are trained on labelled data and taught what to look for (supervised learning), and the technology uses this data to make predictions. An example is Google Maps’ traffic prediction – it uses both historical and real-time data to predict how long it will take you to get from Point A to Point B. Other examples are facial recognition and autocorrect.
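Here’s a minimal sketch of supervised learning using the scikit-learn library, loosely modelled on the traffic example (all numbers are invented). The key point is that the training data is labelled: every input comes with the answer we want the model to learn.

```python
from sklearn.linear_model import LinearRegression

# Labelled training data: each trip (distance in km, hour of day)
# comes with its label – the minutes the trip actually took.
trips = [[5, 8], [5, 14], [20, 8], [20, 14], [10, 17]]
minutes = [18, 10, 55, 35, 40]

model = LinearRegression()
model.fit(trips, minutes)  # supervised: the model sees inputs AND answers

# Predict the travel time for a trip the model has never seen.
print(model.predict([[15, 8]]))  # estimated minutes for 15 km at 8 a.m.
```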

Other algorithms are given data without being taught what to look for (unsupervised learning). These algorithms are programmed to identify patterns or associations within the raw data and make predictions based on these patterns. An example of unsupervised learning in AI algorithms is Netflix’s movie/TV show suggestions – the raw data is your viewing history (as well as the viewing histories of people Netflix thinks are similar to you) and the AI algorithm makes recommendations based on identified viewing patterns. Another example of this is targeted advertising, which decides which ads to show you based on things that the platform thinks it knows about you.
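By contrast, here’s an unsupervised sketch with scikit-learn: the data has no labels, and the algorithm is only asked to find groups of similar viewers on its own (the genres and hours are made up for illustration):

```python
from sklearn.cluster import KMeans

# Unlabelled data: hours each viewer spent on [documentaries, comedies].
# Nobody tells the algorithm what the groups "mean" – it just finds them.
viewing = [[12, 1], [10, 2], [11, 0], [1, 9], [0, 12], [2, 10]]

groups = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(viewing)
print(groups)  # e.g. [0 0 0 1 1 1] – two clusters of similar viewers

# A recommender could then suggest to each viewer whatever
# others in the same cluster have watched.
```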

One more type of AI algorithm is reinforcement learning, where the algorithm takes an action and adapts (learns) based on the response it gets from the environment. An example of this is adaptive cruise control – the car executes an action (speeding up), takes in data from the environment (an object is impeding its path, for example) and adjusts based on that feedback (slows down).
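Full reinforcement learning systems learn a strategy from rewards over many trials; this deliberately simplified Python loop only shows the core feedback cycle the cruise-control example describes – act, observe the environment, adjust – with invented numbers:

```python
def observe_gap(gap, speed, lead_car_speed):
    # Environment feedback: how the gap to the car ahead changes.
    return gap + (lead_car_speed - speed)

speed, gap, lead_car_speed = 100, 50, 90  # km/h and metres (invented)

for step in range(5):
    gap = observe_gap(gap, speed, lead_car_speed)  # take in data
    if gap < 40:
        speed -= 5   # feedback says we're too close: slow down
    elif gap > 60:
        speed += 5   # plenty of room: speed up
    print(f"step {step}: speed={speed} km/h, gap={gap} m")
```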

AI algorithms rely on data input. This is why so many companies want your data – the more they know about you and how your data compares to others’, the more accurate their algorithms are, and the more they can charge for advertising or subscriptions. 

Due to the relationship between AI algorithms and the humans who create them, AI algorithms can learn and amplify the inherent and systemic biases that exist in society. Biases can be found in training data, the algorithm, and the predictions it makes. An example of these biases can be seen in predictive policing tools. Some of these tools use historical arrest and crime data, which reflects the systemic racism in our justice system (Canadian article). Other tools use facial recognition; however, studies show that while facial recognition very accurately identifies white males, it has a large margin of error for people of color, and especially Black women (Canadian article). Generative AI can be even more biased because it is often trained on data sets that are already more biased than reality, like stock photo libraries.
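A tiny, invented example shows how this happens even when nothing in the code mentions bias: if the training data is skewed, the patterns the model learns – and reproduces – are skewed too.

```python
from collections import Counter

# Deliberately skewed, made-up "training data": 90% of the example
# images of doctors happen to be labelled "man".
training_labels = ["man"] * 90 + ["woman"] * 10

counts = Counter(training_labels)

def generate_doctor_image():
    # A naive generator reproduces the most common pattern it saw.
    return counts.most_common(1)[0][0]

print(generate_doctor_image())  # "man" – the data's skew becomes the output
```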

How can we help kids use generative AI tools responsibly? And what about plagiarism?  

Here are some ways to help kids use AI tools responsibly: 

  • Teach kids the basics of how generative AI programs work. Learning about AI with a trusted adult gives kids a safe space to explore how generative AI works in a responsible way.
  • Talk about the ethical concerns related to generative AI, including bias, plagiarism (stealing artists’ work for AI image generation, for example), deepfakes and data privacy violations. For example, you can discuss what would be an ethical use of generative AI in school. Is it ethical to have ChatGPT critique your essay but not write it? Could you use DALL-E to make images for a school presentation if you’re not being marked on the quality of the art?
  • Encourage a critical perspective of generative AI, which equips kids with the tools necessary to critically engage with these programs and media that may have been generated by them. (Make sure kids understand the ways in which AI, and generative AI in particular, can produce biased results.) This empowers kids to make informed decisions regarding the programs they use and the data they share. 
  • Set rules regarding generative AI in the same way you would other applications, whatever that might look like for your home (time limits, discussions of the types of generative content they can use, supervised use), and discuss consequences for misuse. 
  • Ensure kids are using age-appropriate tools and talk to them about privacy concerns related to chatbots, such as the one Snapchat uses.
  • Stay informed regarding any changes introduced to the programs.
  • New AI tools like ChatGPT raise new concerns around plagiarism, but many of the approaches to dealing with plagiarism remain the same as they were when students were just copying and pasting existing material from the internet. See our article on Responding to Plagiarism for more information. 

It’s easier than ever to create fake images and video. How can we tell what's really true online, and how can we help our kids learn this?  

This is a great place to remind kids that pictures and videos that elicit strong emotions should be questioned. There’s an ad out now from the CSE/Government of Canada that says, “If it raises your eyebrow, it should raise questions.”

A lot of the advice you’ll find on the internet suggests checking for inaccuracies in the person presented – a mouth that isn’t moving in sync with the words, missing teeth, inaccurate body parts (crossed eyes, extra fingers) and so on – but as deepfakes get better, these tell-tale signs are becoming less and less frequent. Also, looking for errors can make you see deepfakes that aren’t there: some people really do have six fingers on each hand!

As with any story you aren’t sure about, it’s always a good idea to use these four steps to check to see if something is true before you believe it and especially before you share it. You may only need to do one of these steps! 

Here are a few quick tips:

  • For images, try a reverse image search to find out whether the image has been shared by a reputable source. TinEye will let you sort by date or “most changed,” so you can find out if an old image is being repurposed for a new conflict.
  • Check to make sure that the source is credible (read its Wikipedia entry to make sure it has a good track record and a process for maintaining accuracy and correcting mistakes). 
  • Check a fact-checker site or engine – MediaSmarts has created a custom search that lets you search all the fact-checkers for a story in one place.

What are some of the benefits of AI tools and how can we use them to their maximum potential?  

It’s natural to feel skeptical about AI, as with any new technology, but AI tools come with some benefits: 

  • They can foster creativity and innovation and give youth an opportunity to create – whether it’s with images, storytelling, film or music. 
  • They can contribute to authentic learning and have real-life applications. They can help with projects that require a lot of repetition, particularly in STEM. 
  • They can help with collaborative learning. The iterative process inherent in generative AI can encourage critical thinking and problem solving. 
  • They can make learning and asking questions more accessible (without having to sort through pages of Google hits), though with the downside that you need to verify what is generated. Even so, they can provide a great place to start building foundational knowledge.
  • They can make our lives easier with features like voice-to-text, object/person detection (when taking pictures), facial recognition (photo apps in phones), personalized recommendations (music apps, streaming apps), language translation and virtual helpers (Alexa, Siri).
  • They can be a tool for brainstorming ideas.
  • They can be helpful resources for learners whose first language isn’t English or French. 
  • They’re accessible to those with diverse needs. 

In the best cases, when used to their maximum potential, these tools can give kids a safe space to explore! Parents and guardians should sit with them and explore together, and teachers can look at new ways of integrating AI learning tools into the classroom.

Teaching kids about AI tools – how they work, their ethics and responsible use – and encouraging them to explore and become familiar with these tools allows youth to expand their knowledge and create new things, while also empowering them to avoid manipulation by AI.

MediaSmarts tools and information related to AI: 

For the classroom

Research