What is AI?
AI (artificial intelligence) is a way of using computer algorithms to do things with little or no human involvement.
An algorithm is basically a series of steps or instructions for doing something. AI algorithms aren’t programmed directly but trained. This means they’re given a data set to learn from, such as a collection of millions of pictures or written texts. They find patterns or connections in the data set and use those to solve the problem they’ve been designed to solve.
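The "trained, not programmed" idea can be illustrated with a toy sketch. Instead of writing a hand-made rule for what counts as spam, the example below counts which words appear in a few labeled messages and lets those counts make the decision. All of the data, labels and scoring here are invented for illustration; real systems use far larger data sets and more sophisticated models.

```python
# Toy illustration of training vs. programming: no spam rule is written by hand.
# Instead, we tally which words appear under each label in example data,
# and let those learned tallies classify new messages.

from collections import Counter

# A tiny, made-up "data set to learn from."
training_data = [
    ("win a free prize now", "spam"),
    ("free money click now", "spam"),
    ("meeting notes attached", "not spam"),
    ("lunch tomorrow with the team", "not spam"),
]

# "Training": count how often each word appears under each label.
word_counts = {"spam": Counter(), "not spam": Counter()}
for text, label in training_data:
    word_counts[label].update(text.split())

def classify(text):
    # Score a new message by which label's learned words it shares more of.
    scores = {label: sum(counts[w] for w in text.split())
              for label, counts in word_counts.items()}
    return max(scores, key=scores.get)

print(classify("claim your free prize"))  # → spam (a learned pattern, not a written rule)
```

The point of the sketch is that changing the training data changes the behaviour, with no change to the code itself.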
“You don’t need to produce a precise list of instructions and communicate them… You give the machine data, a goal and feedback when it’s on the right track – and leave it to work out the best way of achieving the end.” Hannah Fry, Hello World
Many tools and platforms employ multiple algorithms with different purposes during what seems like a single interaction to a user or consumer. For example, if a user enters the query "bass" into Google, the search engine will likely classify the search (guessing whether the user is looking for information on bass fish or a bass guitar, for instance), partly by associating the user with their past searches and other data Google possesses about them (for example, fishing websites visited when using the Chrome browser, or emails about fishing sent through Gmail).
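A rough sketch of that kind of disambiguation follows. The signal names, values and weights are entirely invented; they stand in for whatever a real search engine infers from a user's history.

```python
# Hypothetical sketch: combine signals about a user to guess which sense
# of the ambiguous query "bass" they mean. All names and weights are invented.

user_signals = {
    "visited_fishing_sites": True,   # e.g. browsing history
    "emails_about_fishing": 2,       # e.g. inferred email topics
    "visited_music_sites": False,
    "emails_about_music": 0,
}

def guess_sense(signals):
    # Weight each sense by the user's accumulated signals.
    fish = signals["emails_about_fishing"] + (3 if signals["visited_fishing_sites"] else 0)
    music = signals["emails_about_music"] + (3 if signals["visited_music_sites"] else 0)
    return "bass (fish)" if fish >= music else "bass (guitar)"

print(guess_sense(user_signals))  # → bass (fish)
```

The same query produces different results for different users, which is exactly why past behaviour matters so much to these systems.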
AI algorithms are much more powerful and flexible than algorithms written by humans, but they’re also harder to analyze and understand. While we might know what data goes into an algorithm, and we can see what it produces, we can’t easily see the process that connects the two. This is why AI is sometimes referred to as a “black box.” Because the most sophisticated systems are able to change and adapt over time, even the people who make and operate them may not fully know how they work. As the cybernetician Stafford Beer put it, “it is not necessary to enter the black box to understand the nature of the function it performs”:[1] an algorithmic system, such as AI, can be judged by its results and effects rather than by its inner workings or the intent of its designers.
Even though AI isn’t programmed in the traditional way, people are still a necessary part of the training process. They give feedback by rating the quality of answers, captioning or annotating items in the data set, or testing to make sure that it doesn’t produce graphic, violent or otherwise inappropriate content.[2] AI “seems so human because it was trained by an AI that was mimicking humans who were rating an AI that was mimicking humans who were pretending to be a better version of an AI that was trained on human writing.”[3]
Even when we know what data goes into an algorithmic system and can see what it produces, the system or process itself ‘refuses to be known’ because of the value in its opacity.[4] This secrecy makes it difficult to fully understand the impacts of algorithmic systems. What we do know is that machine learning algorithms learn from behavioural data (what we do online), contextual data (our most recent interactions online) and profile data (who we (say we) are online).
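The three kinds of data named above can be sketched as inputs to a single recommendation score. The field names, weights and candidate topics below are invented for illustration only.

```python
# Rough sketch of behavioural, contextual and profile data feeding one
# recommendation score. All field names and weights are invented.

user = {
    "behavioural": {"videos_watched": ["cooking", "cooking", "travel"]},  # what we do online
    "contextual":  {"last_search": "cooking tips"},                       # recent interactions
    "profile":     {"stated_interests": ["music", "cooking"]},            # who we say we are
}

def score_topic(topic, data):
    score = data["behavioural"]["videos_watched"].count(topic)        # repeated behaviour
    score += 2 if topic in data["contextual"]["last_search"] else 0   # recent context
    score += 1 if topic in data["profile"]["stated_interests"] else 0 # stated profile
    return score

# Rank candidate topics for the next recommendation.
candidates = ["cooking", "travel", "music"]
print(max(candidates, key=lambda t: score_topic(t, user)))  # → cooking
```

Notice that behaviour accumulates with every action, so over time it can outweigh what a user says about themselves.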
A general concern about algorithms is their unintended or unanticipated consequences, including bias.[5],[6],[7] Algorithmic bias can be found across platforms; it enters algorithmic and AI systems because of pre-existing social, cultural and institutional inequities that shape decisions about how data are collected, selected, coded or used to train algorithms. Another concern is that systems that use AI or machine learning algorithms to make decisions, like recommendation algorithms, can create what author Dan Davies calls an “accountability sink” that “prevent[s] the feedback of the person affected by the decision from affecting the operation of the system.”[8]
Children and youth are interacting with algorithms and AI daily. AI is embedded in toys, video games, voice-activated assistants, learning apps and educational software. "Algorithms provide recommendations to children on what videos to watch next, what news to read, what music to listen to and who to be friends with."[9] Along with these direct interactions is the potential for children and youth to be indirectly affected by AI systems, as this technology is increasingly being integrated into decision-making in healthcare, education, welfare subsidies and housing applications.[10] [11] However, "little work has focused on how youth interact with and are likely to be affected by AI."[12] Research suggests we know even less about the long-term or future consequences of algorithms, especially for children and youth.[13]
Algorithmic literacy does not only refer “strictly to being able to read and write in code” but also involves “being aware of the presence of algorithms… and the increasing role they play, both for good and for bad.”[14] Building a “fundamental vocabulary” so youth can both understand and shape algorithms[15] empowers them “to exercise critical thinking in how they engage online, and to become proactive, creative users and makers rather than passive consumers.”[16] Given that young people “are growing up in a world surrounded by AI,” we need more robust algorithmic literacy tools, resources and programmes to help them become “more critical consumers,” to maintain control of their personal information and “motivate them to help shape [the] future” of this technology.[17]
Given the impact that AI has, and will continue to have, on the lives of youth in Canada, "it is critical that young people have a basic understanding of what AI is and the ethical, societal, and privacy- and safety-related implications of these technologies."
[1] Quoted in Davies, D. (2025). The Unaccountability Machine: Why Big Systems Make Terrible Decisions—and How the World Lost Its Mind. University of Chicago Press.
[2] Hao, K., & Seetharaman, D. (2023). Cleaning Up ChatGPT Takes Heavy Toll on Human Workers. The Wall Street Journal.
[3] Dzieza, J. (2023). AI Is A Lot of Work. New York. https://nymag.com/intelligencer/article/ai-artificial-intelligence-humans-technology-business-factory.html
[4] Gillespie, T., MacPhee, C., & Reddeb, J. (2020). Panel presentation on AI & algorithms. eQuality Project Annual General Meeting. Virtual. Retrieved from: www.equalityproject.ca/resources/ai-algorithms/
[5] Dignum, V., Pigmans, K., Vosloo, S., & Penagos, M. (2020). Policy guidance on AI for children. UNICEF. Retrieved from: www.unicef.org/globalinsight/media/1171/file/UNICEF-Global-Insight-policy-guidance-AI-children-draft-1.0-2020.pdf
[6] O'Neil, C. (2016). Weapons of math destruction: How big data increases inequality and threatens democracy. Broadway Books.
[7] Noble, S. (2018). Algorithms of oppression: How search engines reinforce racism. New York University Press.
[8] Davies, D. (2025). The Unaccountability Machine: Why Big Systems Make Terrible Decisions—and How the World Lost Its Mind. University of Chicago Press.
[9] Dignum, V., Pigmans, K., Vosloo, S., & Penagos, M. (2020). Policy guidance on AI for children. UNICEF, Office of Global Insight and Policy. Retrieved from: www.unicef.org/globalinsight/media/1171/file/UNICEF-Global-Insight-policy-guidance-AI-children-draft-1.0-2020.pdf
[10] Dignum, V., Pigmans, K., Vosloo, S., & Penagos, M. (2020). Policy guidance on AI for children. UNICEF, Office of Global Insight and Policy. Retrieved from: www.unicef.org/globalinsight/media/1171/file/UNICEF-Global-Insight-policy-guidance-AI-children-draft-1.0-2020.pdf
[11] In the Canadian context, the use of AI technology has been primarily in law enforcement (in predictive policing and social media surveillance), and there is not yet any confirmation that AI systems are being used in the social service sector.
[12] Hasse, A., Cortesi, S., Lombana-Bermudez, A., & Gasser, U. (2019). "Youth and artificial intelligence: Where we stand." Youth and Media, Berkman Klein Center for Internet & Society. Retrieved from: www.cyber.harvard.edu/publication/2019/youth-and-artificial-intelligence/where-we-stand
[13] See: Dignum, V., Pigmans, K., Vosloo, S., & Penagos, M. (2020). Policy guidance on AI for children. UNICEF. Retrieved from: www.unicef.org/globalinsight/media/1171/file/UNICEF-Global-Insight-policy-guidance-AI-children-draft-1.0-2020.pdf
[14] Oldridge, M. (2017). “Algorithmic Literacy.” Medium. Retrieved from: www.matthewoldridge.medium.com/algorithmic-literacy-1d9b5f087142
[15] Waters, J. (2020). “The Canadian Primer to Computational Thinking and Code: A Kids Code Jeunesse Introduction to Algorithmic Literacy.” Kids Code Jeunesse. Retrieved from: www.algorithmliteracy.org/data/resources/en/KCJ_Computational_Thinking_Primer_EN.pdf
[16] Kids Code Jeunesse & CCUNESCO. (2021). “The Algorithm Literacy Project.” Retrieved from: www.algorithmliteracy.org
[17] Hao, K. (2019). “Kids are surrounded by AI. They should know how it works.” Retrieved from: www.medium.com/mit-technology-review/kids-are-surrounded-by-ai-they-should-know-how-it-works-ae15756f1085