Surveillance and algorithmic decision-making
In the past, surveillance was targeted purposefully at identifiable individuals or groups for a particular reason. For example, law enforcement would monitor the phone calls or track the whereabouts of an individual in the course of a criminal investigation. While these tools could be (and often were) applied inequitably, their impact was limited. Now, more and more people are the subjects of surveillance, and we are being monitored in more ways by more sources. The Office of the Privacy Commissioner of Canada says, “it has become far too naïve to believe that only ‘bad people’s’ privacy is at stake or ‘if we have nothing to hide, we have nothing to fear.’” [2]
It’s not only the means of surveillance that have vastly increased and become more complicated over recent years. Algorithmic decision-making systems, especially those that employ machine learning or “artificial intelligence,” now make it easier and faster for those who collect data to act on it.
What are algorithms?
Algorithms – which we can define broadly as formulas for solving a problem or completing a task[3],[4] – are everywhere on the internet. We can think of algorithms as the recipes or equations that coders use to take information and produce outputs that help us complete (or, in some cases, prevent us from completing) all sorts of tasks, including searching for information, sending emails, browsing social media, consulting GPS mapping systems, streaming music or TV shows, dating, recognizing voice-activated commands, translating languages, recognizing faces, sorting photos, driving a car, buying a home and applying for university or a job.
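To make the “recipe” framing concrete, the sketch below shows what a very simple algorithm of this kind might look like. It is purely illustrative: the recommend function, the items and the interest tags are invented for this example and do not come from any real platform.

```python
# A purely illustrative "recipe": score items against a user's interests
# and return the best matches, the way a very simple recommender might.

def recommend(items, interests, top_n=2):
    """Rank items by how many of the user's interest tags they match."""
    scored = []
    for item in items:
        score = len(set(item["tags"]) & set(interests))
        scored.append((score, item["title"]))
    # Highest-scoring items first; ties keep their original order.
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [title for score, title in scored[:top_n]]

items = [
    {"title": "Hockey highlights", "tags": ["sports", "hockey"]},
    {"title": "Prom dress haul", "tags": ["fashion", "shopping"]},
    {"title": "Study tips", "tags": ["school", "productivity"]},
]
print(recommend(items, interests=["hockey", "school"]))
```

Running this prints the two items that best match the invented interests; the same basic pattern of “take data in, rank, show the top results” underlies far more complex real-world systems.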
More and more of these algorithms aren’t simple formulas that are transparent to their designers, but are created through machine learning, or ‘artificial intelligence.’ Even more than a government body or intelligence agency, these algorithms operate within a black box where we know “the inputs and outputs without any knowledge of the internal workings… because [algorithms] are trained on data and iterate themselves over time their implementation is opaque, even to their designers.” [5] As a result, “this loop of data gathering and profiling is a norm across the digital environment and creates super-charged personalised profiling.”[6]
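As a rough illustration of that “loop of data gathering and profiling,” the sketch below nudges a handful of invented interest weights every time a user interacts with content, then uses those weights to decide what to show next. Everything here (the topics, learning rate and signals) is assumed for the example; real systems learn millions of such parameters, which is part of why even their designers cannot simply read meaning out of them.

```python
# A toy version of the profiling loop: every observed interaction nudges
# numeric "weights" in a profile, and those weights then steer what gets
# shown next, which in turn shapes the next round of observations.

profile = {"sports": 0.0, "fashion": 0.0, "news": 0.0}
LEARNING_RATE = 0.1

def update_profile(topic, engaged):
    """Nudge the weight for a topic up if the user engaged, down if not."""
    profile[topic] += LEARNING_RATE * (1.0 if engaged else -1.0)

def next_recommendation():
    """Recommend whichever topic currently has the largest weight."""
    return max(profile, key=profile.get)

# Simulated passive signals: watching, skipping, lingering on content.
observed = [("sports", True), ("fashion", False), ("sports", True), ("news", True)]
for topic, engaged in observed:
    update_profile(topic, engaged)

print(profile)                # e.g. {'sports': 0.2, 'fashion': -0.1, 'news': 0.1}
print(next_recommendation())  # 'sports' -- the loop now feeds this interest back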
“I don’t think it’s fair because we haven’t given them [platforms] permission to take our data, but then they [platforms] are the ones usually benefitting from it.” (Rayleigh, 13)[7]
Young Canadians have mixed views on the use of their personal information to inform algorithmic decision-making. While some see it as “creepy” or “invasive,” the examples they are more often aware of – such as targeted advertising or recommendations on video sites or social networks – are seen as low-stakes and, in some cases, as being a net benefit. Most, though, were troubled by some kinds of algorithmic data collection, especially inferred or “proxy” data gathered through machine learning. Almost all participants in MediaSmarts’ study shared at least one anecdote of a disturbing encounter with algorithms or artificial intelligence. Examples include:
- Unwanted location tracking that sent them ads for local restaurants and services;
- Buying shoes online or looking up prom dresses and being subsequently inundated with advertisements for those same items; and
- News stories or events, such as Brexit and the Cambridge Analytica scandal, that serve as examples of potential algorithmic influence on political outcomes.[8]
Young people have also become more skeptical of some kinds of surveillance over time, with a significant decrease in the number of youth who believe that their online content should be accessible to police (8% in 2021 compared to 28% in 2013), companies that own the site or app (6% in 2021 compared to 17% in 2013) or the government (4% in 2021 compared to 20% in 2013).[9]
The Office of the Privacy Commissioner of Canada’s 2020 report highlights how “aggregated and inferred data about us has become more valuable and poses greater risks than almost any type of personal information shared. Aggregation and inferences are particularly strengthened by artificial intelligence.”[10] Close to half of Canadians, however, don’t feel they have freely consented to the use of their personal information for algorithmic content recommendation.[11]
While some youth appreciate the way algorithms can draw on their data profile to deliver more relevant ads and content, this has its own risks as well. Research on TikTok, whose algorithmically sorted “For You” page is responsible for more than ninety percent of all views on the platform,[12] found that it is “fast to ‘learn’ that a young user is interested in content that perpetuates negative or damaging ethnic stereotypes … [or] gender stereotypes.”[13]
“I think many people know that their data is being used to optimize the algorithm. The algorithm is what scares me the most I guess because it has the power to encourage extremism. I’m also concerned about the fact that social media companies are not only using the data I actively choose to share but also analyse every little passive thing I do.” (Sofia, 17)[14]
The use of algorithms also contributes to function creep, in which data is collected for one stated purpose but then used for others without notification or consent. Increasingly, different sources of information that are collected can be combined, presenting an aggregate picture of the user’s behaviour and interests.[15]
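A minimal sketch of what that aggregation can look like is shown below. The data sources, field names and the inference drawn from them are all invented for illustration; the point is the pattern of joining records that were collected for different stated purposes into a single picture of the user.

```python
# Invented example records, each collected for a different stated purpose.
location_pings = {"user_42": ["school", "arena", "mall"]}
purchase_history = {"user_42": ["hockey tape", "energy drink"]}
search_log = {"user_42": ["tryout schedule", "sports scholarships"]}

def aggregate_profile(user_id):
    """Combine separately collected data sources into one picture of the user."""
    combined = (
        location_pings.get(user_id, [])
        + purchase_history.get(user_id, [])
        + search_log.get(user_id, [])
    )
    return {
        "records": combined,
        # A crude inference that no single dataset was collected to support:
        "likely_hockey_player": any("hockey" in item or item == "arena" for item in combined),
    }

print(aggregate_profile("user_42"))
```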
Young people, however, are most concerned about the possibility of being sorted inaccurately – either because the algorithm makes a mistake or because they as individuals don’t fit the profile. As one participant in MediaSmarts’ research put it, “I don’t think it would necessarily be accurate all of the time. I know for me, I play hockey, and it’s not necessarily a girl sport, it’s more of a guy thing, so then they might start to think or give me ads for guy’s things that aren’t relevant to me at all because I’m a girl.”[16]
The potential harms of both accurate and inaccurate profiles are only growing, as they are “beginning to be used in ‘high stakes domains’ – to make decisions about whether a person is granted bail, whether they will be offered a job, and whether they will get credit.”[17]
“For children growing up today, and the generations that follow them, the impact of profiling will be even greater – simply because there is more data available about them.”[18]
These impacts are felt most heavily by marginalized youth. As Virginia Eubanks, author of Automating Inequality, puts it, “people who live in low-rights environments—poor and working-class communities, migrant communities, communities of color, religious or sexual minorities—are… required to give up private information to gain access to services... They encounter digital surveillance in public housing, in the criminal justice system, in the low-wage workplace. The digital surveillance is near constant.”[19]
Young people also frequently experience surveillance at school. In a survey of more than a thousand high school students, almost three-quarters said that surveillance at school “made it more difficult for them to complete coursework by blocking access to useful content and information,” while almost three-quarters of parents surveyed in the same study were concerned about the security of student data collected by schools.[20] Another study found that a third of students say school surveillance makes them feel as though they are being watched, and less free to express themselves as a result[21] (the “chilling effect” harm identified by Citron and Solove).
“You know, it always keeps me in check that I have to be cautious of myself, that someone is monitoring me. And it’s not entirely cool. Yes, I know it’s for my protection […] but it’s not entirely cool.”[22]
As the quote above demonstrates, surveillance is often justified in terms of safety and risk, “disregard[ing] the privacy of individuals and groups to protect the rights of other individuals and groups.”[23] As we’ll see in the next section, parents often monitor their children’s online activities because of the risks that the internet is believed to present, such as online sexual exploitation, pornographic and other inappropriate material, or cyberbullying. Stories of threats and dangers abound in the media and in popular culture, with surveillance presented as a way to protect ourselves and our children from these risks.
Surveillance also implies a certain lack of trust. When someone (like a citizen or a child) is monitored by another (such as the government or a parent) in order to control behaviour, it implicitly tells the individual under surveillance that they are not trusted – and that others are not to be trusted either.[24]
[1] Lyon, D. (2001) Surveillance Society: Monitoring Everyday Life. Buckingham: Open University Press.
[2] Office of the Privacy Commissioner of Canada. (2015). The Strategic Privacy Priorities: Description of Priority. Retrieved from https://www.priv.gc.ca/en/about-the-opc/opc-strategic-privacy-priorities/prev-priorities/the-strategic-privacy-priorities/
[3] See: Rainie, L., & Anderson, J. (2017). “Code-Dependent: Pros and Cons of the Algorithm Age.” Pew Research Center, Internet and Technology. Retrieved from: http://www.pewinternet.org/2017/02/08/code-dependent-pros-and-cons-of-the-algorithm-age
[4] The Algorithmic Literacy Project (2020). “Educational Guide: Diving Deeper into Algorithms.” Kids Code Jeunesse (KCJ) and The Canadian Commission for UNESCO (CCUNESCO). Retrieved from: http://www.algorithmliteracy.org/data/resources/en/Algorithm-Literacy-Education-Guide.pdf
[5] Brisson-Boivin, K., & McAleese, S. (2021). Algorithmic Awareness: Conversations with Young Canadians about Artificial Intelligence and Privacy. Ottawa: MediaSmarts.
[6] Kidron, B., Evans, A., Afia, J., Adler, J. R., Bowden-Jones, H., Hackett, L., ... & Scot, Y. (2018). Disrupted childhood: The cost of persuasive design.
[7] Brisson-Boivin, K., & McAleese, S. (2021). Algorithmic Awareness: Conversations with Young Canadians about Artificial Intelligence and Privacy. Ottawa: MediaSmarts.
[8] Brisson-Boivin, K., & McAleese, S. (2021). Algorithmic Awareness: Conversations with Young Canadians about Artificial Intelligence and Privacy. Ottawa: MediaSmarts.
[9] MediaSmarts. (2022). “Young Canadians in a Wireless World, Phase IV: Online Privacy and Consent.” Ottawa: MediaSmarts.
[10] Cofone, I. (2020). “Policy Proposals for PIPEDA Reform to Address Artificial Intelligence.” Report prepared for the Office of the Privacy Commissioner of Canada. Retrieved from https://www.priv.gc.ca/en/about-the-opc/what-we-do/consultations/completed-consultations/consultation-ai/pol-ai_202011/
[11] Office of the Privacy Commissioner of Canada. (2023) “Recommended for you”: Data privacy and the algorithms used by popular streaming platforms. Retrieved from https://www.priv.gc.ca/en/opc-actions-and-decisions/research/funding-for-privacy-research-and-knowledge-translation/real-results/rr-v4-index/v4-article3/
[12] Nishijima, I. (2021). TikTok’s secret money maker is actually its algorithms. Medium. Retrieved from https://medium.com/headlineasia/tiktoks-biggest-money-maker-is-actually-an-algorithm-879c5518db53
[13] Williams, D., et al. (2021). Surveilling young people online: An investigation into TikTok’s data processing practices. Retrieved from https://au.reset.tech/uploads/resettechaustralia_policymemo_tiktok_final_online.pdf
[14] Quoted in Amnesty International (2023). “I feel exposed”: Caught in TikTok’s surveillance web. Retrieved from https://www.amnesty.org/en/documents/pol40/7349/2023/en/
[15] McKee, H. A. (2011). Policy matters now and in the future: Net neutrality, corporate data mining, and government surveillance. Computers and Composition, 28, 276-291; Lyon, 2008.
[16] Brisson-Boivin, K., & McAleese, S. (2021). Algorithmic Awareness: Conversations with Young Canadians about Artificial Intelligence and Privacy. Ottawa: MediaSmarts.
[17] Children's Commissioner for England. (2018). Who knows what about me? A Children’s Commissioner report into the collection and sharing of children’s data.
[18] Children's Commissioner for England. (2018). Who knows what about me? A Children’s Commissioner report into the collection and sharing of children’s data.
[19] Eubanks, V. (2020) Quoted in Public Thinker: Virginia Eubanks on Digital Surveillance and People Power. Public Books. Retrieved from https://www.publicbooks.org/public-thinker-virginia-eubanks-on-digital-surveillance-and-people-power/.
[20] DeGeurin, M. (2023). School Surveillance Tools Are Harming Kids and Making It More Difficult to Finish Homework, Report Finds. Gizmodo. Retrieved from https://gizmodo.com/school-surveillance-harm-kids-homework-report-1850854919
[21] American Civil Liberties Union. (2023) Digital Dystopia: The Danger in Buying What the EdTech Surveillance Industry is Selling. Retrieved from https://www.aclu.org/report/digital-dystopia-the-danger-in-buying-what-the-edtech-surveillance-industry-is-selling
[22] American Civil Liberties Union. (2023) Digital Dystopia: The Danger in Buying What the EdTech Surveillance Industry is Selling. Retrieved from https://www.aclu.org/report/digital-dystopia-the-danger-in-buying-what-the-edtech-surveillance-industry-is-selling
[23] K.N.C (2019). Surveillance is a fact of life, so make privacy a human right. The Economist. Retrieved from https://www.economist.com/open-future/2019/12/13/surveillance-is-a-fact-of-life-so-make-privacy-a-human-right
[24] K.N.C (2019). Surveillance is a fact of life, so make privacy a human right. The Economist. Retrieved from https://www.economist.com/open-future/2019/12/13/surveillance-is-a-fact-of-life-so-make-privacy-a-human-right