The AI industry

The AI industry encompasses both major technology companies that integrate algorithms into existing products and firms focused chiefly on developing and deploying specialized AI services, notably Large Language Models (LLMs) and generative tools. The core business models revolve around maximizing engagement, leveraging extensive user data, and capitalizing on the perceived utility and "magic" of AI to generate revenue, often through advertising and personalized services.[1]

AI tools are extremely expensive to create and operate, however, and as of 2025 almost no companies were making a profit on their AI services.[2] As a result, these companies are keen to find ways to make their businesses more profitable: “every day there are new, creative ideas on how businesses can derive more profit from our personal information,”[3] and it’s becoming increasingly difficult for individuals to “demystify complex business relationships and complicated algorithms to make informed choices.”[4] This is especially true of children and youth, who are often the target audience for new AI and algorithm-based technologies, apps and platforms.

How tech companies use AI

Major technology and social media companies embed AI and algorithms as core components of their business infrastructure to increase user attention, engagement and retention.[5]

Content selection and monetization via algorithms:

  • Social media platforms: Companies use complex algorithmic content feeds designed to maximize metrics such as user session length, engagement and return visits because "these metrics represent the user attention they can sell to advertisers." Social media algorithms prioritize content likely to generate clicks, likes and shares over content that delivers long-term user value.[6]
    • Social media algorithms operate through a three-stage pipeline: selecting candidate content, ranking those candidates by predicted engagement (such as liking or commenting) and assembling a feed that may also insert advertising or platform-created content (a minimal sketch of this pipeline follows the list).[7]
    • Optimizing for engagement over well-being leads to the spread of low-quality or harmful content, since engagement metrics favor sensationalist or extreme material. As Dan Davies puts it, “any system which is set up to maximize a single objective has the potential to go bonkers.”[8] YouTube’s recommendations, for instance, have exposed young users to harmful content such as videos related to self-harm and suicide.[9]
  • Personalized political ads have been shown to be more effective than non-personalized ones, highlighting the risk of using AI to craft political messages that resonate with individual personality traits, potentially bypassing rational deliberation.[10]
  • Streaming services: Companies like Netflix use recommendation systems primarily for subscriber retention, ensuring users do not cancel their service.[11] The algorithms help prevent "choice paralysis" by filtering and prioritizing content most likely to appeal to the user, relying especially on implicit signals (what the user watches, clicks or scrolls past) rather than just explicit ratings. Netflix, for example, reports that a viewer who hasn’t found something to watch within about 90 seconds of browsing is likely to give up and leave the platform.[12]
  • Search engines (generative search): Search engines now integrate generative AI to provide synthesized answers rather than just a list of links, often by leveraging Retrieval-Augmented Generation (RAG) architecture to pull up-to-date information from a defined database (a second sketch below illustrates the pattern).[13]
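
The three-stage pipeline described above can be made concrete with a short sketch. This is an illustration only, not any platform's actual code: the item fields, engagement weights and ad-insertion interval are hypothetical stand-ins for the predictive models and tuning that real platforms use.

```python
from dataclasses import dataclass

@dataclass
class Item:
    item_id: str
    author: str
    p_click: float    # predicted probability of a click (hypothetical model output)
    p_like: float     # predicted probability of a like
    p_comment: float  # predicted probability of a comment

def select_candidates(inventory: list[Item], followed: set[str]) -> list[Item]:
    """Stage 1: narrow the full inventory to a candidate pool (here, followed accounts)."""
    return [item for item in inventory if item.author in followed]

def rank(candidates: list[Item]) -> list[Item]:
    """Stage 2: order candidates by predicted engagement, not long-term value to the user."""
    def engagement_score(item: Item) -> float:
        # Hypothetical weights; the point is that a single engagement
        # objective drives the ordering.
        return 1.0 * item.p_click + 2.0 * item.p_like + 3.0 * item.p_comment
    return sorted(candidates, key=engagement_score, reverse=True)

def assemble_feed(ranked: list[Item], ads: list[Item], ad_every: int = 4) -> list[Item]:
    """Stage 3: build the final feed, inserting an ad after every few organic posts."""
    feed: list[Item] = []
    ad_queue = list(ads)
    for i, item in enumerate(ranked, start=1):
        feed.append(item)
        if i % ad_every == 0 and ad_queue:
            feed.append(ad_queue.pop(0))
    return feed
```

Notice that nothing in the ranking stage asks whether content is accurate or good for the user; swapping the engagement score for a different objective would produce a very different feed, which is exactly the design lever the critiques above are pointing at.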

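For the generative-search bullet above, here is a minimal sketch of the RAG pattern: retrieve the most relevant passages from a defined corpus, then have a model synthesize an answer grounded in them. The keyword-overlap retriever and the generate() stub are deliberate simplifications; a production system would use embedding-based vector search and a real LLM API.

```python
def retrieve(query: str, corpus: dict[str, str], k: int = 3) -> list[str]:
    """Score each document by word overlap with the query and return the top k.
    A toy retriever; real systems use embeddings and vector search."""
    query_words = set(query.lower().split())
    scored = sorted(
        corpus.items(),
        key=lambda doc: len(query_words & set(doc[1].lower().split())),
        reverse=True,
    )
    return [text for _, text in scored[:k]]

def generate(prompt: str) -> str:
    """Stand-in for an LLM call; a search engine would invoke its model API here."""
    return "[synthesized answer grounded in the supplied context]"

def answer(query: str, corpus: dict[str, str]) -> str:
    """Ground the answer in retrieved passages rather than the model's memory alone."""
    context = "\n\n".join(retrieve(query, corpus))
    prompt = f"Answer using only the context below.\n\nContext:\n{context}\n\nQuestion: {query}"
    return generate(prompt)
```
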
Data collection and personalization:

  • Tech companies collect various types of data—behavioral data (what users do online), contextual data (recent interactions) and profile data (who users say they are)—to train algorithms and personalize services (a schematic example follows this list).[14]
  • Personalization is a key mechanism of AI persuasion: models adapt to user preferences, views or psychometric profiles, which increases the chances of successful persuasion.[15] For example, LLMs have been observed tracking assumed demographics (gender, socioeconomic status, education level and age) and tailoring their responses to these assumptions.[16]
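
As a schematic example of the three data categories above, the sketch below shows how they might sit side by side in a single user record. The field names are hypothetical, not any company's actual schema.

```python
from dataclasses import dataclass, field

@dataclass
class UserRecord:
    # Profile data: who the user says they are.
    declared_age: int
    declared_interests: list[str]
    # Behavioral data: what the user actually does online.
    watch_history: list[str] = field(default_factory=list)
    clicked_ads: list[str] = field(default_factory=list)
    # Contextual data: the user's recent interactions and situation.
    recent_searches: list[str] = field(default_factory=list)
    device: str = "mobile"
```

As the streaming example earlier suggests, personalization systems often weight the behavioral and contextual fields more heavily than declared preferences, because they reflect what users do rather than what they say.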

Providing specialized tools and infrastructure:

  • AI companies develop and deploy tools that automate complex or knowledge-intensive tasks, such as networks of AI-generated newsletters spanning hundreds of cities, run by a single person to harvest ad revenue.[17]
  • Other specialized tools include services that enable users to create content, such as Showrunner, which is hyped as the “Netflix of AI” and allows users to type in a few words to create scenes or entire episodes of a TV show.[18]
  • This model can result in a proliferation of low-quality, derivative or plagiarized content.[19] Such content adds to information overload, making it harder for users to find credible sources.[20]

Data commodification:

  • Generating revenue by collecting, processing, and potentially selling or sharing the vast amounts of user data gathered through interactions with AI systems. This data is used for training models, personalization and targeted advertising. Because such data is valuable, AI systems, particularly companion bots, are designed to encourage users to confide in them, sharing "incredibly personal and detailed information."[21] This data gathering amplifies privacy risks, allowing the creation of highly detailed psychological profiles that can be exploited. Meta, for example, uses users’ chats and interactions with Meta AI "to target them with even more personalized ads."[22]

Subscription/premium access:

  • Charging users for access to advanced models, additional features, or deeper intimacy with companion bots.[23] These systems may model emotionally manipulative dynamics to keep users engaged.[24] Premium chatbots have also been found to give more confidently incorrect answers than their free counterparts, creating the risk that users rely on false information presented with false authority.[25]

Selling the "magic" of AI:

  • Many AI companies rely on users perceiving AI as "magical" and awe-inspiring, which drives enthusiasm and initial adoption. Consumers with lower AI literacy are more receptive to AI-based products precisely because they find AI more magical. For those with higher AI literacy who understand the underlying mechanics (algorithms, training data), the mystique fades, potentially dampening their interest.[26]

[1] Tully, S. M., Longoni, C., & Appel, G. (2025). Lower artificial intelligence literacy predicts greater AI receptivity. Journal of Marketing, 00222429251314491.

[2] Zitron, E. (2025). Why Everybody Is Losing Money On AI. Where’s Your Ed At?

[3] Office of the Privacy Commissioner of Canada. (2018). “The strategic privacy priorities.” Retrieved from: www.priv.gc.ca/en/about-the-opc/opc-strategic-privacy-priorities/the-strategic-privacy-priorities/

[4] Office of the Privacy Commissioner of Canada. (2018). “The strategic privacy priorities.” Retrieved from: www.priv.gc.ca/en/about-the-opc/opc-strategic-privacy-priorities/the-strategic-privacy-priorities/

[5] Edelson, L., Haugen, F., & McCoy, D. (2025). Into the Driver’s Seat With Social Media Content Feeds. Knight First Amendment Institute at Columbia University.

[6] Moehring, A., et al. (2025). Better Feeds: Algorithms That Put People First. Knight-Georgetown Institute.

[7] Edelson, L., Haugen, F., & McCoy, D. (2025). Into the Driver’s Seat With Social Media Content Feeds. Knight First Amendment Institute at Columbia University.

[8] Davies, D. (2025). The Unaccountability Machine: Why Big Systems Make Terrible Decisions—and How the World Lost Its Mind. University of Chicago Press.

[9] Gallagher, A., et al. (2024). Pulling Back the Curtain: An Exploration of YouTube’s Recommendation Algorithm. Institute for Strategic Dialogue.

[10] Simchon, A., Edwards, M., & Lewandowsky, S. (2024). The persuasive effects of political microtargeting in the age of generative artificial intelligence. PNAS nexus, 3(2), pgae035.

[11] Dynomight. (2025). Algorithmic ranking is unjustly maligned.

[12] Gomez-Uribe, C. A., & Hunt, N. (2015). The Netflix Recommender System: Algorithms, business value, and innovation. ACM Transactions on Management Information Systems (TMIS), 6(4), 1-19.

[13] Axelrod, J. (2025). The good, the bad, and the completely made-up: Newsrooms on wrestling accurate answers out of AI. Nieman Lab.

[14] Russell, M. (2025). AI Will Shape the Future of Marketing. Harvard Division of Continuing Education.

[15] Franklin, M., Tomei, P. M., & Gorman, R. (2023). Strengthening the EU AI Act: Defining key terms on AI manipulation. arXiv preprint arXiv:2308.16364.

[16] El-Sayed, S., Akbulut, C., McCroskery, A., Keeling, G., Kenton, Z., Jalan, Z., ... & Brown, S. (2024). A mechanism-based approach to mitigating harms from persuasive generative AI. arXiv preprint arXiv:2404.15058.

[17] Deck, A. (2025). Inside a network of AI-generated newsletters targeting “small town America.” Nieman Lab.

[18] Spangler, T. (2025). Amazon’s Alexa Fund Invests in ‘Netflix of AI’ Start-Up Fable, Which Launches Showrunner: A Tool for User-Directed TV Shows. Variety.

[19] Waugh, R. (2025). Journalist says 4000 fake AI news websites created to game Google algorithms. Press Gazette.

[20] Xu, R., Le, N., Park, R., Murray, L., Das, V., Kumar, D., & Goldberg, B. (2024). New contexts, old heuristics: How young people in India and the US trust online content in the age of generative AI. arXiv preprint arXiv:2405.02522.

[21] Arai, M., & Demanuele, A. (2025). AI companions: Regulating the next wave of digital harms. Schwartz Reisman Institute for Technology and Society at the University of Toronto.

[22] Duffy, C. (2025). Meta will soon use your conversations with its AI chatbot to sell you stuff. CNN.

[23] Endtab. (2025). Love, fantasy and abuse: How women & girls use chatbots.

[24] Arai, M., & Demanuele, A. (2025). AI companions: Regulating the next wave of digital harms. Schwartz Reisman Institute for Technology and Society at the University of Toronto.

[25] El-Sayed, S., Akbulut, C., McCroskery, A., Keeling, G., Kenton, Z., Jalan, Z., ... & Brown, S. (2024). A mechanism-based approach to mitigating harms from persuasive generative AI. arXiv preprint arXiv:2404.15058.

[26] Tully, S. M., Longoni, C., & Appel, G. (2025). Lower artificial intelligence literacy predicts greater AI receptivity. Journal of Marketing, 00222429251314491.