Youth Perspectives on the Use of Artificial Intelligence in Paediatric Healthcare

Consultative Report

Artificial intelligence (AI) is rapidly becoming part of healthcare systems, including paediatric care. While AI has the potential to improve diagnosis, research and healthcare efficiency, it also raises important ethical questions about consent, privacy, trust and decision-making.

This consultative report highlights young people's perspectives on the use of AI in paediatric healthcare and outlines the conditions under which youth believe AI should be used in their care. The report was prepared by MediaSmarts in collaboration with the McCradden Lab as part of a Canadian Institutes of Health Research (CIHR) project exploring ethical AI use in children's healthcare.

Drawing on insights from youth aged 10–17 in a mixed-methods study (2021–2024) led by Dr. Melissa McCradden — including 28 interviews, a youth workshop and a survey of more than 300 young people — the report and its accompanying resources offer recommendations to support the responsible integration of AI in paediatric care while respecting children's digital rights.

Key Findings and Recommendations

Youth participating in this research generally expressed neutral to positive attitudes toward AI in healthcare but emphasized that its use must meet specific conditions to be acceptable. Four themes emerged from their perspectives, which inform four corresponding recommendations for the responsible integration of AI in paediatric care:

Autonomy and Consent: Young people want to know when AI is used in their healthcare and believe they should have meaningful opportunities to give or refuse consent. Healthcare providers should clearly explain whether AI is involved in decisions made about patient care, how the AI tool(s) work, and how patient data may be used.

Human Oversight: Youth felt strongly that AI should support healthcare professionals rather than replace them. AI can be a part of decision-making, but never a replacement for human judgment and interaction. Doctors and medical staff should review AI outputs and remain responsible for final decisions about patient care.

Trust and Transparency: Trust in healthcare institutions and practitioners is central to youth acceptance of AI. Open communication about how AI tools work, their limitations, and how they influence care decisions helps build that trust.

Data Privacy and Protection: Participants raised strong concerns about privacy and the long-term use of health data. Clear consent processes, strong safeguards against data breaches or misuse, and respect for patient control over personal data are essential.

Together, these themes outline the conditions young people believe should guide the responsible integration of AI into paediatric healthcare while respecting children's digital rights.

Resources
To support conversations about AI in healthcare, MediaSmarts developed additional practical resources based on the research:

Guide for Healthcare Practitioners: What to Know, What to Ask, What to Say: A Guide for Healthcare Practitioners and Decision-makers Considering the Use of AI

This guide provides recommendations for clinicians and healthcare organizations integrating AI into patient care. It outlines key ethical principles — including autonomy and consent, human oversight, transparency and data protection — and offers practical questions practitioners should consider when evaluating or implementing AI tools.

Questions Patients and Families Can Ask: Ten Questions to Ask About AI in Healthcare

As AI becomes more common in healthcare settings, patients and families may want to ask questions about how these tools are being used in their care. This resource helps patients understand their rights and encourages informed discussions with healthcare providers about AI use, data privacy, and human oversight in medical decisions.