New Report: “The State of Generative AI Use in Canada 2025: Exploring Public Attitudes and Adoption Trends”
The report linked below was recently shared on Figshare.
From the Social Media Lab (Ted Rogers School of Management at Toronto Metropolitan University):
The State of Generative AI Use in Canada 2025, authored by Dr. Anatoliy Gruzd, Philip Mai, and Anthony Clements Haines of Toronto Metropolitan University, draws on a census‑balanced survey of 1,500 adults conducted between February 19 and March 1, 2025. The report charts the nation’s rapid adoption of text‑, image‑, audio‑ and video‑generation technologies while spotlighting growing public unease around ethics, privacy, and job security.
Key Findings
- Adoption—but mostly casual: Two‑thirds of Canadians (66%) have tried a GenAI tool, yet only about 30% use them daily or weekly for leisure, work, or study. Leisure remains the primary entry point, especially for older adults, while younger Canadians lead usage for study and work.
- Knowledge & skills gap: Only 38% feel confident they can use GenAI effectively or keep up with new developments. On a seven‑item quiz, respondents averaged just 2.5 correct answers, and 51% admit they have little to no understanding of how AI companies handle their data.
- AI in the newsroom: A majority believe news outlets already rely on GenAI for editing (57%), translation (56%) and data analysis (51%); 43% think AI writes entire articles. Comfort with AI‑generated content is highest for lifestyle and entertainment topics and lowest for politics, crime, and international affairs.
- Election anxiety: Two‑thirds (67%) worry GenAI could sway election outcomes, and 59% say they no longer fully trust political news online because of possible AI manipulation. More than half (54%) are unlikely to use chatbots for election information, though openness is higher among right‑leaning Canadians (34%) than left‑leaning ones (23%).
- Mixed outlook, strong oversight demand: Canadians are split on GenAI’s net societal impact (39% positive, 34% negative, 27% neutral), but unite around key concerns—security and privacy (72%), reliability of information (68%), job displacement (68%), and effects on higher education (68%). An overwhelming majority back regulation: 78% want companies held liable for harms caused by AI tools, with 77% supporting rules for current AI capabilities and 76% for future ones.
Recommendations
The report highlights actionable guidance, including:
- Policy: Enact transparent data‑handling standards and mandatory risk assessments for high-impact deployments of GenAI tools.
- Education: Integrate GenAI literacy into K‑12 and post‑secondary curricula to bolster critical thinking and technical skills.
- Industry: Develop clear disclosure practices so users understand when and how GenAI is involved in content creation.
Direct to Complete Announcement
Direct to Full Text Report
Authors: Anatoliy Gruzd, Philip Mai, Anthony Clements Haines
27 pages; PDF.
About Gary Price
Gary Price (gprice@gmail.com) is a librarian, writer, consultant, and frequent conference speaker based in the Washington, D.C. metro area. He earned his MLIS degree from Wayne State University in Detroit. Price has won several awards, including the SLA Innovations in Technology Award and Alumnus of the Year from the Wayne State University Library and Information Science Program. From 2006 to 2009 he was Director of Online Information Services at Ask.com.