Steffen Becker Ruhr University Bochum, Germany Max Planck Institute for Security and Privacy
Greta Ontrup University of Duisburg-Essen, Germany
Source: arXiv, DOI: 10.48550/arXiv.2505.10490
Abstract
As the use of Large Language Models (LLMs) by students, lecturers, and researchers becomes more prevalent, universities, like other organizations, are pressed to develop coherent AI strategies. LLMs as-a-Service (LLMaaS) offer accessible pre-trained models that can be customized to specific (business) needs. While most studies prioritize data, model, or infrastructure adaptations (e.g., model fine-tuning), we focus on user-salient customizations, such as interface changes and corporate branding, which we argue influence users' trust and usage patterns. This study serves as a functional prequel to a large-scale field study in which we examine how students and employees at a German university perceive and use their institution's customized LLMaaS compared to ChatGPT. The goals of this prequel are to stimulate discussion of the psychological effects of LLMaaS customizations and to refine our research approach through feedback. Our forthcoming findings will deepen the understanding of trust dynamics in LLMs, providing practical guidance for organizations considering LLMaaS deployment.