Harvard Business Review: The Case for Using Small Language Models (SLMs)
From the Harvard Business Review:
SLMs have many competitive advantages over LLMs. Their architectures (Microsoft’s Phi-3, Google’s Gemma, Apple’s OpenELM, IBM’s Granite) are more compact, specialized, energy-efficient, and easier to deploy than LLMs, especially on local edge computing devices. SLMs have significantly fewer parameters, typically a few billion, compared with LLMs, which can contain hundreds of billions or even trillions. This substantial difference in size gives SLMs several advantages over LLMs, including lower computational requirements, faster training times, easier deployment, and more efficient performance in specific scenarios.
In this article we explore why SLMs are often better suited for many AI use cases, including agentic AI applications, and what business leaders should consider as they shape their AI strategies.
The remainder of the article is organized into the following sections:
- Speed and Efficiency Drive Competitive Advantage
- Specialization Can Outperform Generalization
- Agentic AI with More Control and Better Privacy
- Safer for Prototyping, Experimentation, and Embedding in Workflows
- What Executives Should Do with SLMs
Direct to Full Text (about 1,600 words)
Filed under: News
About Gary Price
Gary Price (gprice@gmail.com) is a librarian, writer, consultant, and frequent conference speaker based in the Washington, D.C. metro area. He earned his MLIS degree from Wayne State University in Detroit. Price has won several awards, including the SLA Innovations in Technology Award and Alumnus of the Year from the Wayne State University Library and Information Science Program. From 2006 to 2009 he was Director of Online Information Services at Ask.com.