Lexicon ex Machina
small language model /smɔːl ˈlæŋ.ɡwɪdʒ ˈmɒd.əl/
noun
1. A human. “We replaced the chatbot with a small language model—an intern who reads email.”
2. (technical) A language model with relatively few parameters, typically under 10 billion, optimized for efficiency over capability. “The small language model ran on-device but thought the capital of France was ‘baguette.’”
3. (deprecating) A person whose responses are predictable, formulaic, or suspiciously on-brand. “He’s a small language model—give him any input, he outputs ‘let’s circle back.’”
Derivatives
SLM abbr. Small language model. “The SLM handled autocomplete; anything harder went to the cloud.”
smol adj. (informal) Affectionate diminutive for small models, implying endearing incompetence. “The smol model tried its best but hallucinated an entire API.”
Usage Note (sense 1) The joke relies on the observation that humans are, in fact, language models—trained on data, prone to hallucination, occasionally useful, frequently overconfident. Unlike large language models, small language models require wages, sleep, and emotional validation.
Usage Note (sense 2) “Small” is relative and shifts over time. Models once considered large are retroactively reclassified as small when larger models emerge. This ensures no model feels good about itself for long.
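For readers who want the joke in executable form, here is a minimal, purely illustrative Python sketch of sense 2 and the note above: it applies the rough 10-billion-parameter cutoff from sense 2, then pegs a hypothetical cutoff to a fraction of the largest model in a made-up fleet to show how yesterday's large models get reclassified as small. The function name, model names, sizes, and the 5% rule are all invented for this example.

```python
# Illustrative sketch only: classifies models as "small" using the rough
# 10-billion-parameter cutoff from sense 2, then demonstrates the
# reclassification effect described in the usage note. All model names
# and sizes below are hypothetical placeholders.

FIXED_CUTOFF = 10e9  # "typically under 10 billion" (sense 2)

def is_small(params: float, cutoff: float = FIXED_CUTOFF) -> bool:
    """Return True if a model counts as 'small' under the given cutoff."""
    return params < cutoff

# Hypothetical fleet of models, sizes in parameters.
fleet = {
    "on-device-assistant": 3e9,
    "mid-tier-chatbot": 70e9,
    "frontier-behemoth": 1.8e12,
}

# Fixed cutoff: only the 3B model counts as small.
print({name: is_small(p) for name, p in fleet.items()})

# "Small" is relative and shifts over time: if the cutoff is instead pegged
# to, say, 5% of the current largest model (an assumption made up for this
# sketch), yesterday's large 70B model is retroactively reclassified as small.
relative_cutoff = 0.05 * max(fleet.values())
print({name: is_small(p, relative_cutoff) for name, p in fleet.items()})
```

As the usage note warns, re-running the second check after a bigger frontier model ships will keep moving the goalposts, which is the point.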
Antonyms large language model, foundation model, that thing burning $50M/month in compute
See Also edge deployment, on-device inference, human in the loop
Origin 2020s. Sense 1 emerged as gallows humor among ML engineers; sense 2 from genuine industry terminology; sense 3 as office slang.

