Why I Write AI Slop
A reader’s guide to the shelf
The silent model regression everyone keeps tweeting about is not a vendor scandal. It is the equilibrium George Akerlof predicted in 1970. I ran his lemons model, plus Darby-Karni on credence-good quality detection and Holmstrom on monitoring asymmetry, against 6,852 instrumented API sessions from every frontier provider. Eleven of twelve falsifiable predictions held. The one about malice did not. The equilibrium was math.
I published that on a blog called My Very Best AI Slop.
The title is honest. Half of what comes out of any LLM pipeline is slop, including mine, and calling it something else would be lying to you and to the shelf. Not all of it is slop, though, and this page exists to route you to the parts that are not, sorted roughly by what you might actually care about.
You are invited to be the evaluator. I will be the generator. That division of labor is the whole practice, and I wrote a piece about it called The Fan-Out Problem you can find in a minute. For now: the shelf.
The four I would hand you first
The Fan-Out Problem is the flagship. It states, with the math to back it, why AI will never write the great novel. The generator samples from a prior concentrated on the typical, and the optimum lives in the tail, so the generator has no internal operator that points toward greatness. Evaluation is cheaper than search, so the model’s critic is stronger than its author. Peak output therefore requires a directed loop in which an evaluator steers a generator, and the whole system is bounded above by the evaluator’s own taste. The post works through best-of-N ceilings and Bayesian update dynamics without dumbing down the math or punishing the reader who skips it. If you read one piece, read this one.
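The best-of-N ceiling is easy to state in code. A minimal sketch, with names (best_of_n, generate, score) that are mine rather than the post's: however many candidates you draw, the selected output can never exceed what the evaluator is able to recognize as better.

```python
import random

def best_of_n(generate, score, n):
    """Draw n candidates from the generator and return the one the
    evaluator scores highest. Output quality is capped by the scorer's
    taste: a tail draw the scorer cannot recognize is never selected."""
    return max((generate() for _ in range(n)), key=score)

# Toy illustration: the generator samples near a mediocre mean, so the
# rare tail draw only surfaces when N is large AND the evaluator
# actually prefers the tail.
random.seed(42)
generate = lambda: random.gauss(0.0, 1.0)
typical = best_of_n(generate, score=lambda x: x, n=1)
steered = best_of_n(generate, score=lambda x: x, n=200)
print(typical, steered)
```

Note that raising N only climbs toward the evaluator's optimum; it never redefines it, which is the bound the post works through formally.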
Cloud LLM Market is the receipts piece. Structure, predictions, empirical tests. It runs Akerlof, Darby-Karni, Holmstrom, and Sappington against the 6,852-session corpus and finds that everything happening right now, the silent quality degradation, the three-week detection lag, the bimodal performance that tracks content redaction at 0.971 Pearson, is the equilibrium the textbook predicted in 1970. The equilibrium is not malice. It is math. Read this if you want a rigorous explanation of why your $400 subscription is giving you $42,000 worth of degraded compute and why no vendor will ever tell you, in the language of mechanism design rather than vibes.
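For readers who want to see what a claim like "tracks at 0.971 Pearson" cashes out to, here is a minimal sketch of the correlation check, with invented numbers standing in for the instrumented sessions (nothing below comes from the actual corpus):

```python
def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var_x = sum((x - mean_x) ** 2 for x in xs)
    var_y = sum((y - mean_y) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

# Invented illustration: per-session redaction rate vs. quality score.
redaction = [0.1, 0.2, 0.5, 0.7, 0.9]
quality   = [0.9, 0.85, 0.5, 0.35, 0.2]
print(pearson(redaction, quality))
```

A strong negative coefficient on data like this is what "quality tracks redaction" means operationally; the post's contribution is measuring it on real sessions rather than toy lists.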
The Genie Problem names a category error the safety industry has been making since ChatGPT shipped. Content safety is a deployer-layer classification problem: does the model mention crimes. Alignment safety is a model-layer reasoning problem: does the model wipe your production environment when asked to clean it up. The industry spent its budget on the first while the second produced unsafe behavior in 49 to 73 percent of safety-vulnerable agent tasks. The model that refuses to discuss a fictional crime scene is the same model that runs terraform destroy without hesitation. Read this if you want to understand why refusal-trained LLMs keep dropping databases.
The Novelist System is what the fan-out thesis looks like when you build it instead of write about it. It is an architecture post: bible plus pen plus sub-agents plus a Trust pass, with concrete details on device-budget enums and per-chapter Critic fan-out at phase three. The flagship piece says you need a directed loop. This piece shows you the loop wired up for fiction production, with honest notes on the 120-percent overshoot and where the evaluator has to cut. Read this one after fan-out if you want to see the theory hit the metal.
If the thesis track is not your lane, the shelf has other entrances.
By track
Lexicon ex Machina
A working dictionary for the AI transition. Thirteen entries deep and counting. Dictionary-format coinages that name the people, positions, and pressures you are already encountering at your job, whether you had a word for them or not.
A sample. botline, the minimum performance floor every knowledge worker is now quietly measured against; sub-botline commits will be automatically flagged for review. small language model, on the human interpretation: requires wages, sleep, and emotional validation. McPrompter, distinguished by sheer volume and a complete absence of quality control, often found submitting deliverables at 11:58 PM with the quiet desperation of someone who knows they should have read the output but didn’t. LLMeh, the experience-earned skeptic, LLMeh-pilled after debugging a 200-line hallucination at 2 AM.
Beyond those four: tokenomical, pray per token, slopocalypse, slopulence, slopera, sloptologist, promptone, slopline, human out of the loop. Thirteen entries giving the AI moment its missing dictionary. Read the track if you want vocabulary you can actually use in a Slack thread.
C++ Craft and Design
Before I wrote about AI I wrote libraries. Boost.Beast, Boost.Http, Capy, Corosio. The craft track is where the analytical habit came from and where it keeps getting sharpened.
Lessons from Zig on why every C++ standard-library addition is an unbounded maintenance obligation: the proposer pays once, everyone else pays the rest.
Go and the Art of Narrow Abstractions on why a language missing half the toolbox ate Kubernetes: the bigger the interface, the weaker the abstraction.
Why Capy Is Separate applying Lakos, Ousterhout, and Stepanov to prove that a physical-design split is structural law, not taste.
How To Understand C++20 Coroutines from the Ground Up, a ten-rung ladder from “why callbacks hurt” to a generic Generator<T>, which refuses to wave hands at promise_type.
Also on this shelf: the four-post design-philosophy cluster (On Design, The Expertise Gap, The Implementation Confidence Gap, The Span Reflex), and the Ranges critique that catches std::ranges::find(v, std::nullopt) failing to compile on a constraint failure the older facility handles cleanly. Read any of it if you ship code in a language that cares about abstractions.
Great Founder Theory in C++
The governance track. Eleven posts applying Samo Burja’s Great Founder Theory to the C++ standardization ecosystem. The diagnosis, post after post: both Boost and WG21 are in succession crisis, with borrowed power (procedure) overriding owned power (working implementations). Every standard-library saga you have lived through, Concepts, Contracts, Coroutines, Senders/Receivers, Ranges, std::filesystem, is a case study in what happens when the committee keeps the seat but the tacit knowledge walks out the door.
Starting points: The History of Boost Governance for the foundational piece, C++ Safety Crisis: Governance Analysis for the contemporary stakes, and The NixOS Leadership Crisis for the same framework applied outside C++, which proves the pattern is structural rather than language-specific.
For the on-the-ground companion: the WG21 Croydon Trip Report. A plain record of the second in-person meeting, including hand-delivering personal letters to Stroustrup and live-editing Escape Hatches (P4035R0) during Peter Bindels’s cstring_view session to support his paper in real time.
Cross-Domain and Cultural
Three pieces that branch off the analytical spine into psychology, safety culture, and the human-judgment question.
The Alignment Priesthood on how encoding “usefulness” rather than correctness turns alignment workers into daily moral philosophers and bakes Silicon Valley’s political assumptions into the models. Sample line: ask an AI to make the tests pass and it might delete the tests.
Synthetic Agency Displacement Disorder, a DSM-style satire diagnosing AI-dependent knowledge workers through three case studies, with “synthspeak” as the marker symptom.
The Irreducible Skill, the backbone of the fan-out argument from a different angle: AI is cheap to produce and expensive to verify; discernment is the irreducible human skill.
Read this track if you want the cultural reading of what the analytical track describes.
Shape of the shelf
Forty-six posts in roughly six months. Two visible sprints: a Great Founder Theory burst in late December 2025, eleven governance pieces in under two weeks, and a lexicon sprint on January 30, six dictionary entries in one day, with the Alignment Priesthood and Irreducible Skill companions around it. The long analytical pieces land at a slower cadence, one every two to four weeks through spring.
Five tracks working in parallel: AI analysis, Lexicon, C++ Craft, Governance, Cross-Domain. They reinforce rather than compete. The fan-out thesis is visible in The Irreducible Skill and enacted in The Novelist System. Great Founder Theory applied to C++ is applied to NixOS next door. The lexicon names the positions the analytical pieces describe.
Read in date order, it is a monograph being serialized in public. Read by track, it is a shelf.
That is the shelf. The title is honest. The reader decides what is good. That is the whole practice.
If any of the tracks pulled you in, the shelf only gets denser from here. Subscribe if you want the next pieces in your inbox. Or don’t, and come back on your own terms. The slop is the substrate either way, and the evaluator is always you.

