On Design
Design Elements
The Door
You already know what good design feels like. You have known since childhood.
A well-designed door has a flat plate where you push and a handle where you pull. You never think about it. You walk through. A badly designed door has identical handles on both sides, and you watch people yank on it three times before they realize they need to push. These doors now carry Don Norman's name - the world calls them Norman doors - and he built an entire field of study around a single observation: when people fail to use something correctly, the problem is almost never the person. The problem is the design.
“Most people make the mistake of thinking design is what it looks like. People think it’s this veneer - that the designers are handed this box and told, ‘Make it look good!’ That’s not what we think design is. It’s not just what it looks like and feels like. Design is how it works.”
-- Steve Jobs, 2003
This paper is about how things work. Not how they look, not how clever they are under the hood, not how many features they have. It is about the practice of making things that serve people - things that a newcomer can pick up and use, that an expert can trust under pressure, and that the next person who reads your code can follow without a guide.
The examples here come from software, and from C++ in particular. But the principles are older than computing. They apply to doors, to prose, to institutions, and to every artifact that humans make for other humans to use.
Omit Needless Parts
William Strunk Jr. wrote a rule so compressed it almost disappears:
“Vigorous writing is concise. A sentence should contain no unnecessary words, a paragraph no unnecessary sentences, for the same reason that a drawing should have no unnecessary lines and a machine no unnecessary parts.”
-- William Strunk Jr., The Elements of Style
A sentence, a drawing, and a machine. Three different things, one principle. Remove what does not earn its place. Antoine de Saint-Exupery said the same thing about airplanes:
“Perfection is achieved, not when there is nothing more to add, but when there is nothing left to take away.”
-- Antoine de Saint-Exupery, Airman’s Odyssey
Dieter Rams spent forty years at Braun distilling this into a single commandment. The last of his ten principles of good design:
“Good design is as little design as possible. Less, but better.”
-- Dieter Rams
Ken Thompson, who created Unix, understood this at a level most programmers never reach:
“One of my most productive days was throwing away 1,000 lines of code.”
-- Ken Thompson
Chuck Moore, the inventor of Forth, made it a discipline. His operating system was 1,000 instructions. His CAD package was 5,000. His mantra was three words: factor, factor, factor - break things into the smallest pieces that make sense, solve the specific problem you have, and never write code for situations that will not arise in practice. He held that code is typically “orders of magnitude too elaborate” for what it actually does.
The instinct to add is natural. The discipline to remove is learned. Every feature you add is a feature someone must learn, a feature someone must maintain, and a feature that can break. The cost of inclusion is permanent. The cost of omission is usually nothing.
Simple is Not Easy
Rich Hickey drew a distinction that most developers have never considered. In his Strange Loop keynote, he separated two words that English lets us confuse:
Simple means one thing. One role, one concept, one responsibility. It comes from the Latin simplex - one fold, one braid. Its opposite is complex: braided together, intertwined.
Easy means nearby. Familiar. Within reach. It is relative to the person. What is easy for you is hard for someone else.
The mistake developers make - the mistake that produces most of the bad software in the world - is choosing easy over simple. They reach for the familiar tool instead of the correct one. They add a quick fix instead of finding the right abstraction. They confuse “I understand this” with “this is well-designed.”
“There are two ways of constructing a software design: one way is to make it so simple that there are obviously no deficiencies, and the other way is to make it so complicated that there are no obvious deficiencies. The first method is far more difficult.”
-- C.A.R. Hoare, 1980 Turing Award Lecture
Hoare is telling you that simplicity is expensive. It requires more thought, not less. It demands that you understand the problem deeply enough to find its essential shape and discard everything else.
“Controlling complexity is the essence of computer programming.”
-- Brian Kernighan, Software Tools
Rob Pike put it differently when explaining why Go deliberately omits features that other languages accumulate. His talk was titled Simplicity is Complicated, and the title is the thesis: making something simple for users requires absorbing complexity yourself. The work does not disappear. It moves from the user to the designer.
That is what design is. It is the act of absorbing complexity so that someone else does not have to.
Start With What People Write
Here is the most important principle in this paper: begin with the code your user will write.
Not the framework. Not the concepts. Not the architecture diagram. The actual line of code at the actual call site. If you cannot write that line first, you do not yet understand the problem. Before proposing any abstraction, implement the use case end-to-end. If you cannot demonstrate working code that a user would actually write, the design is speculative.
Consider what a programmer wants when reading bytes from a network connection:
auto [ec, n] = co_await sock.read_some(buf);
One line. A structured binding. An error code and a byte count. The programmer writes their algorithm, not their execution machinery. Compare that with the callback-based alternative it replaced:
socket.async_read_some(buffer,
    [&](error_code ec, size_t n) {
        if (!ec) {
            process(buffer, n);
            socket.async_read_some(buffer,
                [&](error_code ec, size_t n) {
                    // deeper and deeper...
                });
        }
    });
The logic is identical. The first version is design. The second is what happens when nobody designs the user experience.
Good design follows this pattern. std::from_chars and std::to_chars convert numbers to and from strings. They do not allocate. They do not throw. They do not consult the locale. They take a character range and return a result. They do one thing, and they do it in the way you would write it by hand if you were careful:
char buf[32];
auto [ptr, ec] = std::to_chars(buf, buf + sizeof(buf), 42);
No ceremony. No framework. Just the operation.
std::format tells the same story. Victor Zverovich built the {fmt} library, deployed it in production across Blender, PyTorch, MongoDB, and dozens of other projects, and then proposed it for standardization. The standard formalized what had already proven successful in operational use. It checks format strings at compile time. It is type-safe. It is faster than both printf and iostream. It emerged from practice, not theory.
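The call site shows why it won. A minimal sketch (the values are illustrative):
int n = 3; double elapsed = 0.42;
std::string s = std::format("{} requests in {:.2f}s", n, elapsed);
// std::format("{:d}", "text");  // ill-formed: the format string is checked at compile time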
Now consider std::async. You call it expecting to launch work in the background:
std::async(std::launch::async, [] { do_work(); });
Surprise: this blocks. The destructor of the returned std::future waits for the task to complete. Discarding the return value - something that should be harmless - turns asynchronous code into synchronous code. The most natural way to use the API is the wrong way to use it.
Or consider std::regex. The standard imposed no performance requirements. Implementations arrived that were 15 to 40 times slower than PCRE, RE2, or Boost.Regex. The libstdc++ implementation segfaulted on valid patterns for years. This is what happens when a standard specifies behavior without reference to how anyone will actually use it.
Start with the call site. Work backward. Everything else follows.
The Kingdom of Nouns
Steve Yegge wrote a satirical allegory in 2006 about a kingdom ruled by nouns, where verbs - the things that actually do work - were second-class citizens. His target was Java, but the disease is universal. It is the belief that if you add enough layers of abstraction, enough managers managing managers, enough factories building factories, the design will be good.
It will not be good. It will be AbstractSingletonProxyFactoryBean.
“When you go too far up, abstraction-wise, you run out of oxygen. Sometimes, smart thinkers just don’t know when to stop, and they create these absurd, all-encompassing, high-level pictures of the universe that are all good and fine, but don’t actually mean anything at all.”
-- Joel Spolsky, “Don’t Let Architecture Astronauts Scare You”
C++ has its own kingdom of nouns. Consider what happens when you want to inspect the contents of a std::variant:
// You want to do this:
if (v holds an int) { use the int; }
if (v holds a string) { use the string; }
// What C++ makes you write:
std::visit(overloaded{
    [](int i) { use(i); },
    [](const std::string& s) { use(s); }
}, v);
The standard does not provide the overloaded helper. You must write it yourself using variadic templates and parameter pack expansion - the canonical incantation appears below. As one developer put it:
“It’s completely bonkers to expect the average user to build an overloaded callable object with recursive templates just to see if the thing they’re looking at holds an int or a string.”
Rust solves the same problem with match. C++ solves it by making you construct a noun.
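The helper itself is only two lines - a well-known idiom, reproduced here for reference:
template<class... Ts> struct overloaded : Ts... { using Ts::operator()...; };
template<class... Ts> overloaded(Ts...) -> overloaded<Ts...>; // deduction guide, unnecessary since C++20
Two lines is not much. But the user should not have to know they exist.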
Then there is allocator_arg_t. The idea was to let users pass custom allocators to standard types. The result is viral signature pollution that infects every function in the call chain:
// What the programmer’s algorithm looks like:
task<> serve(socket& sock) {
    auto [ec, n] = co_await sock.read_some(buf);
}
// What allocator_arg_t makes it look like:
task<> serve(std::allocator_arg_t, Alloc alloc, socket& sock) {
    auto [ec, n] = co_await sock.read_some(
        std::allocator_arg, alloc, buf);
}
The handler’s purpose is identical. The allocator adds nothing to its logic - it is a cross-cutting concern being threaded through the interface. The pollution compounds through a call chain. The allocator support in std::function was so badly specified that it was removed entirely. GCC never implemented it. libc++ silently ignored the arguments. Three major implementations, three different behaviors, none of them correct.
std::ranges introduced another flavor of the same disease: over-constraint. Consider a simple search:
struct Packet {
    int seq_num;
    bool operator==(int seq) const { return seq_num == seq; }
};
std::vector<Packet> packets{{1001}, {1002}, {1003}};
auto it = std::ranges::find(packets, 1002); // FAILS
Pre-ranges std::find handles this. std::ranges::find rejects it because its concept constraints demand that the value type and the comparand share a common reference. The theoretical requirement blocks the practical use case.
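For comparison, the call that has worked since C++98:
auto it = std::find(packets.begin(), packets.end(), 1002); // compiles, finds the packet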
When a user must write more code to use the abstraction than to do without it, the abstraction has failed its purpose. Constraints should enable use cases, not obstruct them.
And then there is iostream. Stateful formatting flags that persist across operations. A locale system entangled with every output operation. An operator overload mechanism that generates hundreds of candidates during overload resolution. It has been called hopelessly broken, and the description is accurate. The library tried to serve every use case and served none of them well.
The antidote to the kingdom of nouns is to ask one question: what does the user want to do? Start there. Everything that does not serve that answer is ceremony.
The Wrong Abstraction
Sandi Metz identified a pattern that every experienced developer has encountered but few have named:
“Duplication is far cheaper than the wrong abstraction.”
-- Sandi Metz, “The Wrong Abstraction”
Here is the cycle. A programmer sees duplicated code and extracts it into a shared function. Time passes. New requirements arrive that are almost the same. Rather than reconsidering the abstraction, developers add parameters and conditional logic. More requirements, more parameters. Eventually the shared function is a thicket of if statements that nobody dares touch, because the sunk cost of the abstraction has made it feel permanent. The abstraction was wrong, and the team kept paying for it.
Fred Brooks drew the deeper line:
“The hard part of building software is the specification, design, and testing of this conceptual construct, not the labor of representing it.”
-- Fred Brooks, “No Silver Bullet”
Brooks distinguished essential complexity - the irreducible difficulty of the problem itself - from accidental complexity - the difficulty we create through our tools and processes. Good design reduces accidental complexity. Bad design adds it.
std::filesystem::path on Windows is a study in accidental complexity. The string() member function converts the path through the system’s Active Code Page. If the path contains Unicode characters - and in 2025, of course it does - the conversion silently produces mojibake. A filename in Belarusian, Chinese, or Arabic becomes garbage. The function does not fail. It does not throw. It returns corrupted data and moves on. P2319R2 proposes deprecating it. A function that silently corrupts data is worse than a function that crashes. At least a crash tells you something is wrong.
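A sketch of the failure mode (the filename is illustrative; the exact corruption depends on the system’s code page):
std::filesystem::path p{L"прывітанне.txt"};  // a perfectly valid Unicode filename
std::string s = p.string();                  // Windows: narrowed through the Active Code Page
                                             // no error, no exception - just mojibake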
C++11’s “uniform initialization” was designed to unify the syntax for creating objects. It did the opposite:
std::vector<int> a(4); // 4 elements, all zero
std::vector<int> b{4}; // 1 element, the value 4
The braces look uniform. The behavior is not. The compiler prefers initializer_list constructors over all others, and the result is a syntax that is neither uniform nor predictable. The abstraction promised simplicity and delivered surprise.
Bjarne Stroustrup himself invoked the Vasa - a seventeenth-century Swedish warship that capsized on its maiden voyage because the king kept demanding more cannons on higher decks. Each feature was reasonable in isolation. Together, Stroustrup warned, “they are insanity to the point of endangering the future of C++.”
The antidote is std::span. It replaces the ancient (pointer, size) pair with a lightweight, non-owning view of contiguous memory. It does not allocate. It does not own. It does exactly one thing. It is the right abstraction because it matches the shape of the problem exactly, with nothing left over.
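The before-and-after is the whole argument (signatures are illustrative):
void checksum(const std::byte* data, std::size_t size);  // before: two things to pass, kept in sync by hope
void checksum(std::span<const std::byte> data);          // after: one thing, with .size() attached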
Deep Modules
John Ousterhout’s A Philosophy of Software Design offers a visual model. A module has an interface (its top surface) and an implementation (its depth). A deep module has a small interface and a large implementation. A shallow module has a large interface and a small implementation.
Deep modules are good. They hide complexity behind simplicity. Unix file I/O is the canonical example: five functions - open, close, read, write, lseek - hide directory management, permission checks, disk scheduling, caching, and filesystem independence. The interface is tiny. The machinery is vast.
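The surface, in its entirety (POSIX calls; error handling elided for brevity):
int fd = open("data.bin", O_RDONLY);     // directory walk, permission checks, caching - all hidden
char buf[4096];
ssize_t n = read(fd, buf, sizeof(buf));  // disk scheduling, readahead - hidden
close(fd);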
Design is not about accepting the constraints the implementation imposes on users. Design is about absorbing those constraints so users don’t have to.
std::shared_ptr is a deep module. The interface is small: create it, copy it, use it, let it go. Behind that interface lives a control block that tracks both strong and weak reference counts, supports custom deleters, enables aliasing constructors, and with std::make_shared, allocates the object and the control block in a single memory operation for efficiency and exception safety. None of this complexity leaks through the interface. You do not need to understand control blocks to use a shared_ptr. That is depth.
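The entire user-facing surface fits in four lines (Widget stands in for any type):
auto p = std::make_shared<Widget>();  // one allocation: object and control block together
auto q = p;                           // copy: the strong count becomes 2
std::weak_ptr<Widget> w = p;          // observe without owning
p.reset();                            // strong count back to 1; q keeps the object alive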
std::unique_ptr achieves something even more remarkable: zero overhead. When the deleter is stateless - and it almost always is - the empty base optimization eliminates its storage entirely. The compiled result is identical to a raw pointer with a manual delete. The safety is free. The abstraction costs nothing. This is what Stroustrup meant by the zero-overhead principle: what you don’t use, you don’t pay for.
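The claim is checkable. On mainstream implementations, with the stateless default deleter:
auto p = std::make_unique<Widget>();
static_assert(sizeof(std::unique_ptr<Widget>) == sizeof(Widget*)); // EBO: the deleter occupies no storage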
Howard Hinnant’s std::chrono makes an entire category of bugs impossible. Mixing seconds and milliseconds is not a runtime error that you catch in testing. It is a type error that the compiler rejects before your code runs. The design makes efficient code convenient and inefficient code inconvenient. Date literals like 2016y/may/29 are self-documenting. The depth is enormous - calendrical calculations, leap seconds, time zone databases - but the surface is clean.
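A sketch of the compile-time guarantee:
using namespace std::chrono;
milliseconds m = seconds(2);                             // fine: exact, widening
// seconds s = milliseconds(1500);                       // error: lossy conversion, rejected at compile time
seconds s = duration_cast<seconds>(milliseconds(1500));  // truncation must be spelled out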
std::optional with its C++23 monadic operations follows the same instinct. transform, and_then, or_else let you chain operations that might not produce a value, and the library handles the empty case for you. The value path is clean. The error path requires more typing. As it should be: the type “skews towards behaving like a T” because its intended use is when the expected value is contained.
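The C++23 chain reads like the algorithm it expresses (find_user is a hypothetical lookup):
std::optional<std::string> name = find_user(id);
auto greeting = name
    .transform([](const std::string& n) { return "hello, " + n; })  // runs only if a value is present
    .value_or("hello, stranger");                                   // the empty case, handled once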
The sans-I/O philosophy applies the same principle to protocol libraries. A sans-I/O parser is a state machine that consumes buffers and produces events. It does not read from sockets. It does not manage connections. It does not know what I/O runtime you use. You call functions, feed it bytes, and it tells you what it found. The result is a protocol implementation that can be tested with simple function calls, deterministically, with no network, no threads, and no timing dependencies. The depth is in the protocol logic. The interface is buffers in, events out.
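A sketch of the shape such an interface takes (the names are illustrative, not a real library):
struct http_event { /* e.g. request line, header, body chunk */ };
struct http_parser {
    std::size_t consume(std::span<const std::byte> input); // feed bytes; returns how many were used
    std::optional<http_event> next_event();                // poll results: no sockets, no threads, no clock
};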
Now consider std::thread. If you forget to call join() or detach() before destruction, the destructor calls std::terminate() and your program dies. It took nine years and a separate proposal to produce std::jthread, which joins automatically. The original std::thread was described in N2802 as “possibly the most dangerous feature being added to C++0x.” A deep module absorbs decisions. A shallow module forces them on the user and punishes mistakes with termination.
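The difference in one scope (do_work as before):
{
    std::thread t([] { do_work(); });
}   // still joinable at scope exit: the destructor calls std::terminate()
{
    std::jthread t([] { do_work(); });
}   // C++20: the destructor joins; the mistake is no longer expressible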
Design for Composition
Alexander Stepanov saw something in the late 1970s that changed how we think about libraries:
“Some algorithms depended not on some particular implementation of a data structure but only on a few fundamental semantic properties of the structure... Most algorithms can be abstracted away from a particular implementation in such a way that efficiency is not lost.”
-- Alexander Stepanov, Dr. Dobb’s Interview
The Standard Template Library is built on this insight. std::sort does not know about std::vector. It knows about random-access iterators. std::find does not know about std::list. It knows about forward iterators. The algorithms are parameterized on concepts - the minimal set of operations they need - not on concrete types. Sixty algorithms compose with any container that provides the right iterators.
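The requirements are visible at the call site:
std::vector<int> v{3, 1, 2};
std::list<int>   l{3, 1, 2};
std::sort(v.begin(), v.end());               // needs random access: vector qualifies
auto it = std::find(l.begin(), l.end(), 2);  // needs only forward traversal: list qualifies
// std::sort(l.begin(), l.end());            // refuses to compile: list iterators are not random-access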
Stepanov insisted that complexity guarantees are part of the interface: “You cannot have interchangeable modules unless these modules share similar complexity behavior.” A stack that takes linear time to push is not a stack. The concept includes the performance contract.
Buffer sequences demonstrate composition in practice. Instead of accepting std::span<const std::byte> - a concrete type that forces a single contiguous buffer - an I/O function can accept a buffer sequence: any type that produces a range of memory regions. The result is zero-allocation composition:
auto combined = buffer_cat(header_buffers, body_buffers);
co_await sock.write(combined); // single writev() call
No copying. No allocation. Heterogeneous inputs - a fixed header and a dynamic body - compose into a single scatter-gather I/O operation. The span-fixated designer asks “what type should I accept?” The concept-aware designer asks “what operations does my function need to perform on its argument?”
A ReadStream concept captures the essential operation: anything you can read_some from. TCP sockets, TLS streams, file handles, in-memory buffers - one generic algorithm works with all of them:
template<ReadStream Stream>
task<> read_all(Stream& s, char* buf, std::size_t size) {
    std::size_t total = 0;
    while (total < size) {
        auto [ec, n] = co_await s.read_some(
            mutable_buffer(buf + total, size - total));
        if (ec)
            co_return;
        total += n;
    }
}
This is what composition looks like: generic algorithms, minimal requirements, maximum reuse.
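The concept itself can be small. A sketch, with the constraint on the awaitable’s result type elided for brevity:
template<class S>
concept ReadStream = requires(S& s, mutable_buffer b) {
    s.read_some(b);  // must exist and be awaitable, yielding (error_code, bytes_transferred)
};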
But composition has a cost. When the abstraction layer itself becomes the bottleneck, it has failed. Google bans std::ranges from most of its codebase. The reasons are concrete: abnormal binary bloat, cubic stack growth with nested adapters, compile times that slow by a factor of eight. The abstraction is elegant in theory. In practice, the cost exceeds the benefit. Composition that cannot be deployed is not composition. It is poetry.
An all-powerful abstraction is a meaningless one. The abstractions that succeed are narrow. Iterators abstract over traversal. RAII abstracts over resource lifetime. Allocators abstract over memory strategy. Each one captures a single essential property and leaves everything else alone. The wide abstractions - the ones that try to unify scheduling, context propagation, error handling, cancellation, algorithm dispatch, and hardware backend selection into a single framework - those are the ones that collapse under their own weight.
Ship the Boat, Not the Blueprints
TCP/IP did not win because it was better designed than the OSI model. By most theoretical measures, OSI was more complete, more layered, more carefully specified. TCP/IP won because it was running. The IETF’s motto - “rough consensus and running code” - is not a concession to imperfection. It is a design philosophy. The Internet’s architecture, RFC 1958 explains, “grew in evolutionary fashion from modest beginnings, rather than from a Grand Plan.”
Richard Gabriel named this principle Worse is Better. The New Jersey approach - Unix, C, TCP/IP - prioritizes simplicity of implementation. The MIT approach - Lisp, OSI, theoretically complete systems - prioritizes correctness and consistency. Gabriel’s uncomfortable observation is that worse-is-better software has “better survival characteristics.” Simpler implementations ship sooner, port easier, and spread faster. Gabriel called Unix and C “the ultimate computer viruses.”
std::format was shipped right. Victor Zverovich built the {fmt} library, proved it in production, let the ecosystem validate the design, and then standardized it. It arrived complete: format strings, type safety, extensibility, performance. Users could use it on day one.
C++20 coroutines were shipped wrong. The language feature - co_await, co_yield, co_return - arrived without std::generator, without a task type, without a scheduler. The machinery was there. The boat was not. It took three years for std::generator to arrive in C++23. The task type is still missing. Users spent those years writing their own, incompatibly.
The pattern of “ship machinery in C++N, ship usable types in C++N+3” should be recognized as an anti-pattern and rejected. Any proposal that introduces language machinery must also include standard library types that make the machinery immediately usable.
std::execution repeats the mistake at larger scale. It ships without a thread pool. It ships without a task type. The argument is that “the ecosystem will provide implementations.” But standardization exists precisely to solve the problem that the ecosystem cannot: vocabulary types that enable interoperability between libraries. Shipping a framework without its primitives is like selling a kitchen without a stove and telling the buyer that the restaurant industry will provide one.
Teach What You Build
Christopher Alexander spent decades trying to name something he could see but not define. He called it the quality without a name - an aliveness in certain buildings that makes them feel whole, human, and right. He could not capture it in a formula. But he could surround it with patterns: recurring solutions to recurring problems that, when combined thoughtfully, produce spaces where people thrive.
Software has the same quality. Some libraries feel right. You read the documentation, you try an example, and it works the way you expected before you knew what to expect. Other libraries make you fight.
Alan Kay articulated the standard:
“Simple things should be simple, complex things should be possible.”
-- Alan Kay
Kay later invoked this principle when discussing the iPhone with Steve Jobs. The iPhone makes simple things simple, Kay observed, but it makes complex things impossible. That is only half the design.
The test of teachability is progressive disclosure. The beginner sees the simple surface. The intermediate user discovers composition. The expert pops the hood and finds clean machinery underneath. A library that requires understanding the machinery before you can use the surface has inverted the learning curve.
The motivating examples become the documentation. If your design is correct, the use cases that drove it are also the tutorials that teach it. A design that requires extensive prerequisite explanation before the user can write their first line of code is a design that put the framework before the use case.
Think about a legal contract between two parties. A homeowner hires a contractor to build a deck. The contract states that the homeowner will provide the lumber and a clear site, and the contractor will build a structurally sound deck by a certain date. Software contract programming works the same way. The metaphor explains the concept because the design mirrors how people already think.
This is not a coincidence. When design follows the shape of human thought, it barely needs explanation.
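In the contract-assertion syntax adopted for C++26 (P2900; a sketch, not a tutorial), the deck contract might read:
int isqrt(int x)
    pre  (x >= 0)                      // the homeowner’s obligation: a clear site
    post (r : r >= 0 && r * r <= x);   // the contractor’s promise: a sound deck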
The Implementation Confidence Gap
There is a fundamental asymmetry in programming. Implementation success is verifiable in minutes. You write code, you compile it, it runs, the test passes. The feedback loop is tight and rewarding. Design success may not be verifiable for years. A poorly designed API might work fine until the third team tries to extend it. A leaky abstraction might hold until the system scales. By the time the design fails, the designer has moved on and the failure looks like someone else’s problem.
A programmer who rapidly produces a working feature experiences fluency. That same programmer, asked to justify their abstraction choices, explain their interface decisions, or anticipate how their design will evolve, often reveals that fluency did not require deep understanding. Implementation skill and design skill are different things. The first is common. The second is rare. And the constant reinforcement of the first creates a false confidence about the second.
This gap manifests in predictable ways. Interface proliferation: functions that mirror the implementation’s structure rather than the user’s mental model. Abstraction avoidance: dismissing necessary generalization as “over-engineering.” Abstraction proliferation: adding layers without purpose. Refusal to iterate: “it works, why change it?”
Functional institutions are the exception, not the rule. Creating a functional institution requires a founder who knows how to coordinate people to achieve the institution’s purpose. The succession problem - transferring both power and skill to the next generation - is the hardest problem in any organization. When the transfer fails, what remains is form without function: people following processes they do not understand, reproducing patterns whose purpose has been forgotten.
Knowledge comes in two forms. Living knowledge is understood, transferable, and extensible. Dead knowledge is form reproduced without comprehension - processes followed because “that’s how we’ve always done it,” code patterns copied without understanding why they exist.
“Once that tradition is lost, you are making photocopies of photocopies. Each subsequent copy loses information.”
The price of reliability is the pursuit of the utmost simplicity. Not because simplicity is easy, but because it is the only thing that survives transmission.
Judgment
The irreducible skill in design is judgment. Not knowledge, not experience, not pattern recognition - judgment. The ability to look at two reasonable approaches and choose the one that will serve users better five years from now.
Experience is a powerful tool when it produces curiosity. A person who has seen a technique fail in other contexts and says “let me look carefully at how this specific design avoids those failure modes” is using experience well. A person who has seen a technique fail and says “therefore this must be wrong too” has let experience replace analysis. The label was recognized. The mechanism was not examined. That is not engineering judgment. It is pattern-matching.
“We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil. Yet we should not pass up our opportunities in that critical 3%.”
-- Donald Knuth, “Structured Programming with go to Statements”
Knuth is modeling judgment. Not “never optimize” and not “always optimize.” Know which 3% matters. That requires measurement, not intuition. It requires understanding, not reflex.
The skilled designer asks both questions. They design for composition and usability. They understand concepts and know when concrete types are appropriate. They can explain why a design is abstract and demonstrate that it serves real use cases simply. The extremes are easy. Span for everything. Concepts for everything. The middle ground - abstract enough to enable composition, concrete enough to be usable, grounded in real use cases and not theoretical purity - that is where good design lives, and finding it requires judgment that no rule can replace.
Good design lives in the middle. It is abstract enough to compose, concrete enough to use, and grounded firmly enough in reality that the people who inherit it can understand why every decision was made.
The Quality Without a Name
We come back to Alexander. The quality without a name. The thing you recognize in a well-designed tool before you can articulate what makes it good.
It is the feeling of using std::unique_ptr for the first time and realizing that the compiler is managing your resource lifetime and it costs you nothing. It is the moment you write auto [ec, n] = co_await sock.read_some(buf) and think: this is just reading from a socket, the way it should always have been.
That quality does not come from cleverness. It comes from care. From someone who sat with the problem long enough to find its essential shape. From someone who removed everything that did not serve the user. From someone who tested the design against reality instead of defending it against criticism.
Every line of code you write is a letter to someone you will never meet. A future developer, a future maintainer, a future user. They will not know your name. They will not read your design documents. They will read your interfaces. They will feel the weight of your decisions in the ease or difficulty of their daily work.
“Good design is thorough down to the last detail. Nothing is arbitrary or left to chance. Care and accuracy in the design process show respect for the user.”
-- Dieter Rams
Traditions of knowledge are preserved intentionally. It is hard to keep a tradition of knowledge alive. The people who built Unix, who designed the STL, who created the zero-overhead principle, who proved that simple implementations survive while grand plans do not - they left us more than code. They left us a way of thinking. An approach to problems that prizes clarity over cleverness, composition over accumulation, users over architectures.
That tradition is worth protecting. Not by freezing it in place, but by understanding it deeply enough to extend it. By building things that are simple enough to teach, correct enough to trust, and small enough to understand. By absorbing complexity so that the next person who touches your work finds something that makes sense.
The quality without a name is not a mystery. It is the result of caring enough to do the work.
Build things that matter. Build them simply. Build them well.
References
Steve Jobs. “The Guts of a New Machine.” The New York Times Magazine, 2003.
Don Norman. The Design of Everyday Things. Basic Books, 1988.
William Strunk Jr. The Elements of Style.
Antoine de Saint-Exupery. Airman’s Odyssey.
Dieter Rams. “Ten Principles of Good Design.”
Ken Thompson. Quoted in The Art of Unix Programming by Eric S. Raymond.
Chuck Moore. “Factoring in Forth.” UltraTechnology.
Rich Hickey. “Simple Made Easy.” Strange Loop Conference, 2011.
C.A.R. Hoare. “The Emperor’s Old Clothes.” ACM Turing Award Lecture, 1980.
Brian Kernighan and P.J. Plauger. Software Tools. Addison-Wesley, 1976.
Rob Pike. “Simplicity is Complicated.” dotGo, 2015.
Steve Yegge. “Execution in the Kingdom of Nouns.” 2006.
Joel Spolsky. “Don’t Let Architecture Astronauts Scare You.” 2001.
Matt Stancliff. “std::visit is Everything Wrong with Modern C++.” Bit Bashing.
P0302R1. “Removing Allocator Support in std::function.” WG21, 2016.
Sandi Metz. “The Wrong Abstraction.” 2016.
Fred Brooks. “No Silver Bullet.” IEEE Computer, 1986.
Microsoft STL Issue #909. “Prevent filesystem::path dangerous conversions.”
P2319R2. “Prevent path presentation problems.” WG21, 2024.
Bjarne Stroustrup. “What’s All the C Plus Fuss?” Columbia University, 2018.
John Ousterhout. A Philosophy of Software Design. Yaknyam Press, 2018.
Alexander Stepanov. “Al Stevens Interviews Alex Stepanov.” Dr. Dobb’s Journal, 1995.
RFC 1958. “Architectural Principles of the Internet.” 1996.
Richard Gabriel. “Worse is Better.” 1989.
Victor Zverovich. {fmt} library.
N2802. “A Plea to Reconsider Detach-on-Destruction for Thread Objects.” WG21, 2008.
Howard Hinnant. “A chrono Tutorial.” CppCon, 2016.
P0798R8. “Monadic operations for std::optional.” WG21, 2021.
Donald Knuth. “Structured Programming with go to Statements.” ACM Computing Surveys, 1974.
Christopher Alexander. The Timeless Way of Building. Oxford University Press, 1979.
Alan Kay. “Simple things should be simple, complex things should be possible.”
N4412. “Shortcomings of iostreams.” WG21, 2015.
Ash Vardanian. “The Painful Pitfalls of C++ STL Strings.” 2024.