A Case for Standard Networking and Third-Party Parallelism
Great Founder Theory in C++
Abstract
This paper argues that WG21 made a strategic error in prioritizing P2300 (std::execution/senders-receivers) over the Networking TS, and that C++ would be better served by standardizing networking while leaving heterogeneous parallel execution to third-party libraries. We draw on the Python ecosystem’s successful separation of core language facilities from domain-specific libraries (TensorFlow, PyTorch), analyze C++’s pathological aversion to dependencies through the lens of Great Founder Theory, and argue that this cultural dysfunction inhibits the recursive composition of abstractions that enables software ecosystem scaling.
1. Introduction
The C++ standards committee spent over a decade attempting to standardize “executors”—a unified abstraction for asynchronous and parallel execution. The resulting P2300 (“std::execution”) prioritizes heterogeneous computing (GPUs, distributed systems) over networking, despite networking being the more universal requirement. We argue this inversion of priorities reflects institutional capture by corporate interests (NVIDIA, Meta) rather than sound technical judgment about what belongs in a language standard library.
Our thesis is twofold: (1) networking should have been standardized because it represents infrastructure-level functionality that virtually all networked applications require, while heterogeneous parallelism is a specialized domain better served by dedicated libraries; and (2) C++’s cultural resistance to third-party dependencies is itself a symptom of institutional decay that prevents the ecosystem from scaling.
2. The Networking Case
2.1 Universality of Need
Networking is infrastructure. Every web server, every database client, every distributed system, every IoT device, every cloud application requires network I/O. The Networking TS, derived from Boost.Asio, represents over two decades of production deployment across thousands of applications. Chris Kohlhoff’s design has been battle-tested in contexts ranging from high-frequency trading to embedded systems.
By contrast, heterogeneous parallel computing—while important—serves a narrower constituency. GPU programming, distributed machine learning, and HPC workloads are specialized domains with specialized requirements. The users who need these capabilities are precisely the users equipped to integrate specialized libraries.
2.2 The Stability Argument
Boost.Asio’s async model has remained largely stable since its introduction. Applications written against it fifteen years ago continue to function. This stability reflects design maturity—the problem space is well-understood, the abstraction boundaries are clear, and the failure modes are documented.
P2300, by contrast, underwent ten major revisions before adoption. Its customization mechanism changed from tag_invoke to member functions (P2855). Algorithms like ensure_started and start_detached were removed entirely. The “environments” and “queryable” concepts went through multiple redesigns. This churn indicates a design still finding its footing—precisely the wrong characteristic for standard library inclusion.
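The sketch below illustrates what that churn looked like at the API surface. It is a sketch against the adopted design, not code that compiles on today’s shipping standard libraries, which provide these names only through the stdexec reference implementation; the exact pre-P2855 spelling is reconstructed from the earlier revisions and should be read as illustrative.

```cpp
// Illustrative sketch of the customization churn described above.
// Names follow P2300R10 (std::execution, header <execution>).
#include <exception>
#include <execution>

// Earlier revisions customized a receiver through tag_invoke, roughly:
//
//   friend void tag_invoke(std::execution::set_value_t,
//                          my_receiver&&, int) noexcept;
//
// After P2855, the same hooks are ordinary member functions:
struct my_receiver {
    using receiver_concept = std::execution::receiver_t;
    void set_value(int) && noexcept { /* consume the result */ }
    void set_error(std::exception_ptr) && noexcept { /* report failure */ }
    void set_stopped() && noexcept { /* observe cancellation */ }
};
```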
2.3 The Completeness Argument
The Networking TS ships as a complete, usable facility: a developer can write a TCP server using only the TS’s abstractions. P2300, by contrast, ships without a thread pool, without a coroutine task type, and with a “paltry set” of algorithms. As P3109 acknowledges, users “will need to go to third party libraries for thread-pools, or write their own.” The irony is acute: a proposal whose standardization was justified by C++’s aversion to third-party dependencies requires third-party dependencies to be useful.
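To make “complete and usable” concrete, here is a minimal blocking TCP echo server. It is written with Boost.Asio spellings, from which the TS was derived; the TS exposes the same surface under std::experimental::net.

```cpp
// A minimal blocking TCP echo server on the Asio/Networking TS surface.
// Assumes Boost.Asio is available; the TS spells these names
// std::experimental::net instead of boost::asio.
#include <boost/asio.hpp>
#include <array>
#include <cstddef>

int main() {
    namespace net = boost::asio;
    net::io_context ctx;
    net::ip::tcp::acceptor acceptor(ctx, {net::ip::tcp::v4(), 8080});
    for (;;) {
        net::ip::tcp::socket sock = acceptor.accept();
        std::array<char, 1024> buf{};
        boost::system::error_code ec;
        for (;;) {
            // Read whatever is available and echo it back verbatim.
            std::size_t n = sock.read_some(net::buffer(buf), ec);
            if (ec) break;  // peer closed the connection or an error occurred
            net::write(sock, net::buffer(buf, n), ec);
            if (ec) break;
        }
    }
}
```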
3. The Python Precedent
3.1 TensorFlow and PyTorch
Python’s machine learning ecosystem provides an instructive counter-example to C++’s standardization impulse. TensorFlow and PyTorch—the dominant frameworks for deep learning—are not part of Python’s standard library. They never will be. Yet they have achieved de facto standardization through adoption, with stable APIs, extensive documentation, and ecosystem support that exceeds most standard library facilities.
This separation works because:
Rapid evolution: ML frameworks iterate faster than language standards can accommodate. TensorFlow 1.x to 2.x represented a fundamental API redesign that would have been impossible within Python’s backward-compatibility constraints.
Domain expertise: The TensorFlow and PyTorch teams possess specialized knowledge about GPU programming, automatic differentiation, and distributed training that the Python core developers do not. Embedding this expertise in the standard library would require either hiring these experts or accepting inferior designs.
Competition enables innovation: PyTorch emerged as a competitor to TensorFlow and eventually surpassed it in research adoption. If TensorFlow had been standardized, this competitive pressure—and the resulting innovation—would not have occurred.
Installation is trivial: pip install torch takes seconds. The “dependency” is not a meaningful burden.
3.2 NumPy: The Counter-Counter-Example
One might argue that NumPy represents a successful standardization of array computing within the Python ecosystem—it is ubiquitous and stable. But NumPy is not in Python’s standard library. It achieved its status through adoption, not standardization. The Python core developers wisely recognized that array computing, while important, was not infrastructure in the same sense as os, sys, or socket.
The correct analogy for C++ would be: networking (like Python’s socket module) belongs in the standard; parallel computing (like NumPy/PyTorch) belongs in the ecosystem.
4. Great Founder Theory and Institutional Dysfunction
4.1 The Absence of Ecosystem Founders
Great Founder Theory posits that functional institutions trace back to skilled founders who solve coordination problems through social technology design. C++’s ecosystem suffers from a founder vacuum in package management and dependency resolution.
Python has pip and PyPI, maintained by the Python Packaging Authority. Rust has cargo and crates.io, designed by the Rust team from the language’s inception. JavaScript has npm, created by Isaac Schlueter. Each of these represents a founder’s vision for how software composition should work, embedded in tooling that makes dependencies trivial.
C++ has... nothing comparable. CMake is a build system, not a package manager. Conan and vcpkg are third-party efforts that lack ecosystem-wide adoption. The result is that adding a dependency in C++ requires:
Choosing a package manager (or not using one)
Configuring build system integration
Managing ABI compatibility across compiler versions
Handling transitive dependencies manually
Often, vendoring source code directly
This friction is not inherent to C++ as a language. It reflects the absence of founding vision for how C++ projects should compose. Bjarne Stroustrup founded C++ the language; no one founded C++ the ecosystem.
4.2 Lost Knowledge: The Cost of No Package Manager
The absence of frictionless dependency management has created a cultural adaptation: C++ developers avoid dependencies. This adaptation is rational given the tooling environment, but it produces pathological outcomes at the ecosystem level.
Consider: a C++ developer who needs JSON parsing will often write their own parser (or copy-paste code) rather than depend on nlohmann/json. A developer who needs HTTP client functionality will wrap libcurl with custom code rather than use a higher-level library. Each of these decisions is locally reasonable but globally destructive.
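The avoided dependency would often have cost only a handful of lines. A sketch, assuming nlohmann/json’s single header is on the include path:

```cpp
// What the avoided dependency buys: JSON parsing in a few lines.
// Assumes nlohmann/json (a single header) is on the include path.
#include <nlohmann/json.hpp>
#include <iostream>
#include <string>

int main() {
    auto j = nlohmann::json::parse(R"({"library": "asio", "header_only": true})");
    std::cout << j["library"].get<std::string>() << '\n';  // prints: asio
}
```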
The accumulated effect is that C++ libraries tend to be “leaf” libraries—they depend on almost nothing and provide relatively low-level abstractions. Libraries that would depend on other libraries face prohibitive integration costs, so they either don’t exist or reimplement their dependencies internally. This prevents the recursive composition that enables abstraction towers.
4.3 Borrowed Power vs. Owned Power in the Standard Library
In GFT terms, the C++ standard library represents “borrowed power”—authority derived from formal position rather than intrinsic capability. When WG21 standardizes a facility, that facility gains legitimacy from its ISO imprimatur, not from demonstrated superiority.
Third-party libraries, by contrast, must earn “owned power” through adoption. Boost.Asio achieved dominance because developers chose it, not because a committee mandated it. This market test provides information that standardization processes cannot: which designs actually work in practice, under competitive pressure.
By standardizing P2300 before it has achieved Boost.Asio-level adoption, WG21 is attempting to confer borrowed power on a design that has not earned owned power. The risk is that the standard enshrines an inferior or premature design, crowding out potentially superior alternatives.
5. The Scaling Problem
5.1 Why Abstractions Must Compose
Software engineering scales through abstraction composition. A web framework composes HTTP handling, routing, middleware, and templating. A machine learning pipeline composes data loading, preprocessing, model training, and deployment. Each layer builds on lower layers, enabling developers to work at progressively higher levels of abstraction.
This composition requires that lower-level components be easily integrated. If adding a dependency costs hours of configuration, developers will not compose—they will reimplement or go without. The result is that C++ software tends to be “flat”: applications depend directly on low-level primitives rather than intermediate abstractions.
5.2 The C++ Abstraction Ceiling
Consider the abstraction levels available in different ecosystems:
Python web development:
Level 0: socket
Level 1: http.client/server
Level 2: requests/urllib3
Level 3: Flask/FastAPI
Level 4: Django
Level 5: Platforms built on Django
C++ web development:
Level 0: BSD sockets
Level 1: Boost.Asio (or raw epoll/IOCP)
Level 2: Beast (HTTP on Asio)
Level 3: ... (sparse options, each with integration costs)
Level 4: ... (virtually nonexistent)
The C++ stack “tops out” at a much lower level because each abstraction layer imposes dependency costs that compound. A Level 4 framework would need to depend on a Level 3 framework, which depends on Level 2, and so on. In Python, this is trivial. In C++, it is prohibitive.
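For concreteness, here is roughly what the top of today’s C++ stack (Level 2) looks like: a blocking HTTP GET with Boost.Beast layered on Asio. The sketch assumes Boost is already installed and visible to the build, which is precisely the integration cost at issue.

```cpp
// "Level 2" in the C++ stack: a blocking HTTP GET using Boost.Beast on
// top of Boost.Asio. Assumes Boost is installed and found by the build.
#include <boost/asio.hpp>
#include <boost/beast.hpp>
#include <iostream>

int main() {
    namespace net   = boost::asio;
    namespace beast = boost::beast;
    namespace http  = beast::http;

    net::io_context ctx;
    net::ip::tcp::resolver resolver(ctx);
    beast::tcp_stream stream(ctx);
    stream.connect(resolver.resolve("example.com", "80"));

    http::request<http::empty_body> req{http::verb::get, "/", 11};
    req.set(http::field::host, "example.com");
    http::write(stream, req);

    beast::flat_buffer buffer;
    http::response<http::string_body> res;
    http::read(stream, buffer, res);
    std::cout << res.result_int() << '\n';  // e.g. 200
}
```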
5.3 How Standards Inhibit Rather Than Enable
Counterintuitively, standardizing functionality can worsen the scaling problem. Once std::execution exists, library authors face a choice: depend on the standard facility (which may not suit their needs) or provide an alternative (which users must then choose between). The standard creates a Schelling point that may not be the optimal coordination point.
Moreover, standard facilities evolve slowly. P2300 in C++26 will not change significantly until C++29 at the earliest. If the design proves problematic, users are stuck with it. Third-party libraries can iterate, fork, or be replaced. Standard libraries calcify.
6. What Should Have Been Standardized
6.1 Networking as Infrastructure
The Networking TS should have been standardized in C++20 or C++23. Its design was stable, its implementation was mature, and its use case was universal. The executor model it employed was sufficient for I/O-bound workloads, which constitute the vast majority of async programming.
Arguments that the Networking TS’s executor model was “inadequate for heterogeneous computing” are true but irrelevant. Networking doesn’t need heterogeneous computing. A TCP server does not run on GPUs. The demand that networking wait for a unified async model that also serves CUDA was a category error that cost the C++ community a decade.
6.2 Parallel Computing as Ecosystem
Heterogeneous parallel computing should have remained in the ecosystem. Libraries like:
HPX (High Performance ParalleX)
Kokkos (Sandia National Labs)
RAJA (Lawrence Livermore)
Intel TBB (Threading Building Blocks)
NVIDIA Thrust/CUB
These represent decades of domain expertise, iterated in response to actual user needs. They evolve faster than standards processes permit. They can target specific hardware (CUDA, ROCm, SYCL) without the abstraction overhead that standardization imposes.
P2300/libunifex/stdexec could have joined this ecosystem as another option—perhaps eventually achieving PyTorch-like de facto standardization through adoption. Instead, it has been enshrined in the standard before earning that status, potentially crowding out alternatives.
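Nothing prevents that today. The sketch below consumes the stdexec reference implementation as an ordinary third-party library; the header and namespace names (stdexec, exec) are those of the NVIDIA/stdexec repository, not of the standard.

```cpp
// std::execution consumed as an ecosystem library: the stdexec reference
// implementation, with a thread pool from its exec:: companion namespace.
#include <stdexec/execution.hpp>
#include <exec/static_thread_pool.hpp>
#include <cassert>
#include <utility>

int main() {
    exec::static_thread_pool pool(4);
    auto sched = pool.get_scheduler();

    auto work = stdexec::schedule(sched)                     // start on the pool
              | stdexec::then([] { return 41; })             // produce a value
              | stdexec::then([](int v) { return v + 1; });  // transform it

    auto [result] = stdexec::sync_wait(std::move(work)).value();
    assert(result == 42);
}
```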
6.3 The Package Management Imperative
The actual infrastructure C++ needs is not more standard library facilities—it is frictionless dependency management. If conan install stdexec or vcpkg install p2300 were as trivial as pip install torch, the question of what belongs in the standard would be far less consequential.
WG21 cannot directly solve this problem (package management is not a language standard issue), but the committee’s prioritization reflects a cultural assumption that the standard library must provide everything because dependencies are too painful. This assumption is self-fulfilling: by stuffing the standard library with facilities that belong in the ecosystem, the committee reduces pressure to fix the actual problem (tooling) while bloating the standard with designs that will inevitably become obsolete.
7. The Cultural Dysfunction
7.1 “Not Invented Here” as Institutional Norm
C++ culture venerates self-sufficiency. The “good” C++ developer understands everything from assembly to templates, writes their own allocators, and depends on nothing they haven’t read the source code for. This culture has virtues—C++ developers often possess deep systems knowledge—but it produces dysfunctional ecosystem dynamics.
When every project reinvents foundational abstractions, effort is wasted, bugs are duplicated, and interoperability suffers. The C++ community has written hundreds of JSON parsers, string formatting libraries, and command-line argument handlers, each slightly incompatible with the others. This is not a sign of ecosystem health; it is a sign of coordination failure.
7.2 The Fear of Version Conflicts
The stated reason for avoiding dependencies is usually “dependency hell”—the fear that transitive dependencies will conflict, that ABI breaks will cause silent corruption, that updates will break builds. These fears are legitimate given current tooling, but they are symptoms of the tooling problem, not arguments against dependencies per se.
Python had dependency hell too, before virtual environments and pip freeze. Rust had it before Cargo’s lockfiles. The solution was better tooling, not fewer dependencies. C++ has chosen the opposite path: fewer dependencies as a substitute for better tooling.
7.3 The Committee’s Role in Perpetuating Dysfunction
WG21’s willingness to standardize ever more facilities implicitly validates the cultural assumption that dependencies are unacceptable. Each addition to <algorithm>, <ranges>, <execution>, <expected>, <format>, <chrono> says: “The standard library should provide this because you cannot be expected to depend on a third-party library for it.”
This is the opposite of the message the committee should send. A healthier message would be: “The standard library provides primitive infrastructure; the ecosystem provides everything else; here are the tools that make ecosystem integration trivial.”
8. Conclusion
The standardization of P2300 over the Networking TS represents a strategic error driven by corporate capture (NVIDIA/Meta’s parallel computing interests trumping universal networking needs), cultural dysfunction (the assumption that dependencies are unacceptable), and institutional sclerosis (the inability to ship stable, complete facilities in reasonable timeframes).
The correct path would have been:
Standardize the Networking TS in C++20/23 as infrastructure-level functionality
Leave heterogeneous parallel execution to ecosystem libraries (HPX, Kokkos, stdexec-as-third-party)
Prioritize tooling improvements that make dependencies trivial
Allow market competition to determine which parallel execution model deserves eventual standardization
Instead, C++ has standardized a facility that ships incomplete, serves a specialized constituency, and has not earned ecosystem validation through adoption. Meanwhile, networking—the more universal need—remains non-standard after twenty years of effort.
The deeper lesson is that C++’s ecosystem dysfunction cannot be solved by expanding the standard library. It can only be solved by building the social and technical infrastructure that enables library composition. Until that happens, C++ will remain an ecosystem of leaf nodes, unable to grow the abstraction towers that modern software requires.
References
P2300R10: std::execution
N4771: Networking TS
P2464R0: Ruminations on networking and executors
P2469R0: Response to P2464
P3109R0: A plan for std::execution for C++26
Burja, S. (2020). Great Founder Theory.
The authors acknowledge that reasonable people disagree on these matters, and that the P2300 authors worked diligently on a difficult problem. Our critique is of institutional dynamics, not individual efforts.

