Open Source After Dependencies (Part II): Design Spaces, Incentives, and What Comes Next
What's the practical plan for intent-driven development?
In the first article, I tried to reframe open source away from dependencies and toward intent, variants, and selection. The response I got—privately more than publicly—was consistent:
“Interesting, but this feels abstract. How would this actually work day to day? And what happens to maintainers?”
This article is an attempt to answer those questions more concretely, without pretending the trade-offs don’t exist.
Seeing the Difference: Libraries vs Design Space
Today, most technical choices collapse into a single axis:
Which library do we pick?
This flattens a multidimensional problem into a popularity contest.
A more honest mental model looks like this:
axes: latency, memory, debuggability, startup time, portability, failure behavior
points: concrete implementations
boundaries: hard constraints imposed by the product
A library is a single point that pretends to be the whole space.
An intent-driven system exposes the space itself.
Visually (simplified):
```
  debuggability
       ↑
       |   variant B
       |                 variant C
       |
latency -----------+------------→ throughput
       |   variant A
       |
```
The important part isn’t picking “the best” point. It’s seeing what tradeoffs exist at all.
That visibility is what open source used to give us—and what dependency abstraction quietly took away.
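To make the model concrete, here is a minimal sketch of a design space as a data structure. It's written in TypeScript, and every name in it (`Axis`, `Variant`, `DesignSpace`, `admissible`) is invented for illustration, not taken from any real tool:

```typescript
// A minimal sketch of the design-space model. All names are illustrative.

// One measurable dimension of the space.
type Axis = "latencyP99Ms" | "memoryMb" | "debuggability" | "startupMs";

// A concrete implementation is a point: one measured value per axis.
interface Variant {
  name: string;
  measurements: Record<Axis, number>;
}

// Hard constraints imposed by the product: the boundaries of the space.
interface Boundary {
  axis: Axis;
  max?: number; // e.g. latencyP99Ms must stay under 5
  min?: number; // e.g. a debuggability score must stay above some floor
}

interface DesignSpace {
  axes: Axis[];
  points: Variant[];
  boundaries: Boundary[];
}

// A variant is admissible if it sits inside every boundary.
function admissible(v: Variant, space: DesignSpace): boolean {
  return space.boundaries.every(({ axis, max, min }) => {
    const value = v.measurements[axis];
    return (max === undefined || value <= max) &&
           (min === undefined || value >= min);
  });
}
```

A library hands you one `Variant`. An intent-driven tool hands you the whole `DesignSpace` and lets you query it against your boundaries.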
A Concrete Walkthrough: intent explore http-server
Imagine a small CLI. Not a framework. Not a platform. Just a thin layer around code generation, testing, and measurement.
```
intent explore http-server \
  --latency-p99 '<5ms' \
  --memory '<50mb' \
  --concurrency 20000 \
  --variants 5
```
What happens next is deliberately boring:
- The tool resolves a shared capability spec (interfaces, invariants, tests; sketched below).
- An LLM generates multiple implementations with enforced dissimilarity.
- Each implementation is vendored locally.
- Benchmarks and stress tests run automatically.
- A summary is produced:
```
Variant A: lowest latency, poor debuggability
Variant B: stable under load, higher memory
Variant C: experimental, best throughput, fragile
```
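What might that shared capability spec (step one above) contain? A minimal sketch, again in TypeScript; the `HttpServer` interface, the invariants, and the `conformance` helper are all hypothetical:

```typescript
// Hypothetical capability spec for "http-server". Illustrative only.

// The shared interface every generated variant must implement.
interface HttpServer {
  listen(port: number): Promise<void>;
  close(): Promise<void>;
}

// Invariants: properties that must hold for any implementation.
const invariants = [
  "responds with 400 to malformed requests instead of crashing",
  "releases the port once close() resolves",
];

// Constraints carried over from the CLI flags above.
const constraints = { latencyP99Ms: 5, memoryMb: 50, concurrency: 20_000 };

// One shared conformance check, run identically against every variant.
async function conformance(make: () => HttpServer): Promise<void> {
  const server = make();
  await server.listen(0); // 0 = any free port
  await server.close();   // must resolve; a real spec would assert the invariants
}
```

The spec is the shared artifact; the implementations are disposable.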
At no point did you “install” anything. At no point did you trust a maintainer roadmap. At no point did you lose ownership of the result.
You didn’t outsource the decision—you augmented it.
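On disk, the result might look something like this (a hypothetical layout; nothing here is prescribed by any real tool):

```
variants/http-server/
  spec.ts          # shared interfaces, invariants, tests
  a/               # variant A: lowest latency
  b/               # variant B: stable under load
  c/               # variant C: experimental
  results.json     # benchmark measurements, per variant
```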
“But Who Maintains This?” — A Maintainer’s Counterargument
This is the strongest objection, and it deserves to be taken seriously.
Today, maintainers:
fix bugs
review PRs
manage releases
absorb ecosystem churn
act as human routers for user intent
In an intent-first world, much of that disappears.
That sounds threatening—and it is, if we assume maintenance is the core value.
But historically, the most influential maintainers didn’t win because they fixed bugs faster. They won because they:
defined clean abstractions
named problems correctly
made good tradeoffs visible
articulated constraints others hadn’t seen yet
Those roles don’t go away. They become clearer.
Maintainers shift from:
caretakers of codebases
to:
curators of design knowledge
That’s a loss of control—but arguably a gain in intellectual leverage.
Incentives: Why This Might Actually Be More Sustainable
The Tailwind situation made something explicit: traffic and attention, not code, were the scarce resources.
That incentive model pushes projects toward:
centralization
ecosystem gravity
paid extensions
subtle lock-in
An intent/variant model weakens those levers:
no single implementation owns the funnel
ideas propagate independently of brand
monetization shifts toward services, audits, and expertise
This is worse for venture-scale open source. It may be better for the ecosystem.
Not every problem needs a company. Some need a map.
Fragmentation Is a Feature (If You Fragment the Right Thing)
One fear keeps coming up:
“Won’t this fragment everything?”
Yes—but at the implementation level, not the interface level.
Today we fragment:
APIs
mental models
conceptual boundaries
And then we pretend npm solves that.
In this model:
interfaces are shared
tests are shared
benchmarks are shared
implementations compete openly
That’s closer to how CPUs, databases, and compilers evolved—and those ecosystems are stronger for it.
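As a sketch of what "shared benchmarks, competing implementations" could mean in practice: one harness, many variants, identical measurement. Everything here (`Impl`, `benchmark`) is illustrative:

```typescript
// Shared benchmark harness: every variant runs through the exact same
// measurement loop, so results are comparable. All names are illustrative.

interface Impl {
  name: string;
  handle(request: string): Promise<string>;
}

async function benchmark(impl: Impl, iterations = 10_000) {
  const samples: number[] = [];
  for (let i = 0; i < iterations; i++) {
    const start = performance.now();
    await impl.handle("GET / HTTP/1.1");
    samples.push(performance.now() - start);
  }
  samples.sort((a, b) => a - b);
  return {
    name: impl.name,
    p50Ms: samples[Math.floor(samples.length * 0.5)],
    p99Ms: samples[Math.floor(samples.length * 0.99)],
  };
}

// Run variants sequentially so their timings don't interfere:
// for (const v of variants) console.log(await benchmark(v));
```

Because the harness is shared, a result published by one team is directly comparable to a result published by another.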
Where New Ideas Actually Enter the System
This is the part I care about most.
Novelty does not come from generation. It comes from reframing constraints.
Examples:
“What if we optimize for failure recovery, not throughput?”
“What if startup time is more important than steady-state?”
“What if we eliminate abstraction layers entirely?”
These ideas don’t come from LLMs. They come from humans being uncomfortable.
The system’s job is not to invent those ideas. It’s to make them cheap to explore once they exist.
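In the CLI sketch from earlier, a reframed constraint is nothing more than a different invocation. The flags below are hypothetical, like the rest of the tool:

```
intent explore http-server \
  --recovery-time '<100ms' \
  --startup '<10ms' \
  --variants 5
```

The human supplies the reframing; the tool only makes it cheap to test.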
The Real Question This Raises
If this model works—even partially—it changes what it means to “share” in open source.
You don’t primarily share:
code
repos
packages
You share:
constraints
measurements
failed paths
named tradeoffs
That’s less glamorous. It’s harder to monetize. It’s also closer to knowledge than to product.
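Concretely, the shared artifact could be as small as a published design record. The shape below, numbers included, is invented for illustration:

```typescript
// A shareable design record: constraints, measurements, failed paths,
// and named tradeoffs. Shape and numbers are invented for illustration.
const record = {
  capability: "http-server",
  constraints: { latencyP99Ms: 5, memoryMb: 50, concurrency: 20_000 },
  measurements: {
    variantA: { p99Ms: 3.1, memoryMb: 62 }, // fast, but over the memory budget
    variantB: { p99Ms: 4.8, memoryMb: 41 }, // inside every boundary
  },
  failedPaths: [
    "thread-per-connection: collapsed above ~8k concurrent connections",
  ],
  tradeoffs: [
    "buffer pooling cut p99 latency but made leaks harder to debug",
  ],
};
```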
This Is Still Unfinished
This isn’t a call to burn npm or rewrite everything.
It’s a thought experiment grounded in a practical observation:
LLMs make copying cheap again. That forces us to decide what was valuable before copying was expensive.
If the answer is “ideas and judgment,” then our tools—and our open source culture—should reflect that.
The rest is iteration.