Open Source After Dependencies (Part III): Where This Breaks, Who Pushes Back, and What a Minimal Path Forward Looks Like
Problem Solved or Not
After the first two articles, a pattern emerged in the feedback.
People generally agree on the diagnosis:
dependency graphs are out of control
incentives in open source are distorted
LLMs change the economics of reuse
Where things get tense is here:
“This sounds nice in theory, but it ignores how open source actually survives.”
This article pressure-tests the idea by looking at where it breaks, who loses, and what a minimal, non-utopian version could look like.
A Maintainer’s Critique (Steelman, Not Strawman)
Let’s take the strongest possible objection, stated plainly:
“You’re proposing a world where the hard, thankless work of maintaining software is replaced by vague ‘curation’ and ‘ideas’. That’s not how production systems survive.”
This critique is valid.
Maintenance today involves:
deep system knowledge
years of bug archaeology
context no benchmark captures
responsibility when things break at 3am
A generated variant that “passes tests” is not the same as a battle-hardened system.
So here’s the uncomfortable truth:
This model is worse for long-lived, highly stateful, deeply entangled systems.
Databases, distributed consensus systems, kernels—these do not regenerate cleanly. Their value is accumulated scar tissue.
If this paradigm pretends otherwise, it fails.
Where the Model Explicitly Does Not Apply
To be credible, the boundaries must be clear.
This approach is a bad fit for:
distributed databases
consensus algorithms
complex stateful runtimes
systems where correctness is mostly emergent
In those domains:
maintenance is the product
code history matters
intuition beats enumeration
That’s not a flaw. It’s a scope decision.
Where this does apply is the vast middle of software:
glue code
infrastructure edges
protocol adapters
UI components
request lifecycles
build tooling
internal services
In other words: most dependencies.
A Concrete Failure Case
Imagine this scenario:
A team uses an intent-driven HTTP server variant optimized for low latency. It benchmarks beautifully. Six months later, they hit a rare production deadlock under partial network failure.
A traditional library:
has years of issue history
maybe already hit this edge case
has institutional memory
A generated variant:
does not
This is a real risk.
Mitigation is not denial; it’s architecture:
generated code must be small
variants must document failure modes
escape hatches must exist
migration must be cheap
This model trades accumulated robustness for reduced complexity. That trade must be explicit.
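To make "escape hatches" and "cheap migration" concrete, here is one possible shape, sketched in TypeScript with hypothetical module paths and names: keep the generated variant and a battle-tested fallback behind the same small capability interface, so swapping them is a one-line change.

```typescript
// Module paths and class names here are illustrative, not a real package layout.
import { GeneratedLowLatencyServer } from "./vendor/http-server-generated";
import { NodeHttpAdapter } from "./adapters/node-http";

// The capability interface is the only surface the rest of the application touches.
export interface HttpServer {
  listen(port: number): Promise<void>;
  close(): Promise<void>;
}

// Escape hatch: if the generated variant misbehaves in production, flipping one
// environment variable migrates back to the battle-tested implementation,
// because nothing outside this module knows which one is in use.
export const server: HttpServer =
  process.env.USE_FALLBACK_HTTP === "1"
    ? new NodeHttpAdapter()
    : new GeneratedLowLatencyServer();
```

The specific mechanism matters less than the property it buys: the blast radius of a generated variant is bounded by an interface somebody actually read.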
UI Is Where This Model Shines (And Why That Matters)
Backend engineers tend to focus on servers. The UI ecosystem is where dependency collapse is most visible.
Today, a button pulls in:
design systems
styling frameworks
accessibility layers
build-time plugins
And yet, UI components are:
mostly finite
highly spec-driven
constrained by standards
visually inspectable
A UI intent might look like:
```yaml
component:
  name: Button
  platform: web
  accessibility: WCAG-AA
  theming: css-vars
  states:
    - hover
    - active
    - disabled
  constraints:
    no_runtime_styles: true
```
The output:
one React component
one CSS file
one a11y checklist
no dependencies
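As a sketch of what that output could look like (component shape, file names, and class names are illustrative, not prescribed by the intent):

```tsx
// Button.tsx: a possible generated artifact. One component, no runtime dependencies
// beyond React itself. State styling (hover, active, disabled) lives in button.css
// as CSS variables, satisfying the no_runtime_styles constraint.
import * as React from "react";
import "./button.css";

type ButtonProps = {
  label: string;
  disabled?: boolean;
  onClick?: () => void;
};

export function Button({ label, disabled = false, onClick }: ButtonProps) {
  // A native <button> with the disabled attribute is already exposed correctly
  // to assistive technology, which covers the WCAG-AA requirement for this state.
  return (
    <button type="button" className="btn" disabled={disabled} onClick={onClick}>
      {label}
    </button>
  );
}
```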
This is not theoretical. This is already how senior teams wish UI worked.
Here, regeneration is a feature, not a risk.
The Real Cultural Resistance
The hardest resistance isn’t technical.
It’s this:
“If everyone can generate implementations, what makes my work valuable?”
Historically, open source rewarded:
being early
being central
being depended on
This model rewards:
being insightful
naming tradeoffs
defining constraints
publishing measurements
That’s a different status economy.
Not everyone will like that shift—and that’s okay. Paradigms don’t replace each other cleanly. They overlap.
A Minimal Spec (No Platform, No Revolution)
To keep this grounded, here’s the smallest possible slice that could exist today.
- Intent File
```yaml
capability: http-server
constraints:
  latency_p99_ms: <5
  memory_mb: <50
```
- Capability Spec
interface definitions
invariants
test suite
benchmark harness
No code. Just expectations.
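As a sketch (in TypeScript, with illustrative names and invariants), a capability spec could be nothing more than an interface plus executable expectations:

```typescript
// http-server.spec.ts: ships expectations, not an implementation.

// Interface definition: what any variant, generated or hand-written, must expose.
export interface HttpServer {
  listen(port: number): Promise<void>;
  close(): Promise<void>;
}

// Invariants: plain executable checks, so they can run against any candidate
// without pulling in a test framework.
export async function checkInvariants(server: HttpServer): Promise<void> {
  await server.listen(0); // must bind an ephemeral port without throwing
  await server.close();   // must shut down cleanly
  await server.close();   // a second close must be a no-op, not a crash
}

// Benchmark harness output: the numbers the intent file constrains.
export interface BenchmarkResult {
  latencyP99Ms: number;
  memoryMb: number;
}
```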
- Local Generation
LLM generates vendored code
must pass tests
must expose benchmarks
No registry required. No startup. No network effect.
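The local loop itself could be a short script. A minimal sketch, assuming the team supplies its own model client, vendoring step, and benchmark runner (nothing here is a real API):

```typescript
import { checkInvariants, type HttpServer, type BenchmarkResult } from "./http-server.spec";

// Intent as parsed from the YAML file above.
type Intent = {
  capability: string;
  constraints: { latency_p99_ms: number; memory_mb: number };
};

// The helpers are parameters on purpose: whatever model client and build tooling
// a team already has can be plugged in.
export async function generateAndValidate(
  intent: Intent,
  generate: (intent: Intent) => Promise<string>,        // ask an LLM for source code
  vendor: (source: string) => Promise<HttpServer>,      // write it into vendor/ and load it
  bench: (server: HttpServer) => Promise<BenchmarkResult>,
): Promise<BenchmarkResult> {
  const source = await generate(intent);
  const variant = await vendor(source);

  await checkInvariants(variant);                       // must pass the capability spec

  const result = await bench(variant);                  // must expose benchmarks
  if (
    result.latencyP99Ms >= intent.constraints.latency_p99_ms ||
    result.memoryMb >= intent.constraints.memory_mb
  ) {
    throw new Error("Candidate violates the intent constraints: regenerate or revise the intent.");
  }
  return result; // the vendored code and its numbers get committed together
}
```

If a candidate fails, nothing is published anywhere; the loop simply runs again, or the intent gets revised.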
If this fails to add value at this scale, it deserves to die.
What This Asks From Us
This model asks developers to:
accept more responsibility
think in tradeoffs, not brands
read smaller code instead of trusting big abstractions
That’s not easier. It’s more honest.
And it aligns better with what experienced engineers already do—just without forcing everyone else to inherit their private context.
The Question Beneath All of This
The real question isn’t about LLMs.
It’s this:
Do we want open source to optimize for reuse, or for understanding?
Dependencies maximize reuse. Design spaces maximize understanding.
For a long time, working implementations were the scarce resource, so optimizing for reuse made sense. LLMs flipped that.
Now understanding is scarce again.
Our tools—and our culture—haven’t caught up yet.
Still Unfinished
If this series has a thesis, it’s not a solution. It’s a warning:
When copying becomes free, the value moves elsewhere.
We can cling to the old shapes of open source and fight the tools—or we can reshape the ecosystem around what humans still uniquely provide.
Judgment. Taste. Direction.
Everything else can be generated.