Asher Cohen

Open Source After Dependencies: Rethinking Freedom in an LLM-Native World

A future where depending on code no longer means surrendering control over it

For most of my career, “open source” meant freedom.

Freedom to read code. Freedom to change it. Freedom to build on top of other people’s work.

Somewhere along the way, that freedom quietly turned into a different kind of dependency.

Today, even a trivial product relies on hundreds, sometimes thousands, of indirect dependencies. Each one represents decisions I didn’t make, tradeoffs I didn’t evaluate, and maintenance costs I silently accepted. Bundlers, frameworks, UI kits, and “best practices” don’t just help us move faster; they shape our products in ways that are hard to see and even harder to undo.

This article is an attempt to explore a different future—one where LLMs don’t replace open source, but force us to reconsider what open source is actually for.


The Dependency Problem Isn’t Technical, It’s Cognitive

We often frame dependency sprawl as:

a security issue

a maintenance issue

a supply-chain issue

But at its core, it’s a cognitive load problem.

To make a “simple” decision—say, upgrading a router or HTTP client—I’m expected to:

understand a large API surface

know the ecosystem politics

assess maintenance health

trust transitive dependencies I’ve never seen

This doesn’t scale. Not for individuals, not for teams.

And worse: it subtly discourages thinking. The safest path becomes “use what everyone else uses,” even when it’s not a great fit.


LLMs Change the Economics of Repetition

LLMs are bad at true invention. They don’t have taste, intuition, or intent. They remix what already exists.

But here’s the thing: most of what we do in software is repetition.

parsing

routing

formatting

state machines

protocol glue

UI primitives

We keep re-solving the same problems, just with slightly different constraints. Open source helped by letting us share solutions. LLMs change the game by making it cheap to re-generate them.

This suggests a shift:

from depending on code

to depending on intent


From Packages to Intent

Instead of saying:

“I use library X”

What if we said:

“I need an HTTP server optimized for low latency under moderate concurrency, with predictable memory usage.”

That intent is stable. The implementation is not.

In an LLM-native model:

the intent is declared explicitly

code is generated and vendored into the repo

tests, benchmarks, and documentation are generated alongside it

regeneration is always possible

The code becomes owned, not borrowed.
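To make that workflow concrete, here is a minimal sketch of a regeneration script, assuming a YAML intent file and a vendor/ directory convention. The file names, the generate() hook, and the lockfile are illustrative assumptions rather than a proposed standard; the point is only that the intent is the committed source of truth and the generated code lives in the repo.

# regen.py - hypothetical sketch: turn a declared intent into vendored, owned code.
# Assumes PyYAML for reading the intent file; generate() is a placeholder for
# whatever model or client you actually use.
import hashlib
import json
import pathlib

import yaml


def generate(prompt: str) -> str:
    # Placeholder LLM call; returning a stub keeps the sketch runnable offline.
    return f"# generated from a {len(prompt)}-character prompt; replace with real output\n"


def regenerate(intent_path: str = "intent.yaml", out_dir: str = "vendor/http_server") -> None:
    intent = yaml.safe_load(pathlib.Path(intent_path).read_text())
    prompt = "Implement this capability:\n" + json.dumps(intent, indent=2)

    out = pathlib.Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)

    # Generated artifacts are committed alongside the intent: owned, not borrowed.
    out.joinpath("impl.py").write_text(generate(prompt + "\nReturn only the implementation."))
    out.joinpath("test_impl.py").write_text(generate(prompt + "\nReturn only the tests."))
    out.joinpath("bench.py").write_text(generate(prompt + "\nReturn only the benchmark harness."))

    # Record which intent produced this code, so regeneration stays reproducible
    # and drift between intent and implementation is visible in review.
    digest = hashlib.sha256(json.dumps(intent, sort_keys=True).encode()).hexdigest()
    out.joinpath("INTENT.lock").write_text(digest + "\n")


if __name__ == "__main__":
    regenerate()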

This alone solves a large part of the dependency problem—but it raises a more uncomfortable question.


Where Does Creativity Go?

Open source isn’t just about not reinventing the wheel. It’s about inventing better wheels.

If LLMs mostly recombine known solutions, how do we:

discover new approaches?

avoid converging on “average” designs?

ensure we’re choosing the best solution for our product, not just a plausible one?

This is where the story gets interesting.


Innovation Doesn’t Come From Generation, It Comes From Selection

Most breakthroughs in engineering don’t come from a single flash of genius. They come from:

exploring a design space

trying weird variations

measuring what actually works

keeping what survives pressure

LLMs are terrible at deciding what matters. They are very good at enumerating possibilities.

So instead of asking an LLM for the solution, we can ask it for many different ones.

Different architectures. Different tradeoffs. Different assumptions.

Then we let reality—not popularity—choose.

Benchmarks. Failure modes. Operational pain.

This turns code generation into an evolutionary process, where humans define the direction, and machines accelerate the search.
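A compressed sketch of that loop, with hypothetical generate_variant() and run_benchmark() hooks standing in for the model call and the measurement harness, and illustrative constraint numbers:

# evolve.py - hypothetical sketch of selection by measurement over generated variants.
from dataclasses import dataclass


@dataclass
class Result:
    strategy: str
    p99_latency_ms: float
    peak_memory_mb: float


def generate_variant(strategy: str) -> str:
    # Placeholder: ask the model for an implementation built around this strategy.
    return f"// implementation sketch using {strategy}"


def run_benchmark(source: str, strategy: str) -> Result:
    # Placeholder: build and load-test the variant, then record what actually happened.
    return Result(strategy=strategy, p99_latency_ms=4.2, peak_memory_mb=38.0)


STRATEGIES = ["epoll async", "thread-per-core", "io_uring", "minimal blocking", "zero-copy"]


def survivors(max_p99_ms: float = 5.0, max_memory_mb: float = 50.0) -> list[Result]:
    results = [run_benchmark(generate_variant(s), s) for s in STRATEGIES]
    # Reality, not popularity, does the choosing: keep only what survives the constraints.
    return [
        r for r in results
        if r.p99_latency_ms <= max_p99_ms and r.peak_memory_mb <= max_memory_mb
    ]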


A Concrete Example: HTTP Servers

Consider something as “solved” as an HTTP server.

Today, the decision looks like:

pick the popular framework

accept its abstractions

inherit its performance profile

live with its roadmap

In an intent-driven, evolutionary model, the starting point is different:

intent:
  capability: http-server
  constraints:
    latency_p99_ms: <5
    concurrency: 20_000
    memory_mb: <50
    deployment: single-binary
  explore:
    variants: 5

An LLM generates multiple implementations:

epoll-based async

thread-per-core

io_uring experiment

minimal blocking design

aggressively inlined, zero-copy variant

Each ships with:

benchmarks

memory profiles

failure modes

a short explanation of tradeoffs

No single one is “the best”. But together they define a Pareto frontier.
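To make “Pareto frontier” concrete, here is a small sketch that keeps only the variants no other variant strictly beats on every measured axis. The variant names echo the list above; the numbers are invented purely for illustration.

# pareto.py - hypothetical sketch: filter benchmark results down to the Pareto frontier.
# All numbers are illustrative, not real measurements. Lower is better on every metric.

variants = {
    "epoll-async":       {"p99_ms": 3.8, "mem_mb": 62, "startup_ms": 40},
    "thread-per-core":   {"p99_ms": 2.0, "mem_mb": 85, "startup_ms": 55},
    "io_uring":          {"p99_ms": 2.4, "mem_mb": 55, "startup_ms": 35},
    "minimal-blocking":  {"p99_ms": 7.5, "mem_mb": 24, "startup_ms": 12},
    "zero-copy-inlined": {"p99_ms": 2.2, "mem_mb": 95, "startup_ms": 70},
}


def dominates(a: dict, b: dict) -> bool:
    # a dominates b if it is at least as good everywhere and strictly better somewhere.
    return all(a[k] <= b[k] for k in a) and any(a[k] < b[k] for k in a)


frontier = {
    name: metrics
    for name, metrics in variants.items()
    if not any(dominates(other, metrics)
               for other_name, other in variants.items() if other_name != name)
}

# Each survivor represents a tradeoff no other variant strictly beats;
# the dominated ones (here, epoll-async and zero-copy-inlined) drop out.
print(sorted(frontier))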

At that point, humans do what they’re good at:

deciding what matters for this product

noticing weird behaviors

pushing new constraints (“What if startup time matters more than throughput?”)

That’s not automation replacing creativity. That’s creativity being given better tools.


Open Source as a Map of Design Space

In this model, open source changes role.

Instead of publishing:

“Here’s my HTTP server library”

You publish:

“Here’s a new point in the HTTP server design space, optimized for X, breaking assumption Y.”

The value shifts from:

maintaining code forever

to:

articulating new constraints

exposing new tradeoffs

naming patterns others hadn’t noticed

Forking becomes cheap. Variants are expected. Popularity stops being the main signal.

That feels closer to the original spirit of open source than today’s dependency monoculture.


A Note on “Open Source” Incentives (The Tailwind Example)

Recent events around Tailwind are instructive.

When Tailwind Labs laid off roughly 75% of its team after traffic shifted toward paid products, it surfaced something many of us intuitively felt already: the core value wasn’t the open source code, it was the distribution channel.

This isn’t a moral judgment. It’s an incentive reality.

The moment an open source project’s survival depends on:

brand gravity

ecosystem lock-in

paid extensions

…it stops being about shared ownership of solutions and starts being about controlled leverage.

An intent- and variant-driven model weakens this dynamic:

value lives in ideas and constraints, not traffic

no single implementation becomes the choke point

innovation is harder to monetize exclusively, but easier to share meaningfully

That’s uncomfortable—but arguably healthier.


Counterarguments (And Why They Matter)

“This will lead to mediocre, average solutions”

Only if selection is intellectual instead of empirical. Measurement kills mediocrity faster than opinion ever did.


“This fragments the ecosystem”

It does—but intentionally.

Fragmentation at the implementation level is fine if the interfaces and benchmarks are shared. Today we fragment APIs instead, which is far worse.


“Most teams don’t want to think about this”

True. And they already don’t.

They just pay the cost later—during incidents, rewrites, or forced migrations. This model front-loads thinking and back-loads stability.


“LLMs will converge to the same patterns anyway”

They will—unless we force diversity:

dissimilarity constraints

human-seeded mutations

explicit exploration budgets

Convergence is a system design choice, not an inevitability.
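One crude way to force it, as a sketch: a dissimilarity gate that rejects a candidate variant when it looks too much like one already accepted. Token-set Jaccard similarity and the 0.7 threshold are illustrative stand-ins for whatever diversity signal you actually trust (AST distance, embedding distance, reviewer judgment).

# diversity.py - hypothetical sketch of a dissimilarity constraint plus an exploration budget.
import re


def token_set(source: str) -> set[str]:
    # Crude proxy for the "shape" of the code: the set of identifiers it uses.
    return set(re.findall(r"[A-Za-z_]\w*", source))


def too_similar(candidate: str, accepted: list[str], threshold: float = 0.7) -> bool:
    cand = token_set(candidate)
    for prior in accepted:
        prev = token_set(prior)
        union = cand | prev
        jaccard = len(cand & prev) / len(union) if union else 1.0
        if jaccard >= threshold:
            return True
    return False


def collect_diverse(candidates: list[str], budget: int = 5) -> list[str]:
    # Keep at most `budget` variants, skipping anything too close to what we already have.
    accepted: list[str] = []
    for source in candidates:
        if len(accepted) == budget:
            break
        if not too_similar(source, accepted):
            accepted.append(source)
    return accepted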


What Humans Still Do (And Always Will)

LLMs don’t replace creativity because creativity doesn’t live in code.

It lives in:

noticing pain

questioning defaults

redefining “what matters”

inventing new axes of optimization

Humans decide:

what constraints exist

what tradeoffs are acceptable

what “better” even means

Machines just help us explore the consequences faster.


This Isn’t a Finished Answer

I don’t think this replaces today’s ecosystem overnight. I don’t even think it should.

But I do think LLMs give us a chance to:

stop outsourcing thinking to dependencies

regain ownership of our codebases

make open source about ideas again, not artifacts

This article isn’t a proposal. It’s a question:

If we didn’t have to depend on code anymore, what would we choose to share?