Asher Cohen

Working with LLMs

🧠 The hardest part about building with AI isn’t getting it to work; it’s getting it to improve things without destroying what already works

When you ask an LLM to “refactor this,” “improve that,” or “make it cleaner,” you expect small, surgical edits. What you often get is a rewrite that loses nuance, breaks structure, or wipes out intentional quirks. Whether it's rewriting emotional text into LinkedIn boilerplate, or flattening a custom React component into generic nonsense, the same problem shows up: LLMs don't understand the difference between editing and replacing.

Why? Because LLMs are trained to generate the best next thing, not the smallest valid change. They don’t see authorial intent. They don’t know which parts are sacred and which are expendable. Unless you build in that context, “help” turns into harm.

I’ve tried everything:

  • Prompt engineering (“only fix grammar, keep tone”)

  • System messages

  • Custom DSLs

  • Diff-based prompting (sketched below)

  • Guardrails

They help, but they’re brittle. The truth is, most LLMs don’t edit — they overwrite.
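For context, diff-based prompting just means asking the model to return a patch instead of a rewrite. Here's a minimal sketch of what that looks like in Python; `call_llm` is a stand-in for whatever client and model you use, not a real API.

```python
# Minimal sketch of diff-based prompting: ask for a patch, not a rewrite.
# `call_llm` is a stand-in for whatever client/model you use; not a real API.

DIFF_ONLY_PROMPT = """You are an editor. Do NOT rewrite the document.
Return ONLY a unified diff (---/+++/@@ hunks) against the text below,
containing the smallest change that fixes grammar. Keep tone and structure.

<document>
{document}
</document>
"""

def request_edit_as_diff(document: str, call_llm) -> str:
    """Ask the model for a patch rather than a replacement document."""
    return call_llm(DIFF_ONLY_PROMPT.format(document=document))
```

Even with a prompt like this, nothing stops the model from returning a "diff" that quietly rewrites half the file, which is why prompting alone stays brittle.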

What works better (rough code sketches follow below):

  • Building structured awareness: ASTs for code, segments and labels for writing

  • Freezing parts of a document with hard constraints

  • Using embedding similarity to block destructive changes

  • Reviewing AI suggestions as staged diffs, not in chat bubbles

  • Replacing chat-based UX with tooling that gives real control over scope and intention
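To make the first point concrete: for code, "structured awareness" can be as simple as parsing the file into an AST and recording which functions are off-limits before the model ever sees it. Here is a minimal sketch using Python's ast module; passing in a set of frozen names is just an illustration, not a standard convention.

```python
import ast

def frozen_function_spans(source: str, frozen_names: set[str]) -> list[tuple[int, int]]:
    """Return (start_line, end_line) spans of functions the model must not touch.
    Which names count as frozen is up to you; here they're simply passed in."""
    spans = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)) and node.name in frozen_names:
            spans.append((node.lineno, node.end_lineno))
    return spans

# Illustrative usage: protect a hand-tuned function, let the model touch the rest.
# spans = frozen_function_spans(open("pipeline.py").read(), {"parse_legacy_format"})
```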
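Freezing then becomes a hard check rather than a polite request: after the model returns its edit, verify the protected spans are unchanged and reject the whole edit if they aren't. A sketch, assuming spans are 1-based line ranges into the original text and the model was asked to edit in place (insertions that shift line numbers would need a smarter mapping):

```python
def edit_respects_frozen_spans(original: str, edited: str,
                               frozen_spans: list[tuple[int, int]]) -> bool:
    """Reject any edit that alters a frozen region. Spans are 1-based inclusive
    line ranges into the ORIGINAL text; this assumes the edit kept line positions
    stable (in-place edits only, no insertions above a frozen block)."""
    old_lines = original.splitlines()
    new_lines = edited.splitlines()
    for start, end in frozen_spans:
        if old_lines[start - 1:end] != new_lines[start - 1:end]:
            return False
    return True
```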
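The embedding-similarity gate does the same job for prose: compare each edited segment against its original and block anything that drifts too far in meaning. Another sketch; embed is a placeholder for whatever embedding model you use, and the 0.85 threshold is an arbitrary starting point you would tune per domain.

```python
import numpy as np

def is_destructive(original_segment: str, edited_segment: str,
                   embed, threshold: float = 0.85) -> bool:
    """Flag an edit as destructive when its meaning drifts too far from the
    original. `embed(text) -> np.ndarray` is a placeholder for your embedding
    model; 0.85 is an arbitrary threshold that needs tuning per domain."""
    a, b = embed(original_segment), embed(edited_segment)
    cosine = float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
    return cosine < threshold
```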
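And instead of reading suggestions in a chat bubble, render them as a staged diff you explicitly accept or discard. Python's standard difflib is enough to get started:

```python
import difflib

def stage_suggestion(original: str, suggestion: str, name: str = "document") -> str:
    """Render the model's suggestion as a unified diff for human review,
    instead of silently replacing the original."""
    diff = difflib.unified_diff(
        original.splitlines(keepends=True),
        suggestion.splitlines(keepends=True),
        fromfile=f"{name} (current)",
        tofile=f"{name} (AI suggestion)",
    )
    return "".join(diff)
```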

If you want AI to feel like a collaborator and not a bull in a china shop, the core idea is this: be explicit about what should never change — and give it architectural boundaries to work within.

AI won't magically respect your style, your constraints, or your edge cases — unless you teach it how, and where to stop.

This principle is now baked into how I design tools and workflows involving LLMs — from content pipelines to internal dev tooling. And honestly, it applies far beyond code.

Curious how others are dealing with this: Have you found patterns that help AI improve without erasing?