The What/How Loop: Why LLMs Can’t Replace Understanding in Software Development


Martin Fowler’s conversation with Unmesh Joshi and Rebecca Parsons cuts through the hype around LLM-assisted programming to identify a fundamental truth: programming isn’t about translating requirements into syntax—it’s about building systems that survive change.

The Core Insight

At its core, programming is mapping the “real” domain (the What) onto a computational model (the How). But this isn’t a one-way translation. It’s a continuous feedback loop where understanding the mechanism reveals the true nature of the domain, and understanding the domain clarifies which mechanisms to use.

People new to programming—and crucially, people who hire programmers—often think of development as linear requirements-to-code translation. This misconception manifests in phrases like “human in the loop,” implying the real work happens in the LLM with humans just cleaning up failures.

But as the conversation makes clear: the real challenge isn’t converting requirements to code—it’s building systems that accommodate future change through proper abstraction and structure.

Why This Matters

Unmesh’s experiments with LLMs reveal a telling pattern:

When he asked an LLM to derive an implementation for MinIO (an object store), it produced procedural, hard-to-understand code. When he wrote it himself step by step, he ended up with “fewer and crisper abstractions” that were easier to read and evolve.

When pushed to refactor, LLMs often swing to the opposite extreme—creating too many classes and layers, making designs unnecessarily complicated.

The core problem: LLMs can make code work for a given scenario, but they don’t build structure that accommodates future scenarios. Prompt-driven code satisfies the current test; it doesn’t create the modularity, naming, and abstraction boundaries that make systems maintainable.

This is why using LLMs to generate test cases for “coverage” misses the point. The goal isn’t passing tests—it’s solidifying solution structure so the “how” can evolve without breaking the “what.”

Key Takeaways

The What/How loop operates at every level:
– System level: What is the user trying to achieve?
– Module level: What is this component supposed to do?
– Function level: What is this specific block for?

Understanding shapes naming, and naming shapes understanding. The answer to “what” determines logical grouping and, most importantly, the vocabulary of your system. LLMs struggle to develop this vocabulary because they’re optimizing for immediate output, not long-term clarity.

Representation choices matter enormously. Mapping a domain to a state machine vs. a stream vs. a log changes how you reason about problems. These choices are where solution structure emerges—and they require understanding both the domain and the computational options.
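
To make this concrete, here is a minimal sketch (mine, not from the conversation) mapping one hypothetical order-fulfilment domain onto two different computational models; the OrderState and OrderLog names are invented for illustration:

```python
from enum import Enum, auto
from dataclasses import dataclass, field

# Representation 1: a state machine. Asks "where is the order right now?"
class OrderState(Enum):
    PLACED = auto()
    PAID = auto()
    SHIPPED = auto()

TRANSITIONS = {
    (OrderState.PLACED, "pay"): OrderState.PAID,
    (OrderState.PAID, "ship"): OrderState.SHIPPED,
}

def apply_event(state: OrderState, event: str) -> OrderState:
    try:
        return TRANSITIONS[(state, event)]
    except KeyError:
        raise ValueError(f"cannot {event!r} while {state.name}")

# Representation 2: an event log. Asks "what happened, in what order?"
@dataclass
class OrderLog:
    events: list[str] = field(default_factory=list)

    def record(self, event: str) -> None:
        self.events.append(event)

    def current_state(self) -> OrderState:
        # Derive the state by replaying history: auditing and
        # time-travel queries come for free, eager validation does not.
        state = OrderState.PLACED
        for event in self.events:
            state = apply_event(state, event)
        return state
```

The state machine rejects invalid transitions the moment they happen; the log treats history as a first-class value and derives state by replay. Neither is the right choice until you understand what the domain actually needs.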

TDD operationalizes the what/how feedback loop (see the sketch after this list):
– Writing tests first forces you to answer “what” before getting distracted by implementation
– Making tests pass lets you iterate on “how”
– Refactoring reveals when your API design was awkward or leaky
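
As a minimal illustration, here is the loop applied to a hypothetical slugify() helper (the example is invented, not from the conversation):

```python
import unittest

# Step 1 (the "what"): the test states the desired behaviour before
# any implementation exists. Run it now and it fails: slugify is undefined.
class SlugifyTest(unittest.TestCase):
    def test_lowercases_and_joins_words(self):
        self.assertEqual(slugify("Hello World"), "hello-world")

    def test_strips_punctuation(self):
        self.assertEqual(slugify("What/How Loop!"), "what-how-loop")

# Step 2 (the "how"): the simplest implementation that makes the tests pass.
# Step 3 would be refactoring this freely, with the tests as a safety net.
def slugify(title: str) -> str:
    words = "".join(c if c.isalnum() else " " for c in title).split()
    return "-".join(w.lower() for w in words)

if __name__ == "__main__":
    unittest.main()
```

The tests pin down the “what” (the name, signature, and expected behaviour); the body of slugify is then free to change, so long as the tests keep passing.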

Declarative programming emerges from stable abstractions. Once abstractions settle, programming increasingly looks like expressing intent through established vocabulary (SQL for queries, makefiles for builds). LLMs work better in these domains precisely because the what/how mapping is already solved.
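
A small contrast (again invented for illustration, using a hypothetical orders table) shows why: once the relational abstraction is stable, the declarative version states intent and leaves the execution strategy to the engine.

```python
import sqlite3

# Hypothetical data: three orders in an in-memory SQLite database.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, total REAL, state TEXT)")
conn.executemany("INSERT INTO orders VALUES (?, ?, ?)",
                 [(1, 20.0, "PAID"), (2, 5.0, "PLACED"), (3, 12.5, "PAID")])

# Imperative: spell out *how* to walk the rows and accumulate.
paid_total = 0.0
for _id, total, state in conn.execute("SELECT * FROM orders"):
    if state == "PAID":
        paid_total += total

# Declarative: state *what* you want; the engine decides the how.
(paid_total_sql,) = conn.execute(
    "SELECT SUM(total) FROM orders WHERE state = 'PAID'").fetchone()

assert paid_total == paid_total_sql == 32.5
```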

Looking Ahead

The conversation offers a more nuanced view of LLM-assisted development than the typical “AI will replace programmers” narrative:

Use LLMs as a translation layer inside your what/how loop. They’re excellent for quickly sketching first versions, but you still need to shape structure through writing and refactoring—“because the code I keep is the code I can explain, test, and change with confidence.”

Cognitive load management requires human judgment. Decomposing systems into understandable modules, choosing appropriate abstraction levels, and creating domain-specific vocabularies all require understanding that persists across conversations.

The most important skill isn’t prompt engineering—it’s knowing what to build. LLMs accelerate translation once you know what you want; they don’t replace the iterative discovery process through which you figure that out.

For developers worried about AI displacement: the work that matters most—building systems that survive change through proper abstraction—remains fundamentally human. LLMs are tools within that process, not replacements for it.


Based on analysis of “Conversation: LLMs and the what/how loop” from Martin Fowler’s blog

