Martin Fowler’s Fragments: AI and the Future of Software
What happens to software development when the code itself becomes non-deterministic? Martin Fowler’s latest “Fragments” gathers industry voices on AI’s impact—and the picture is more nuanced than hype or doom.
The Core Insight
At a recent gathering of software leaders (operating under Chatham House Rule), the conversation wasn’t about AI replacing developers. It was about something more subtle: cognitive debt, model building, and the very nature of source code.
One attendee’s quip captured the mood: “LLMs are drug dealers—they give us stuff, but don’t care about the resulting system or the humans that develop and use it.”
Why This Matters
The software industry is experiencing its biggest paradigm shift since object-oriented programming. But unlike previous shifts, this one comes with a twist: the outputs are non-deterministic. Run the same prompt twice, get different code. This breaks fundamental assumptions about source control, testing, and reproducibility.
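The non-determinism point can be made concrete with a toy sketch. The `mock_llm` helper below is hypothetical (real models sample tokens from a probability distribution, but the effect is the same): with unseeded sampling, the same prompt can yield different code each run, while pinning a seed or using greedy decoding restores reproducibility.

```python
import random

def mock_llm(prompt: str, temperature: float = 1.0, seed=None) -> str:
    """Toy stand-in for an LLM: picks among equally plausible completions.

    Hypothetical helper for illustration only; it does not call a real model.
    """
    rng = random.Random(seed)  # seed=None draws fresh entropy each call
    completions = [
        "total = 0\nfor x in xs:\n    total += x",
        "total = sum(xs)",
        "total = functools.reduce(operator.add, xs, 0)",
    ]
    if temperature == 0.0:
        # Greedy decoding: always return the top-ranked candidate.
        return completions[0]
    return rng.choice(completions)

# Same prompt twice with no fixed seed: the outputs may differ between runs,
# which is exactly what breaks diff-based source control and reproducibility.
a = mock_llm("sum a list of numbers")
b = mock_llm("sum a list of numbers")

# Pinning the seed (or dropping temperature to 0) makes generation repeatable.
c = mock_llm("sum a list of numbers", seed=42)
d = mock_llm("sum a list of numbers", seed=42)
assert c == d
```

Production APIs expose the same levers (sampling temperature, and sometimes a seed parameter), though vendors generally do not guarantee bit-for-bit reproducibility across model versions.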
Key questions emerging:
– Cognitive debt: When LLMs generate code, do teams still learn the domain?
– Model building: Is this core skill being devalued?
– Source code’s future: Will prompts replace .java files?
Key Takeaways
On Cognitive Debt:
The TDD cycle includes a crucial refactoring step where developers consolidate understanding into the codebase. With LLMs writing code, do we lose this learning? One suggestion: ask the LLM to explain its code—perhaps as a fairy tale.
On Non-Determinism:
Prompts and natural language can elicit behavior, but also introduce non-determinism. Is there still a role for persistent, deterministic representations of software behavior?
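One pragmatic reading of that question: even if generation is non-deterministic, behavior can still be pinned deterministically. A minimal sketch (all names hypothetical) where a persisted spec of input/output pairs accepts any generated implementation that matches it, regardless of which variant the model happened to emit:

```python
# Two different implementations an LLM might emit for the same prompt.
def impl_a(xs):
    total = 0
    for x in xs:
        total += x
    return total

def impl_b(xs):
    return sum(xs)

# A persistent, deterministic representation of the desired behavior:
# pinned (input, expected output) pairs, checked into source control.
SPEC = [
    ([], 0),
    ([1, 2, 3], 6),
    ([-5, 5], 0),
]

def conforms(impl) -> bool:
    """True if the candidate implementation matches every pinned case."""
    return all(impl(inp) == expected for inp, expected in SPEC)

# Either generated variant is acceptable; the spec, not the code, is stable.
assert conforms(impl_a) and conforms(impl_b)
```

This is essentially a characterization-test discipline: the deterministic artifact worth versioning shifts from the generated code to the executable description of its behavior.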
On Language Workbenches:
Nearly two decades ago, “language workbenches” promised tools that persisted a semantic model (not necessarily human-readable). Projectional editors created human-readable views. Could this be the future of “source code”—designed to maximize expression with minimal tokens?
On Fun:
Programmers worry that LLMs will remove the joy of programming. Fowler notes that delivering useful features will improve. But model building—creating abstractions that help us reason about domains—might be at risk. Or it may become essential for working effectively with LLMs.
Looking Ahead
The conversation reveals more questions than answers:
– How do we maintain understanding when LLMs write code?
– What does “source code” mean in an AI age?
– How do we handle non-determinism in critical systems?
The honest answer: we don’t know yet. But these discussions—honest, skeptical, nuanced—are exactly what the industry needs.
Based on analysis of Martin Fowler’s Fragments (February 9, 2026)