Context Engineering: The New Skill That Separates Good Developers from Great Ones

4 min read

Here’s an uncomfortable truth: the best prompt you’ve ever written is probably already outdated. As coding agents evolve at breakneck speed, a new discipline is emerging that’s far more important than prompt engineering—and most developers haven’t even heard of it yet.

The Core Insight

Martin Fowler’s team at Thoughtworks has coined a term that deserves your attention: Context Engineering. Their simple definition cuts right to the heart of it: “Context engineering is curating what the model sees so that you get a better result.”

This isn’t just about writing better prompts. It’s about architecting the entire information environment your AI assistant operates in. Think of it less like writing a letter and more like designing a cockpit—every dial, every screen, every piece of information carefully positioned for maximum effectiveness.

The landscape has exploded in the past few months. Claude Code alone now offers CLAUDE.md files, modular rules, skills, subagents, MCP servers, hooks, and plugins. That’s not feature bloat—that’s the acknowledgment that “one-shot prompts” were never going to cut it for serious work.

Why This Matters

The control problem is real. Context engineering exists because LLMs are probabilistic, not deterministic. You can configure the perfect instruction set, but execution still depends on how the model interprets it. As Fowler’s team puts it: “We can never be certain of anything—we still need to think in probabilities.”

But here’s the counterintuitive insight: less is often more. Even though context windows have grown enormous, dumping everything in there degrades performance. The model’s effectiveness goes down with too much context, and your costs go up. The art is in the curation.

Three questions now matter more than “what should I prompt?” (see the sketch after this list):
1. Who decides to load context? (LLM? Human? The agent software itself?)
2. When is it loaded? (Always? On-demand? Based on file paths?)
3. How much is too much? (What’s the signal-to-noise ratio?)
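
To make these questions concrete, here is a minimal, hypothetical TypeScript sketch of the kind of logic an agent harness might run. None of the names below (RuleFile, selectRules, the token budget) come from Claude Code or any other real tool; they simply illustrate path-scoped, on-demand loading with an explicit cap on how much context gets in.

    // Hypothetical sketch of path-scoped, on-demand context loading with a token budget.
    // None of these names come from a real tool; they only illustrate the
    // who / when / how-much decisions above.
    interface RuleFile {
      path: string;       // e.g. "rules/typescript.md"
      appliesTo: RegExp;  // file pattern that triggers the rule
      content: string;    // the instructions themselves
      priority: number;   // higher wins when the budget is tight
    }

    const allRules: RuleFile[] = [
      { path: "rules/typescript.md", appliesTo: /\.tsx?$/, content: "Prefer strict types.", priority: 2 },
      { path: "rules/shell.md", appliesTo: /\.sh$/, content: "Use set -euo pipefail.", priority: 1 },
    ];

    // Rough token estimate; a real harness would use the model's own tokenizer.
    const estimateTokens = (text: string): number => Math.ceil(text.length / 4);

    function selectRules(rules: RuleFile[], editedFiles: string[], tokenBudget: number): RuleFile[] {
      // "When is it loaded?" -- only rules whose pattern matches a file being touched.
      const relevant = rules.filter((rule) =>
        editedFiles.some((file) => rule.appliesTo.test(file)),
      );

      // "How much is too much?" -- take the highest-priority rules until the budget runs out.
      const selected: RuleFile[] = [];
      let used = 0;
      for (const rule of [...relevant].sort((a, b) => b.priority - a.priority)) {
        const cost = estimateTokens(rule.content);
        if (used + cost > tokenBudget) continue;
        selected.push(rule);
        used += cost;
      }
      return selected;
    }

    // "Who decides?" -- here it is the agent software, not the LLM and not the human.
    console.log(selectRules(allRules, ["src/api/client.ts"], 2_000).map((r) => r.path));

Real tools answer these questions differently (the human in a rules file, the LLM with skills, the harness with path-scoped rules), which is exactly why the takeaways below matter.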

Key Takeaways

  • CLAUDE.md is table stakes. Every major coding assistant now has a “rules file” equivalent. The emerging standard is AGENTS.md for cross-tool compatibility.

  • Path-scoped rules are the next evolution. Instead of one giant instruction file, rules can now trigger based on file patterns (.ts for TypeScript, .sh for shell scripts). This keeps context lean.

  • Skills beat slash commands. The trend is toward “lazy loading”—letting the LLM decide when to pull in additional context based on relevance. It’s more efficient and more intelligent.

  • Subagents enable true orchestration. Running code review in a separate context with a different model gives you a “second opinion” without the baggage of your original session.

  • Hooks bring determinism back. When you need something to happen every single time (like running prettier after editing a JS file), don’t rely on the LLM to remember—automate it.
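
To illustrate that last point, here is a small, hypothetical TypeScript script that a post-edit hook could invoke. It assumes prettier is installed and that the hook passes the edited file’s path as a command-line argument; the wiring into any particular tool’s hook configuration is deliberately left out, since that varies by product.

    // format-hook.ts -- hypothetical hook target, not any tool's built-in API.
    // Given the path of a file the agent just edited, deterministically run prettier
    // on it instead of hoping the model remembers to.
    import { execFileSync } from "node:child_process";
    import { extname } from "node:path";

    const editedFile = process.argv[2];
    const formattable = new Set([".js", ".jsx", ".ts", ".tsx", ".json", ".md"]);

    if (editedFile && formattable.has(extname(editedFile))) {
      // Runs every single time, regardless of what the LLM "decided" in its reply.
      execFileSync("prettier", ["--write", editedFile], { stdio: "inherit" });
    }

The value is the determinism: the formatting step lives outside the model’s context entirely, so it cannot be skipped, forgotten, or second-guessed.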

Looking Ahead

We’re in what Fowler’s team calls a “storming” phase—too many features, too many approaches, not yet enough convergence on best practices. Expect this to simplify. Skills will probably absorb slash commands and rules. Cross-tool standards like AGENTS.md will emerge.

But the fundamental insight won’t change: the quality of your context determines the quality of your results. The developers who master context engineering will build better software faster. Those who ignore it will keep wondering why their AI assistant “doesn’t quite get it.”

One critical warning from the Thoughtworks team deserves special emphasis: beware the illusion of control. People talk about context features with phrases like “ensure it does X” or “prevent hallucinations.” But as long as LLMs are involved, we’re managing probabilities, not guarantees. Choose the right level of human oversight for the job.

The prompt engineering era taught us to talk to machines. The context engineering era is teaching us something harder: how to create environments where machines can actually think.


Based on analysis of “Context Engineering for Coding Agents” by Martin Fowler / Thoughtworks
