Context Engineering: The Hidden Art Behind Effective AI Coding Agents

The era of “just give it a prompt” is ending. As AI coding agents evolve from novelty to necessity, a new discipline is emerging that separates productive developers from frustrated ones: context engineering.

The Core Insight

Martin Fowler’s team at ThoughtWorks has published a definitive breakdown of what context engineering actually means for coding agents. The definition is deceptively simple: “Context engineering is curating what the model sees so that you get a better result.”

But simplicity hides complexity. Today’s coding agents like Claude Code offer an explosion of configuration options—rules, skills, subagents, MCP servers, hooks, and plugins. Understanding when and how to use each is becoming a critical developer skill.

The key insight is distinguishing between instructions (prompts that tell an agent to do something specific) and guidance (general conventions the agent should follow). These categories blend together, but keeping them mentally separated helps you build more effective agent configurations.

Why This Matters

Context windows have grown enormous, but that doesn't mean you should dump everything into them. Agent effectiveness degrades with excess context, and tokens cost money. The art is in the balance.

Three critical decisions drive context engineering:

  1. Who decides to load context? The LLM can decide autonomously (enables unsupervised operation but adds uncertainty), humans can trigger it explicitly (maintains control but reduces automation), or the agent software can load it deterministically at specific lifecycle points.

  2. How much context? Build up gradually. Models have gotten powerful enough that half of what you stuffed into context six months ago probably isn’t needed anymore.

  3. What format? Your codebase itself is context. AI-friendly code design matters more than ever.
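The deterministic option in the first decision is what hooks implement in tools like Claude Code: the agent software itself runs an action at a fixed lifecycle point, with no LLM judgment involved. A minimal sketch of a `.claude/settings.json` hooks entry, where the formatter command is an illustrative assumption rather than anything from the source:

```json
{
  "hooks": {
    "PostToolUse": [
      {
        "matcher": "Edit|Write",
        "hooks": [
          {
            "type": "command",
            "command": "npx prettier --write ."
          }
        ]
      }
    ]
  }
}
```

Because this runs on every matching file edit, the behavior is certain in a way that prompted instructions never are, which is exactly the trade-off the three decisions describe.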

Key Takeaways

  • CLAUDE.md (or AGENTS.md) files are your foundation—always-loaded guidance for general conventions
  • Rules allow path-scoped guidance that only loads when relevant files are touched
  • Skills are the newest evolution—lazy-loaded resources the LLM can pull when it decides they’re needed
  • Subagents let you run parallel tasks in isolated context windows, even with different models
  • MCP Servers give agents structured access to external APIs and tools
  • Hooks provide deterministic actions on lifecycle events (file edits, command execution)
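To make the first of these concrete: an always-loaded CLAUDE.md is typically just a short markdown file of general conventions. The contents below are an illustrative sketch, not an excerpt from the source:

```markdown
# Project conventions

- Use TypeScript strict mode; never add `any` without a comment explaining why.
- Run `npm test` before declaring a task complete.
- Keep commits small and focused, with conventional commit messages.
```

Keeping this file short matters: everything in it is loaded on every interaction, so it should carry only the guidance that genuinely applies everywhere, leaving path-scoped details to rules and on-demand material to skills.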

The trend is clear: Skills are absorbing slash commands and may absorb rules next, consolidating the landscape.

Looking Ahead

The author ends with a crucial warning: despite the name, this isn’t really “engineering.” We’re still working with probabilities, not certainties.

“As long as LLMs are involved, we can never be certain of anything… Sometimes people talk about these features with phrases like ‘ensure it does X,’ or ‘prevent hallucinations.’ But we still need to think in probabilities and choose the right level of human oversight for the job.”

This is context engineering’s uncomfortable truth: it dramatically improves results, but can never guarantee them. The developers who internalize this—building robust configurations while maintaining appropriate skepticism—will be the ones who actually ship with AI assistance rather than fighting against it.


Based on analysis of “Context Engineering for Coding Agents” by Martin Fowler

