Context Engineering: The Art of Feeding Your AI Agent

Prompt engineering was 2023. Context engineering is 2026. And if you’re not thinking carefully about what your coding agent sees, you’re leaving performance on the table.
The Core Insight

Martin Fowler’s team has been dissecting how coding agents actually work, and their latest piece on context engineering reveals a crucial truth: configuration has become a first-class engineering discipline.
The definition is disarmingly simple: “Context engineering is curating what the model sees so that you get a better result.” But the implementation is complex. Claude Code alone now offers CLAUDE.md, Rules, Skills, Subagents, MCP Servers, Hooks, and Plugins—each with different load triggers, scope rules, and use cases.
This isn’t feature bloat. It’s an emerging grammar for human-AI collaboration.
Why This Matters

The Two Types of Reusable Prompts:
– Instructions: “Write an E2E test in the following way: …” — Active commands for specific tasks.
– Guidance: “Always write tests that are independent of each other.” — Passive conventions for ongoing behavior.
The distinction matters because different features handle them differently. CLAUDE.md always loads; Skills load on demand, when the LLM judges them relevant.
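To make the split concrete: a guidance line is just a bullet in CLAUDE.md (“Always write tests that are independent of each other.”), while an instruction packages naturally as a Skill. Below is a minimal sketch of a hypothetical `.claude/skills/e2e-test/SKILL.md`; the `name` and `description` frontmatter are what Claude Code reads to decide whether to load the body:

```markdown
---
name: e2e-test
description: Recipe for writing end-to-end tests. Load when the task
  involves writing or fixing E2E tests.
---

Write the E2E test in the following way:
1. Set up fixtures in a fresh, isolated environment.
2. Drive the UI through page objects, not raw selectors.
3. Assert on user-visible state, not implementation details.
```

Only the frontmatter sits in context by default; the step-by-step body costs tokens only when the skill actually fires.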
Context Interfaces represent a fundamental shift: instead of pushing content into the prompt, you give the model descriptions that tell it how to fetch more context on its own. Tools are built in (bash, file search). MCP Servers expose custom APIs through the Model Context Protocol. Skills are the newest pattern: lazy-loaded instructions and resources the agent can pull in when needed.
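A sketch of what wiring in a custom interface looks like: Claude Code reads project-level MCP configuration from a `.mcp.json` file. The server below is entirely hypothetical (name, package, and URL are placeholders); the point is that only the server’s tool descriptions enter context, and the agent calls out for the actual data when it needs it:

```json
{
  "mcpServers": {
    "ticket-tracker": {
      "command": "npx",
      "args": ["-y", "@acme/ticket-mcp-server"],
      "env": { "TRACKER_URL": "https://tickets.example.com" }
    }
  }
}
```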
Who Decides What Loads?
– LLM: Maximum automation, but unpredictable. The agent might not load relevant context when you expect it to.
– Human: Maximum control, but you’re back to doing the work yourself.
– Agent Software: Deterministic triggers (like Claude Code hooks) that fire predictably on lifecycle events.
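Hooks are the clearest example of the third option. Here is a sketch of a `.claude/settings.json` hook that runs Prettier after every file edit, assuming the documented PostToolUse event and its JSON-on-stdin payload (the `jq` pipeline extracts the edited file’s path):

```json
{
  "hooks": {
    "PostToolUse": [
      {
        "matcher": "Edit|Write",
        "hooks": [
          {
            "type": "command",
            "command": "jq -r '.tool_input.file_path' | xargs npx prettier --write"
          }
        ]
      }
    ]
  }
}
```

No LLM judgment is involved: the agent software fires this on every matching tool call, which is exactly the determinism the other two options lack.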
Size Management Is Critical: Even with 200K+ token windows, dumping everything into context degrades performance and costs money. The tools that compact conversation history and optimize tool representation (like Claude Code’s “Tool Search Tool”) consistently outperform those that don’t.
Key Takeaways
- Build context gradually. Don’t front-load your CLAUDE.md with everything you might need. Models have gotten smarter; less instruction often works better.
- Skills are absorbing everything. They’re replacing slash commands and will likely absorb Rules next. Bet on Skills as the primary context mechanism.
- Subagents unlock parallelism. Run code review in a separate context with a different model for a genuine “second opinion” without context pollution (see the sketch after this list).
- Hooks enable determinism. When you need something to happen every time (auto-formatting, logging, notifications), hooks are the answer.
- Sharing context configs is hard. What works for one team fails for another. Build iteratively; don’t copy-paste from strangers on the internet.
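For the subagent takeaway, here is a minimal sketch of a reviewer defined in `.claude/agents/code-reviewer.md`. The frontmatter fields follow Claude Code’s subagent format; the restricted tool list and separate model are the point, since the reviewer gets a clean context rather than the main session’s accumulated history (the persona text is illustrative):

```markdown
---
name: code-reviewer
description: Reviews completed changes for bugs, missing tests, and
  unclear naming. Use after a change is finished.
tools: Read, Grep, Glob
model: opus
---

You are a skeptical senior reviewer. Read the changed files and report
concrete problems. Do not edit code; only report findings.
```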
Looking Ahead
Fowler’s piece ends with a crucial warning: this is not actually engineering. You can craft perfect context configurations and the LLM might still misinterpret them. Phrases like “ensure it does X” or “prevent hallucinations” are aspirational, not guarantees.
Context engineering improves probabilities, not certainties. The tools will get better, the patterns will stabilize, and we’ll probably converge on fewer features than today’s explosion. But human oversight remains essential.
We’re in the “storming” phase of a new discipline. The developers who understand both the mechanics (what loads when) and the limitations (LLMs are still probabilistic) will build the most effective workflows. Everyone else will wonder why their carefully configured agents keep surprising them.
Based on analysis of “Context Engineering for Coding Agents” by Martin Fowler’s team