Context Engineering: What Developers Actually Need to Know
“Everything is context.” That one phrase nails what’s happening in AI-assisted development right now. The models are smart enough. The real bottleneck? Getting them to understand what you actually want.
This is context engineering. And yeah, it’s different from prompt engineering.
What’s Going On
ThoughtWorks’ Bharani Subramaniam puts it well: context engineering is “curating what the model sees so that you get a better result.” Think of it like setting up a workspace for a new team member—except that team member processes everything you show them.
Claude Code gives us a good window into how this works. The config options have multiplied fast:
Instructions vs. Guidance: Instructions tell agents what to do (“Write an E2E test this way”). Guidance sets conventions (“Tests should be independent of each other”). The distinction matters for organization and reuse.
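The split can live right in your rules file. Here’s a hypothetical CLAUDE.md fragment—the section headings and specific rules are illustrative, not a prescribed format:

```markdown
## Instructions (what to do)
- Write E2E tests with Playwright, one spec file per user flow.
- Run the linter before committing.

## Guidance (conventions to follow)
- Tests should be independent of each other.
- Prefer descriptive test names over explanatory comments.
```

Keeping the two apart makes reuse easier: guidance tends to transfer across projects, while instructions are usually project-specific.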
Context Interfaces: Beyond simple prompts, agents now have tools (bash commands, file search), MCP servers (custom programs exposing data and actions), and skills (lazy-loaded resources the LLM accesses on demand).
Loading Decisions: Who decides when context loads? The LLM (for autonomous operation), the human (for control), or the agent software itself (for deterministic behavior at lifecycle events).
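The deterministic case is worth seeing concretely. Claude Code hooks fire at fixed lifecycle events regardless of what the model decides—here’s a sketch of a `.claude/settings.json` fragment (the event name and matcher follow the documented hooks format, but treat the specific command as illustrative):

```json
{
  "hooks": {
    "PostToolUse": [
      {
        "matcher": "Edit|Write",
        "hooks": [
          { "type": "command", "command": "npm run lint --silent" }
        ]
      }
    ]
  }
}
```

The LLM never chooses whether this runs. Every file edit triggers the lint check—that’s the point.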
The Uncomfortable Truth
Bigger isn’t better. Those 200k-token context windows? Don’t dump your entire codebase in there. Agent performance tanks with context overload. Costs balloon. Start small, add incrementally.
Words aren’t guarantees. Here’s what nobody tells you: “ensure” and “prevent” in your configs are… suggestions. We’re still working in probabilities. The LLM will interpret things its own way sometimes. Deal with it.
This mess is temporary. We’re in the “throw everything at the wall” phase. Skills will probably eat slash commands eventually. The tooling will consolidate. But if you understand what exists now, you can make smarter choices.
The Current Toolkit
Here’s what you’re working with:
- CLAUDE.md / rules files: Always-loaded guidance for project conventions
- Path-scoped rules: Context that loads only when relevant files are touched
- Skills: Lazy-loaded resources the LLM accesses based on task relevance
- Subagents: Isolated contexts with their own models and tool access
- MCP servers: Custom programs that expose APIs to agents
- Hooks: Scripts that fire at specific moments (not AI-controlled)
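To make the lazy-loading idea behind skills concrete: a skill typically lives in its own directory with a `SKILL.md` whose frontmatter description is all the model sees up front—the body loads only when the task matches. A hypothetical example (the skill name and steps are invented):

```markdown
---
name: release-notes
description: Drafts release notes from merged PRs. Use when the user asks to write or update release notes.
---

# Release Notes

1. Collect merged PRs since the last tag.
2. Group changes by type (feature, fix, chore).
3. Write a summary in the project's changelog style.
```

The description line is doing the heavy lifting: it’s the only part in context until the LLM decides the skill is relevant.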
What works:
- Build context piece by piece, not all at once.
- Share configs within your team; internet-wide sharing gets messy fast.
- Actually know what’s in your context. Copied configs can fight each other.
- Watch your context size. Tools that show what’s eating space? Gold.
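You don’t need fancy tooling to start watching context size. Here’s a minimal Python sketch using the rough rule of thumb of ~4 characters per token—the heuristic is an approximation, not an official tokenizer, and the file paths are whatever your project actually loads:

```python
from pathlib import Path

# Rough heuristic: ~4 characters per token for English text.
CHARS_PER_TOKEN = 4


def estimate_tokens(text: str) -> int:
    """Estimate the token count of a piece of context."""
    return len(text) // CHARS_PER_TOKEN


def context_report(paths: list[str]) -> dict[str, int]:
    """Map each context file to its estimated token cost, largest first."""
    sizes = {p: estimate_tokens(Path(p).read_text()) for p in paths}
    return dict(sorted(sizes.items(), key=lambda kv: -kv[1]))
```

Run it over your rules files and skill descriptions before a session; the biggest offender is usually a surprise.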
Bottom Line
Context engineering is becoming table stakes, like knowing git or writing tests. The devs who’ll actually be productive with AI aren’t just good at prompts—they’re good at setting up the information environment.
The models are already powerful enough. Your job is showing them the right stuff.
Based on “Context Engineering for Coding Agents” by Martin Fowler