The Pragmatic Guide to AI Agent Adoption: Lessons from Mitchell Hashimoto

Stop fighting the chatbot. Start engineering the harness.

If you’ve been circling AI coding tools with suspicion—or worse, trying them and bouncing off—you’re not alone. The hype machine says these tools will 10x your productivity. Reality? It’s complicated.

Mitchell Hashimoto (you know, the Vagrant/Terraform/Ghostty guy) recently shared how he actually learned to use AI for coding. What’s refreshing: no hype, no magic claims. Just a systematic approach to figuring out what works.

Drop the Chatbot

Here’s the first thing Hashimoto figured out: ChatGPT, Claude’s web UI, Gemini are useful for plenty of things, but mediocre for coding.

Not because the models are bad; the chat interface is simply the wrong tool. You copy code out, manually fix the AI’s mistakes, then paste the fixes back in. It’s a frustrating loop.

The real unlock is agents: LLMs that read your files, run code, and iterate in a loop. Claude Code, Amp, Cursor’s agent mode. That’s where things get interesting.

Do the Same Work Twice (Wait, What?)

This sounds insane, but: Hashimoto recommends reproducing your own work.

Finish a task manually. Then push an agent to produce the same result, without showing it your solution. Painful? God, yes. But the struggle forces you to learn from scratch what actually works:

  • Break sessions into clear tasks (don’t try to “draw the owl” all at once)
  • Separate planning from execution
  • Give agents ways to verify their own work
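
The third point, verification, is the easiest to make concrete. Here's a minimal sketch of a self-check tool an agent could run after making changes; the script name and the pytest invocation are illustrative assumptions, not from Hashimoto's actual setup:

```python
import subprocess
import sys


def run_check(cmd: list[str]) -> bool:
    """Run a verification command and emit a short PASS/FAIL line an agent can parse."""
    result = subprocess.run(cmd, capture_output=True, text=True)
    ok = result.returncode == 0
    # A clear one-line signal beats a wall of logs when an agent is the reader.
    print(f"{'PASS' if ok else 'FAIL'}: {' '.join(cmd)}")
    if not ok:
        # Show only the tail of stderr so the relevant error stays visible.
        print("\n".join(result.stderr.strip().splitlines()[-5:]))
    return ok


if __name__ == "__main__":
    # e.g. python check.py tests/test_parser.py  (paths here are hypothetical)
    sys.exit(0 if run_check([sys.executable, "-m", "pytest", "-q", *sys.argv[1:]]) else 1)
```

The design point: agents do better with a terse, unambiguous signal than with raw tool output, so the wrapper compresses the result into one line plus a short error tail.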

The flip side matters too: knowing when not to use an agent saves hours. You only learn this through trial and error.

Practical Tips That Actually Work

1. End-of-Day Agent Runs
Reserve the last 30 minutes of your day to kick off agents on research or exploration tasks. You're winding down anyway, and the next morning you wake up to a head start. Free productivity.

2. Let Agents Handle the Easy Stuff
Once you know which tasks agents nail consistently, let them run in the background while you do manual work. Pro tip: turn off notifications. Context switching kills you; check on agents during natural breaks instead.

3. Build the Harness
Hashimoto’s current obsession: when an agent screws up, engineer a fix so it never happens again.

Two approaches:

  • AGENTS.md files: prompts that correct bad behaviors. Ghostty's AGENTS.md is a good example; each line addresses a specific agent failure.
  • Custom tools: scripts for taking screenshots, running filtered tests, and verifying behavior.
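
To make the first approach concrete: an AGENTS.md is just a list of standing instructions the agent reads before working. The rules below are purely illustrative, not copied from Ghostty's actual file:

```markdown
# Standing instructions for coding agents.
# Each line exists because an agent once got this wrong.

- Run the test suite before declaring a task complete; never claim success from reading code alone.
- Do not edit generated files; change the generator and re-run it.
- Keep diffs minimal: don't reformat code you weren't asked to touch.
```

The pattern is reactive: you don't write these rules up front, you add one each time an agent fails, so the file grows into a record of corrected behaviors.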

4. Keep an Agent Running
Goal state: if no agent is running, ask yourself "should one be?" Combine this with slower, more capable models for complex changes: runs can take 30+ minutes but produce excellent results.

The Honest Take

What I like about Hashimoto’s approach: no bullshit productivity multiplier claims. He talks about “having success” and being “grounded in reality.” That’s refreshingly honest.

Worth noting: delegating tasks means you stop building skills in those areas. His take: you're trading away skill growth in the work you delegate while still developing skills elsewhere. Choose wisely.

Most honest line in the whole piece: “I’m a software craftsman that just wants to build stuff for the love of the game.”

AI agents aren’t magic wands. They’re tools that need investment, have clear limits, and reward systematic experimentation. The devs who’ll benefit most? The ones willing to do the boring work of figuring out what actually works.


Based on analysis of My AI Adoption Journey by Mitchell Hashimoto
