From Skeptic to Practitioner: A Six-Step Framework for AI Agent Adoption

Ever spent more time fighting with an AI tool than actually getting work done? You’re not alone. Mitchell Hashimoto—creator of Vagrant, Terraform, and now Ghostty—recently shared his journey from AI skeptic to someone who can’t imagine working without agents. The key insight? Most people give up during the friction phase before discovering what actually works.
The Core Insight

Hashimoto’s framework reveals a counterintuitive truth: the chatbot is holding you back. While everyone’s first AI coding experience involves pasting code into ChatGPT, this approach fundamentally limits what you can accomplish. The breakthrough comes when you switch to agents—LLMs that can read files, execute programs, and make HTTP requests in a continuous loop.
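To make the distinction concrete, here is a minimal sketch of that agent loop, with the model call stubbed out. All names (`call_model`, `run_tool`, `agent`) are illustrative, not any particular product’s API; a real agent would call an LLM provider where the stub sits.

```python
# Minimal agent-loop sketch: the model proposes tool calls (read a
# file, run a program) and sees their results, looping until it
# declares the task done. `call_model` is a stand-in for an LLM API.
import subprocess

def call_model(history):
    # Stub: a real implementation calls an LLM here. It returns either
    # ("tool", name, arg) to request a tool run, or ("done", answer).
    if not any(step[0] == "tool" for step in history):
        return ("tool", "run", "echo hello")
    return ("done", "finished")

def run_tool(name, arg):
    # The tool surface that separates an agent from a chatbot.
    if name == "read":
        with open(arg) as f:
            return f.read()
    if name == "run":
        return subprocess.run(arg, shell=True,
                              capture_output=True, text=True).stdout
    raise ValueError(f"unknown tool: {name}")

def agent(task, max_steps=10):
    history = [("task", task, None)]
    for _ in range(max_steps):
        action = call_model(history)
        if action[0] == "done":
            return action[1]
        _, name, arg = action
        # Feed the tool result back so the model sees what happened.
        history.append(("tool", name, run_tool(name, arg)))
    return None  # step budget exhausted
```

The loop is the whole trick: because tool results flow back into the model’s context, the agent can observe the effects of its own actions instead of guessing from pasted snippets.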
But here’s the real genius of his approach: instead of expecting immediate productivity gains, he forced himself to reproduce his own manual work with agents. Same task, done twice. Excruciating? Yes. But this deliberate friction built the muscle memory for knowing exactly when agents excel and when they fail.
The key principles that emerged:
- Break sessions into clear, actionable tasks—no “drawing the owl” in one mega session
- Split vague requests into planning vs. execution phases
- Give agents verification tools so they can catch their own mistakes
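The third principle is easy to operationalize. A verification tool can be as simple as a wrapper that runs the project’s test command and reports pass/fail back to the agent; this is a hypothetical sketch, not Hashimoto’s actual tooling, and the default command is just a placeholder.

```python
# Hypothetical verification tool exposed to an agent: run the project's
# test command and return structured pass/fail output, so the agent can
# catch its own mistakes before declaring a task done.
import subprocess

def verify(test_cmd="make test"):  # placeholder default command
    proc = subprocess.run(test_cmd, shell=True,
                          capture_output=True, text=True)
    return {
        "passed": proc.returncode == 0,
        "output": proc.stdout + proc.stderr,
    }
```

Giving the agent this kind of closed feedback loop is what lets it self-correct instead of confidently handing back broken code.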
Why This Matters

We’re at an inflection point where AI tools are sophisticated enough to be genuinely useful, but the adoption curve still weeds out most practitioners. Hashimoto’s framework addresses the two failure modes that kill most AI workflows:
The Babysitter Trap: Many developers hover over their agents, constantly context-switching to check progress. His solution? Turn off desktop notifications entirely. The human should control interruption timing, not the agent.
The Skill Atrophy Problem: Anthropic’s research suggests heavy AI use may inhibit skill formation. Hashimoto’s counter: while delegating “slam dunk” tasks to agents, he works manually on challenging problems. You’re not abandoning skill development—you’re choosing which skills to develop.
The “harness engineering” concept deserves special attention. Every time an agent makes a mistake, you invest in preventing that mistake forever. This could mean updating AGENTS.md files or building actual programmatic tools. It’s a compound interest play—each fix makes all future agent sessions more efficient.
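As a concrete (invented) illustration of what accumulates in such a file, an AGENTS.md entry born from a past agent mistake might look like:

```markdown
# AGENTS.md (example fragment)

## Build & verify
- Run `make test` and confirm it passes before reporting a task complete.

## Known pitfalls
- Files under `gen/` are generated; edit the templates, never the output.
```

Each rule here encodes one mistake an agent made once, so no future session has to rediscover it.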
Key Takeaways

- Drop the chatbot first: Agents with file/program/HTTP access outperform chat interfaces for real development work
- Reproduce before producing: Do work twice to build intuition for agent capabilities
- End-of-day agents: Use the last 30 minutes to kick off research, triage, or exploratory tasks that give you a warm start the next morning
- Outsource the slam dunks: Run agents on high-confidence tasks while you do deep work elsewhere
- Engineer the harness: Every agent mistake is an opportunity to prevent future mistakes
- Always have an agent running: Even at 10-20% utilization, background agents compound productivity
Looking Ahead

The most striking aspect of Hashimoto’s framework is its humility. He explicitly notes he’s not running agents just for the sake of running them—each task must provide genuine value. And he acknowledges the rapid pace of model improvement means his priors need constant revision.
For those building AI-assisted workflows, the meta-lesson is clear: sustainable AI adoption isn’t about finding the perfect tool or prompt. It’s about systematically building expertise through deliberate practice, even when that practice feels inefficient. The developers who’ll thrive aren’t those who adopt fastest—they’re those who adopt most thoughtfully.
As Hashimoto puts it: “I’m a software craftsman that just wants to build stuff for the love of the game.” AI doesn’t change that. It just changes how we play.
Based on analysis of “My AI Adoption Journey” by Mitchell Hashimoto