The AI Coding Paradox: Why Cursor Makes Developers 19% Slower (And What This Means for You)


Picture this: you’ve just installed Cursor, the hottest AI coding assistant everyone’s raving about. You’re ready to become a 10x developer overnight. But what if the very tool promising to boost your productivity is actually making you slower?

The Core Insight

A groundbreaking study by METR (Model Evaluation and Threat Research) has just dropped a bombshell on the AI development community. They recruited 16 experienced developers working on large open-source repositories, had them fix 136 real issues at $150/hour, and recorded 146 hours of footage. The results?

Developers using AI tools took 19% longer than those without.

But here’s the kicker: even after experiencing this slowdown, developers still believed AI had sped them up by 20%. The gap between perception and reality is striking—and frankly, a little unsettling.
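
To make that gap concrete, here's a back-of-the-envelope sketch. The one-hour baseline is illustrative, not a number from the study, and "sped up by 20%" is read here as "20% less time":

```python
# Back-of-the-envelope: perception vs. reality for a hypothetical 1-hour task.
baseline_minutes = 60                         # illustrative baseline, not from the study
actual_with_ai = baseline_minutes * 1.19      # measured: tasks took 19% longer
perceived_with_ai = baseline_minutes * 0.80   # believed: "20% faster" as 20% less time

print(f"actual:    {actual_with_ai:.1f} min")                       # 71.4 min
print(f"perceived: {perceived_with_ai:.1f} min")                    # 48.0 min
print(f"gap:       {actual_with_ai - perceived_with_ai:.1f} min")   # 23.4 min
```

On a one-hour task, that's a perception gap of over twenty minutes per task.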

Why This Matters

The study reveals a fascinating breakdown of where time actually goes:

  • Less time coding — Yes, AI does write code faster
  • Less time researching and testing — AI handles some of this too
  • But more time prompting, waiting on AI, reviewing its output, and dealing with “IDE overhead”

The extra time spent wrangling AI completely wiped out the time it saved on actual coding. This isn’t about AI being bad—it’s about the hidden costs we’re not accounting for.

The Learning Curve Tax

Simon Willison, a respected voice in the AI dev tools space, offers a crucial insight: “This study mainly demonstrated that the learning curve of AI-assisted development is high enough that asking developers to bake it into their existing workflows reduces their performance while they climb that learning curve.”

There’s supporting data here. The one developer in the study who had used Cursor for over 50 hours saw a 38% speed increase. That’s a massive delta from the 19% slowdown everyone else experienced.

The Zone Problem

Here’s something that’s been nagging at me since reading this study: what if context switching is AI coding’s Achilles’ heel?

As developers, our most productive work happens when we’re “in the zone”—locked into a problem with zero distractions. But AI tools force us out of that zone constantly. You prompt, wait, review, correct, prompt again. Each cycle is a context switch. Each context switch has a cost.

The developers without AI tools might have simply stayed in the zone longer, working at a higher performance level than their AI-assisted peers who were constantly interrupted by their helpful robot sidekick.

Key Takeaways

  • AI proficiency is not correlated with general dev ability — All developers in this study were experienced, yet AI speedup varied wildly
  • LLMs have spiky capability distributions — They’re excellent at tasks with abundant clean training data, terrible at low-level systems code, GPU kernels, and parallelism
  • The “one more prompt” trap is real — The temptation to keep asking the AI when “it’s just so close!” burns massive amounts of time
  • Downtime during generation is dangerous — It’s easy to lose 30 minutes scrolling while “waiting” for a 30-second generation

Quentin Anthony’s Practical Advice

Quentin Anthony, the lone 50+ hour Cursor user who achieved that 38% speedup, shared some hard-won wisdom:

  1. Know which tasks are LLM-friendly — Writing tests, understanding unfamiliar code? Yes. Writing kernels, communication semantics? No.
  2. Aggressively time-box AI interactions — Don’t go down rabbit holes (see the sketch after this list)
  3. Fill generation time productively — Work on subtasks, think about follow-up questions, handle email
  4. Practice digital hygiene — Website blockers, phone on DND. Old advice, but it works.
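
To make point 2 concrete, here's a minimal sketch of one way to enforce a time-box. The `time_box` helper and the ten-minute budget are my illustration, not something from the newsletter:

```python
import time
from contextlib import contextmanager

@contextmanager
def time_box(label: str, budget_seconds: float):
    """Nudge yourself when an AI interaction blows past its time budget."""
    start = time.monotonic()
    try:
        yield
    finally:
        elapsed = time.monotonic() - start
        if elapsed > budget_seconds:
            print(f"[time-box] '{label}' ran {elapsed:.0f}s "
                  f"(budget {budget_seconds:.0f}s). Consider doing it by hand.")

# Usage: wrap one prompt/review cycle; if it overruns, stop prompting.
with time_box("get Cursor to refactor the auth module", budget_seconds=600):
    pass  # prompt, wait, review, correct
```

A kitchen timer works just as well; the point is that the cutoff is decided before the rabbit hole, not during it.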

Looking Ahead

This study doesn’t mean you should abandon AI coding tools. It means we need to approach them with the same rigor we apply to any engineering decision.

The 50+ hour threshold is telling. There’s likely a significant investment required before AI tools pay dividends. Organizations pushing developers to adopt AI tools should factor in this learning curve—and perhaps provide structured training rather than expecting immediate productivity gains.

More importantly, we need better metrics. Asking developers how productive they feel is clearly unreliable. We need objective measurements of actual output, accounting for code quality, bug rates, and long-term maintainability—not just task completion time.
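
As a toy illustration of what "objective" could look like, here's a sketch that computes speedup from logged task times instead of self-reports. The numbers are invented, and a real harness would also track defect rates and review churn:

```python
# Invented numbers: minutes to complete matched tasks, logged rather than self-reported.
with_ai = [74, 81, 66, 93]
without_ai = [60, 70, 58, 77]

speedup = sum(without_ai) / sum(with_ai) - 1
print(f"measured speedup: {speedup:+.1%}")  # negative means AI made things slower
```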

The AI coding revolution isn’t cancelled. But it might be more of a marathon than a sprint. The question isn’t whether to use these tools—it’s whether you’re willing to invest the time to use them well.


Based on analysis of “Cursor makes developers less effective?” from The Pragmatic Engineer Newsletter
