The Cursor Paradox: Why AI-Assisted Developers Take 19% Longer (And Think They’re Faster)
Here’s a statistic that should stop every AI-hyped developer in their tracks: when experienced developers use AI coding tools, they take 19% longer to complete tasks, all while believing they’re working about 20% faster.
This might be the most important AI development study you’ll read this year.
The Core Insight
The nonprofit research organization METR (Model Evaluation and Threat Research) just published findings that challenge the entire narrative around AI coding assistants. They recruited 16 experienced developers who work on large open source repositories, paid them $150/hour to fix 136 real issues, recorded 146 hours of screen footage, and discovered something remarkable:
AI made these seasoned developers slower, not faster. And the gap between perception and reality is wild: developers expected AI to speed them up by 24%, and even after experiencing the actual slowdown, they still believed it had accelerated their work by 20%.
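To make that gap concrete, here’s the arithmetic on a hypothetical 60-minute task, reading “X% faster” as X% less wall-clock time (the task length is illustrative, not a number from the study):

```python
baseline = 60.0                    # minutes for a hypothetical task without AI
actual = baseline * 1.19           # 19% slower with AI -> ~71 minutes
perceived = baseline * (1 - 0.20)  # felt ~20% faster  -> ~48 minutes

print(f"actual: {actual:.0f} min, perceived: {perceived:.0f} min")
# actual: 71 min, perceived: 48 min -- a ~23-minute gap between feeling and reality
```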
This isn’t a study about Cursor specifically, though that’s what most participants used (with Claude 3.5/3.7 Sonnet). It’s a study about the fundamental nature of AI-assisted development.
Why This Matters
The breakdown of how developers spent their time tells the story:
- With AI: less time coding, researching, and testing; more time prompting, waiting on the AI, reviewing its output, and dealing with IDE overhead.
- Without AI: more time coding, researching, and testing, with none of the AI-related overhead.
The time saved on coding, research, and testing was completely wiped out by the additional time spent prompting, waiting, and reviewing. The net effect? Slower completion.
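A stylized time budget shows how that netting can play out. Every number below is hypothetical, chosen only to land near the study’s 19% figure, not taken from METR’s data:

```python
# Hypothetical per-task minutes -- illustrative only, not METR's measured data
without_ai = {"coding": 40, "research": 15, "testing": 15}
with_ai = {"coding": 25, "research": 10, "testing": 12,     # genuine savings...
           "prompting": 12, "waiting": 9, "reviewing": 15}  # ...eaten by overhead

t0, t1 = sum(without_ai.values()), sum(with_ai.values())
print(f"{t0} min -> {t1} min ({t1 / t0 - 1:+.0%})")  # 70 min -> 83 min (+19%)
```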
But here’s where it gets interesting. The one developer with 50+ hours of Cursor experience saw a 38% speedup. The learning curve isn’t just steep; for those who push through it, it can be transformative.
The Zone Problem
Gergely Orosz raises a provocative hypothesis about what’s really happening:
“As a dev, the most productive work I do is when I’m in ‘the zone,’ just locked into a problem with no distractions. But I cannot stay in the zone when using a time-saving AI coding tool; I need to do something else while code is being generated, so context switches are forced, and each one slows me down.”
Could it be that AI tools inherently fragment the deep focus state that experienced developers rely on? The forced context switches might be the hidden tax that cancels out the productivity gains.
The Expert’s Framework
PhD student Quentin Anthony, the lone developer who achieved the 38% speedup, offers a masterclass in effective AI usage:
On capability spikes: “LLMs today have super spiky capability distributions. I only use LLMs when I know they can reliably handle the task.” He specifically notes that LLMs are terrible at low-level systems code, GPU kernels, and parallelism—but excellent at writing tests and understanding unfamiliar code.
On time-boxing: “When determining whether some new task is amenable to an LLM, I try to aggressively time-box my time working with the LLM so that I don’t go down a rabbit hole.”
On the dopamine trap: “We like to say that LLMs are tools, but treat them more like a magic button. Do you keep pressing the button that has a 1% chance of fixing everything? It’s a lot more enjoyable than the grueling alternative.”
On generation downtime: “It’s super easy to get distracted in the downtime while LLMs are generating. The social media attention economy is brutal.” His solution: fill generation time productively with subtasks or thinking about follow-up questions.
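The time-boxing advice above is easy to state and hard to honor, so it may help to enforce it mechanically. Here’s a minimal sketch of that idea; ask_llm and do_it_by_hand are hypothetical stand-ins for whatever your real workflow looks like:

```python
import time

TIMEBOX_SECONDS = 15 * 60  # hard cap on LLM attempts before falling back

def timeboxed(task, ask_llm, do_it_by_hand):
    """Try the LLM repeatedly, but refuse to chase it past the deadline."""
    deadline = time.monotonic() + TIMEBOX_SECONDS
    while time.monotonic() < deadline:
        result = ask_llm(task)
        if result is not None:   # i.e., the output passed your review and tests
            return result
    # Deadline hit: stop pressing the magic button and do the grueling thing
    return do_it_by_hand(task)
```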
Key Takeaways
- The learning curve is real and steep. Most developers haven’t crossed it yet—44% of study participants had never used Cursor before
- AI speedup has little correlation with developer skill. All devs in the study were highly capable; the difference was in workflow adaptation
- Self-awareness is the meta-skill. Understanding both AI limitations and your own failure modes is essential
- Time-boxing prevents rabbit holes. Tearing yourself away when the AI is “just so close” requires discipline
- Context switching may be the hidden cost. The “zone” state might be worth more than the time AI saves
Looking Ahead
Simon Willison interprets this study as evidence that “the learning curve of AI-assisted development is high enough that asking developers to bake it into their existing workflows reduces their performance while they climb that learning curve.”
This matches The Pragmatic Engineer’s own research: developers who have used AI tools for less than six months are significantly more likely to hold negative perceptions of them. Many try AI tools, find they don’t meet expectations, and abandon them, potentially right before the breakthrough.
The path forward isn’t to reject AI coding tools. It’s to approach them with realistic expectations about the investment required, to develop the self-awareness to know when AI helps versus hurts, and to resist the allure of the magic button while building the skills to use these tools effectively.
The 38% speedup is real. But so is the 19% slowdown. Which side you land on depends largely on whether you’re willing to put in the deliberate practice to climb the learning curve.
Based on: “Cursor makes developers less effective?” by Gergely Orosz / The Pragmatic Engineer