OpenAI’s Facebook Moment: Why a Researcher Quit Over ChatGPT Ads

“I once believed I could help the people building AI get ahead of the problems it would create. This week confirmed my slow realization that OpenAI seems to have stopped asking the questions I’d joined to help answer.”

When Zoë Hitzig, a Harvard economist and former OpenAI researcher, published these words in the New York Times on Wednesday, she didn’t just quit her job. She sounded an alarm that everyone building or using AI should hear.

The Core Insight

Hitzig’s resignation wasn’t about advertising being inherently immoral. Her concern cuts deeper: it’s about what ChatGPT knows that Facebook never did.

When Facebook rolled out targeted ads, it learned your demographics, your friend network, your browsing habits. Invasive? Yes. But ChatGPT users have shared something far more intimate: their fears, their relationship problems, their religious doubts, their medical anxieties. They’ve done this, Hitzig argues, “because people believed they were talking to something that had no ulterior agenda.”

She calls this accumulated record “an archive of human candor that has no precedent.”

Now that archive meets advertising — a business model whose entire purpose is influencing behavior for paying clients.

Why This Matters

Hitzig draws a direct parallel to Facebook’s early promises. Remember when Facebook said users would have control over their data? When they could vote on policy changes? Those pledges eroded systematically until the FTC found that privacy changes marketed as giving users “more control” actually did the opposite.

Her prediction is stark: “I believe the first iteration of ads will probably follow those principles. But I’m worried subsequent iterations won’t, because the company is building an economic engine that creates strong incentives to override its own rules.”

This is the ad-supported AI trap: once you optimize for advertiser satisfaction, user trust becomes a resource to extract rather than a principle to protect. Every AI company faces this gravitational pull. OpenAI is now directly in its orbit.

The ads will appear at the bottom of ChatGPT responses for free and low-cost subscribers. They’ll be “clearly labeled” and supposedly won’t influence the chatbot’s answers. But Hitzig’s concern isn’t about today’s implementation — it’s about what happens when quarterly revenue targets meet the world’s most detailed personal dataset.

Key Takeaways

  • Conversational intimacy is different: Search history shows what you wanted to find. Chat history shows what you were afraid to ask anyone else.

  • Good intentions don’t survive business models: OpenAI’s current ad policies may be reasonable. The economic incentives to erode them are enormous.

  • The Facebook trajectory is a warning, not an accident: Surveillance capitalism follows predictable patterns. Recognizing them early is the only defense.

  • Internal dissent is a canary in the coal mine: When researchers who joined to “get ahead of the problems” start leaving, it signals something has shifted.

Looking Ahead

OpenAI isn’t the first company to promise that ads won’t corrupt its product. History suggests those promises have a half-life measured in quarters, not decades.

The real question for users: is the free tier worth the trade-off? For power users: does a $20/month Plus subscription actually buy protection, or just delayed exposure?

Hitzig’s resignation is a reminder that AI safety isn’t just about existential risk or alignment problems. It’s also about the mundane corruption that happens when surveillance capitalism meets unprecedented personal data.

OpenAI made a choice this week. Now users get to make theirs.


Based on analysis of “OpenAI researcher quits over ChatGPT ads, warns of ‘Facebook’ path” – Ars Technica
