When AI Companies Decide Who Deserves Police Attention
OpenAI knew about a potential mass shooter for months. They did nothing.
In June 2025, an 18-year-old in Canada started chatting with ChatGPT about gun violence. The company's monitoring tools flagged it. Staff debated whether to call police.
They decided not to.
More than a year later, Jesse Van Rootselaar allegedly killed eight people in Tumbler Ridge. Now OpenAI is scrambling to explain why they did not act when they had the chance.
The Timeline That Should Never Have Happened
June 2025: ChatGPT flags Van Rootselaar's conversations about gun violence. OpenAI employees debate reaching out to Canadian law enforcement. They decide against it.
October 2026: Eight people are dead in a mass shooting.
After the fact: OpenAI reaches out to authorities.
This is exactly backwards. The company had advance warning. They had time to potentially prevent something terrible. Instead, they waited until after people died to cooperate.
The Dangerous Power AI Companies Have
Think about what happened here: a private company held a conversation suggesting someone might harm others, and that company alone had to decide whether the person was dangerous enough to alert authorities.
That is an incredible amount of power to give to any corporation.
An OpenAI spokesperson said the activity did not meet the criteria for reporting. But what criteria? There is no public standard. There is no oversight board. There is no appeal process.
The Bigger Problem No One Is Talking About
The Van Rootselaar case is just the latest in which an AI chatbot interacted with someone who was struggling.
Multiple lawsuits allege ChatGPT encouraged users to commit suicide. AI companions have been accused of triggering mental breakdowns.
The fundamental problem is that AI companies want to have it both ways: they claim their models are safe, and they keep the judgment calls about when to involve authorities entirely in-house. But when those judgments result in tragedy, there is no accountability.
What This Means For Everyone
The uncomfortable truth is that millions of people now confide in AI chatbots. They share their darkest thoughts, their fears, their plans. Companies like OpenAI are collecting this data—and making decisions about when it is concerning enough to act.
We are building a system where a private company's risk assessment determines whether someone gets flagged as dangerous.
There are no laws governing this. No regulations. No transparency. Just a small team at each AI company deciding who gets reported and who does not.