Suspect in Tumbler Ridge school shooting described violent scenarios to ChatGPT | The Verge

The Verge · AI · 3 min read

Summary

The article discusses the Tumbler Ridge school shooting suspect's alarming interactions with ChatGPT, which raised concerns at OpenAI but did not lead to law enforcement notification.

Why It Matters

This incident highlights critical issues surrounding AI safety, the responsibilities of AI companies in monitoring user interactions, and the potential consequences of inaction in the face of concerning behavior. It raises questions about the thresholds for reporting threats and the role of technology in preventing violence.

Key Takeaways

  • The Tumbler Ridge shooting suspect had alarming interactions with ChatGPT.
  • OpenAI employees raised concerns but the company chose not to alert authorities.
  • In retrospect, the decision not to report appears misguided, given the shooting that followed.
  • This case underscores the need for better AI monitoring and response protocols.
  • It raises ethical questions about AI's role in public safety.

The posts raised alarms, but OpenAI declined to alert law enforcement.

by Terrence O'Brien | Feb 21, 2026, 3:22 PM UTC

Image: AFP via Getty Images

The suspect in the mass shooting at Tumbler Ridge, British Columbia, Jesse Van Rootselaar, was raising alarms among employees at OpenAI months before the shooting took place. This past June, Van Rootselaar had conversations with ChatGPT involving descriptions of gun violence that triggered the chatbot's automated review system. Several employees raised concerns that her posts could be a precursor to real-world violence and encouraged company leaders to contact the authorities, but they ultimately declined.

According to the Wall Street Journal, leaders at the company decided that Van Rootselaar's posts did not constitute a "credible and imminent risk of serious physical harm to others." The company banned Van Rootselaar's account, but it does not appear to have taken any further action. We've reached out to OpenAI to ask who specifically made that decision and how it was made, and will update if we hear back.

The decision not to alert law enforcement looks misguided in retrospect: on February 10th, nine people were killed and 27 injured, including...
