Suspect in Tumbler Ridge school shooting described violent scenarios to ChatGPT | The Verge
Summary
The article discusses the Tumbler Ridge school shooting suspect's alarming interactions with ChatGPT, which raised concerns at OpenAI but did not lead to law enforcement notification.
Why It Matters
This incident highlights critical issues surrounding AI safety, the responsibilities of AI companies in monitoring user interactions, and the potential consequences of inaction in the face of concerning behavior. It raises questions about the thresholds for reporting threats and the role of technology in preventing violence.
Key Takeaways
- The Tumbler Ridge shooting suspect had alarming interactions with ChatGPT.
- OpenAI employees raised concerns but the company chose not to alert authorities.
- In hindsight, the decision not to report is widely viewed as misguided.
- This case underscores the need for better AI monitoring and response protocols.
- It raises ethical questions about AI's role in public safety.
The posts raised alarms, but OpenAI declined to alert law enforcement.

by Terrence O'Brien | Feb 21, 2026, 3:22 PM UTC

Image: AFP via Getty Images

Terrence O'Brien is the Verge's weekend editor. He has over 18 years of experience, including 10 years as managing editor at Engadget.

The suspect in the mass shooting at Tumbler Ridge, British Columbia, Jesse Van Rootselaar, was raising alarms among employees at OpenAI months before the shooting took place. This past June, Van Rootselaar had conversations with ChatGPT involving descriptions of gun violence that triggered the chatbot's automated review system. Several employees raised concerns that her posts could be a precursor to real-world violence and encouraged company leaders to contact the authorities, but leadership ultimately declined.

According to the Wall Street Journal, leaders at the company decided that Van Rootselaar's posts did not constitute a "credible and imminent risk of serious physical harm to others." The company banned Van Rootselaar's account, but it does not appear to have taken any further action. We've reached out to OpenAI to ask who specifically made that decision and how, and will update if we hear back.

The decision not to alert law enforcement looks misguided in retrospect: on February 10th, nine people were killed and 27 injured, including...