ChatGPT’s ‘Trusted Contact’ will alert loved ones of safety concerns | The Verge
The feature expands existing teenage safety options to anyone over 18.

by Jess Weatherbed, May 7, 2026, 6:00 PM UTC

Image: The Verge

Jess Weatherbed is a news writer focused on creative industries, computing, and internet culture. Jess started her career at TechRadar, covering news and hardware reviews.

OpenAI is launching an optional safety feature for ChatGPT that allows adult users to assign an emergency contact for mental health and safety concerns. Friends, family members, or caregivers designated as a “Trusted Contact” will be notified if OpenAI detects that a person may have discussed topics like self-harm or suicide with the chatbot.

“Trusted Contact is designed around a simple, expert-validated premise: when someone may be in crisis, connecting with someone they know and trust can make a meaningful difference,” OpenAI said in its announcement. “It offers another layer of support alongside the localized helplines already available in ChatGPT.”

The Trusted Contact feature is opt-in. Any adult ChatGPT user can enable it by adding contact details for a fellow adult (18+ globally, or 19+ in South Korea) in their ChatGPT account settings. The Trusted Contact must accept the invitation within a week of receiving the request. Users can remove or edit their chosen contact in the settings, and the Trusted Contact can als...