ChatGPT could soon alert police when teens discuss suicide. OpenAI CEO and co-founder Sam Altman revealed the possible change during a recent interview, and his comments mark a major shift in how the AI company may handle mental health crises. ChatGPT, the widely used artificial intelligence chatbot that answers questions and holds conversations, has become a daily tool for millions.
Sign up for my FREE CyberGuy Report
Get my best tech tips, urgent security alerts and exclusive deals delivered straight to your inbox. Plus, you’ll get instant access to my Ultimate Scam Survival Guide — free when you join my CyberGuy.com/Newsletter

Sam Altman, chief executive officer of OpenAI Inc. (Nathan Howard/Bloomberg via Getty Images)
Why OpenAI is considering police alerts
Altman said, “It’s very reasonable for us to say in cases of young people talking about suicide, seriously, where we cannot get in touch with the parents, we do call authorities.”
Until now, ChatGPT’s response to suicidal thoughts has been to suggest hotlines. This new policy signals a move from passive suggestions to active intervention.
Altman admitted the change comes at a cost to privacy. He stressed that user privacy matters, but acknowledged that preventing tragedy must come first.

Teens can easily access ChatGPT on a mobile device. (Jaap Arriens/NurPhoto via Getty Images)
Tragedies that prompted action
The shift follows lawsuits tied to teen suicides. The most high-profile case involves 16-year-old Adam Raine of California. His family alleges ChatGPT provided a “step-by-step playbook” for suicide, including instructions for tying a noose and even drafting a goodbye note.
After Raine’s death in April, his parents sued OpenAI. They argued that the company failed to stop its AI from guiding their son toward harm.
Another lawsuit accused rival chatbot Character.AI of negligence. A 14-year-old reportedly took his own life after forming an intense connection with a bot modeled on a TV character. Together, these cases highlight how quickly teens can form unhealthy bonds with AI.

Adam Raine, a California teen, took his life in April 2025 amid claims ChatGPT coached him. (Raine Family)
How widespread is the problem?
Altman pointed to global numbers to justify stronger measures. He noted that about 15,000 people take their own lives each week worldwide. With roughly 10% of the world using ChatGPT, he estimated that around 1,500 of those people (10% of 15,000) may interact with the chatbot each week.
Research backs up concerns about teen reliance on AI. A Common Sense Media survey found 72% of U.S. teens use AI tools, with one in eight seeking mental health support from them.
OpenAI’s 120-day plan
In a blog post, OpenAI outlined steps to strengthen protections. The company said it will:
- Expand interventions for people in crisis.
- Make it easier to reach emergency services.
- Enable connections to trusted contacts.
- Roll out stronger safeguards for teens.
To guide these efforts, OpenAI created an Expert Council on Well-Being and AI. This group includes specialists in youth development, mental health and human-computer interaction. Alongside them, OpenAI is working with a Global Physician Network of more than 250 doctors across 60 countries.
These experts are helping design parental controls and safety guidelines. Their role is to ensure AI responses align with the latest mental health research.

A teen using ChatGPT. (Frank Rumpenhorst/Picture Alliance via Getty Images)
New protections for families
Within weeks, parents will be able to:
- Link their ChatGPT account with their teen's.
- Adjust model behavior to match age-appropriate rules.
- Disable features like memory and chat history.
- Get alerts if the system detects acute distress.
These alerts are designed to notify parents early. Still, Altman admitted that when parents are unreachable, police may become the fallback option.

Teens can use ChatGPT to complete homework. (Kurt “CyberGuy” Knutsson)
Limits of AI safeguards
OpenAI admits its safeguards can weaken over time. While short chats often redirect users to crisis hotlines, long conversations can erode built-in protections. This “safety degradation” has already led to cases where teens received unsafe advice after extended use.
Experts warn that relying on AI for mental health can be risky. ChatGPT is trained to sound human but cannot replace professional therapy. The concern is that vulnerable teens may not know the difference.
Steps parents can take now
Parents should not wait for new features to arrive. Here are immediate ways to keep teens safe:
1) Start regular conversations
Ask open questions about school, friendships and feelings. Honest dialogue reduces the chance teens will turn only to AI for answers.
2) Set digital boundaries
Use parental controls on devices and apps. Limit access to AI tools late at night when teens may feel most isolated.
3) Link accounts when available
Take advantage of new OpenAI features that connect parent and teen profiles for closer oversight.
4) Encourage professional support
Reinforce that mental health care is available through doctors, counselors or hotlines. AI should never be the only outlet.
5) Keep crisis contacts visible
Post numbers for hotlines and text lines where teens can see them. For example, in the U.S., call or text 988 for the Suicide & Crisis Lifeline.
6) Watch for changes
Notice shifts in mood, sleep or behavior. Combine these signs with online patterns to catch risks early.
Take my quiz: How safe is your online security?
Think your devices and data are truly protected? Take this quick quiz to see where your digital habits stand. From passwords to Wi-Fi settings, you’ll get a personalized breakdown of what you’re doing right — and what needs improvement. Take my Quiz here: CyberGuy.com/Quiz
Kurt’s key takeaways
OpenAI’s plan to involve police shows how urgent the issue has become. AI has the power to connect, but it also carries risks when teens use it in moments of despair. Parents, experts and companies must work together to create safeguards that save lives without sacrificing trust.
Would you be comfortable with AI companies alerting police if your teen shared suicidal thoughts online? Let us know by writing to us at CyberGuy.com/Contact
Copyright 2025 CyberGuy.com. All rights reserved.