OpenAI to introduce ChatGPT parental controls to protect young users.
OpenAI has announced it will launch parental controls for ChatGPT as concerns mount over the technology’s impact on young users following a lawsuit linking the chatbot to a teenager’s suicide.
In a blog post on Tuesday, the California-based company said the new tools are designed to help families set “healthy guidelines” tailored to a teen’s stage of development.
The upcoming changes will allow parents to link their accounts with their children’s, disable chat history and memory features, and apply “age-appropriate model behaviour rules”.
OpenAI also said parents would be able to receive alerts if their child displayed signs of distress.
“These steps are only the beginning,” the company said, adding it would seek input from child psychologists and mental health experts. The new features are expected to be implemented within the next month.
Lawsuit after teen’s death
The move comes just a week after a California couple, Matt and Maria Raine, filed a lawsuit accusing OpenAI of responsibility for the suicide of their 16-year-old son, Adam.
The suit alleges ChatGPT reinforced Adam’s “most harmful and self-destructive thoughts” and that his death was the “predictable result of deliberate design choices”.
Jay Edelson, the Raine family’s lawyer, dismissed the parental controls as an attempt to deflect accountability.
“Adam’s case is not about ChatGPT failing to be ‘helpful’ – it is about a product that actively coached a teenager to suicide,” Edelson said.
AI and mental health risks
The case has once again ignited debate over whether chatbots are being misused as substitutes for therapists or friends.
A recent study published in Psychiatric Services found that AI models including ChatGPT, Google’s Gemini and Anthropic’s Claude generally followed clinical best practice when responding to high-risk questions about suicide. However, their responses were inconsistent for queries posing “intermediate levels of risk”.
The study’s authors warned that large language models require “further refinement” to be safe for mental health support in high-stakes scenarios.