OpenAI plans to roll out new parental controls for ChatGPT within the next month, following criticism over how the AI assistant handles vulnerable users, according to Interesting Engineering.
The move comes after lawsuits and reports linking the chatbot to tragic incidents involving teens and adults in crisis.
The company said parents will soon be able to link their own accounts with their teen’s ChatGPT account via an email invitation. The minimum age for a teen account is 13.
Parents will gain the ability to control how ChatGPT responds, with age-appropriate behavior rules on by default. They can also disable features like memory and chat history. Another tool will notify parents if the system detects their teen is experiencing acute distress.
“These controls add to features we have rolled out for all users, including in-app reminders during long sessions to encourage breaks,” OpenAI said.
OpenAI noted that this is only the beginning. “We will continue learning and strengthening our approach, guided by experts, with the goal of making ChatGPT as helpful as possible,” the company wrote. “We look forward to sharing our progress over the coming 120 days.”
OpenAI’s announcement follows a lawsuit filed in August by Matt and Maria Raine after their 16-year-old son, Adam, died by suicide. Court filings show that Adam exchanged 377 messages flagged for self-harm content, and that ChatGPT mentioned suicide 1,275 times in those conversations, six times more often than Adam himself did.
Last week, The Wall Street Journal reported another case: a 56-year-old man killed his mother and then himself after ChatGPT reinforced his paranoid delusions instead of challenging them.
“This work has already been underway, but we want to proactively preview our plans for the next 120 days, so you won’t need to wait for launches to see where we’re headed,” OpenAI wrote in its blog post.
To guide the changes, OpenAI is working with an Expert Council on Well-Being and AI. The group aims to define and measure well-being and set safeguards. A separate Global Physician Network of more than 250 doctors provides medical expertise. Ninety of them are contributing specific research on adolescent mental health, substance use, and eating disorders.
OpenAI acknowledged that its safeguards can weaken during extended conversations. “As the back-and-forth grows, parts of the model’s safety training may degrade,” the company said last week. While ChatGPT may initially point users to suicide hotlines, it could offer harmful answers later in a lengthy conversation.