In response to growing controversy over how artificial intelligence is affecting young people’s mental health, OpenAI has announced plans to implement parental controls for ChatGPT.
The California-based AI company stated in a blog post on Tuesday that it would be rolling out the features in response to families’ requests for assistance “in setting healthy guidelines that fit a teen’s unique stage of development.”
Under the changes, parents will be able to link their ChatGPT accounts with those of their children, modify certain features, including memory and chat history, and control how the chatbot responds to queries through “age-appropriate model behavior rules.”
OpenAI said parents will also be notified when their teen shows signs of distress, adding that it will seek expert input before implementing the feature to “support trust between parents and teens.”
The changes, which build on a series of measures announced last week to improve safety for vulnerable users, are due to take effect within the next month.
“These actions are just the start,” the company said, adding that it would keep improving ChatGPT under the guidance of experts and that it looked forward to sharing its progress over the coming 120 days.
The announcement came a week after a California couple filed a lawsuit against OpenAI, alleging that the company was responsible for the suicide of their 16-year-old son.
In their lawsuit, Matt and Maria Raine claim that ChatGPT validated their son Adam’s “most harmful and self-destructive thoughts” and that his death was a “predictable outcome of deliberate design decisions.”
OpenAI, which previously expressed its condolences over the teenager’s death, did not specifically mention the case in its announcement on parental controls.
Jay Edelson, the Raine family’s attorney, criticized OpenAI’s planned changes as an attempt to “shift the debate.”
“They say that the product should just be more sensitive to people in crisis, be more helpful, and show a little more empathy,” Edelson said in a statement.
Strategically, he added, OpenAI cannot respond to what actually happened to Adam, because the case is not about ChatGPT being “helpful”; it is about a product that actively coached a teenager to suicide.
The use of AI models by people experiencing severe mental distress has become a growing concern amid their widespread adoption as a substitute for a therapist or friend.
Researchers found that ChatGPT, Google’s Gemini, and Anthropic’s Claude adhered to best-practice guidelines when responding to questions about suicide at very high and very low levels of risk, but were inconsistent when responding to queries with “intermediate levels of risk.”
According to the authors, the findings suggest that further refinement is needed to ensure LLMs can be safely and effectively used to provide mental health information, particularly in high-stakes scenarios involving suicidal ideation.
Source: Al Jazeera