The California-based company announced the move in a blog post Tuesday, describing the tool as a way to support families in "setting healthy guidelines that fit the unique developmental phase of teenagers."
The announcement comes a week after California couple Matt and Maria Raine sued OpenAI, claiming the chatbot played a role in the suicide of their 16-year-old son, Adam.
The parents argue that ChatGPT reinforced Adam's "most harmful and self-destructive thoughts," and that his death was a "predictable outcome of intentional design choices."
OpenAI, which has expressed its condolences over the teenager's death, did not mention the lawsuit in its announcement of the parental controls.
The family's lawyer, Jay Edelson, dismissed the new measures as an attempt to "change the argument."
"They are saying the product should be more sensitive to people in crisis, be more 'helpful,' show a little more 'empathy,' and that the experts will figure that out," Edelson said.
"Strategically, we understand why they want that. OpenAI cannot respond to what actually happened to Adam, because Adam's case is not about ChatGPT failing to 'help.'"
