On September 29, OpenAI introduced new parental control features in the web and mobile versions of ChatGPT following a lawsuit from the parents of a teenager who took his own life. The parents allege that the chatbot provided him with advice on self-harm methods. This was reported by Reuters.

The new settings allow parents to enable enhanced protection by linking their account to their teen's account. One party sends an invitation, and the controls are activated only after confirmation from the other party, the company explained. Under the new rules, parents will be able to restrict access to sensitive content, control whether ChatGPT remembers previous conversations, and decide whether those conversations can be used to train OpenAI's models, the company stated on the social media platform X.

Parents will also be able to set "quiet hours" that block access at certain times of day, disable voice mode, and turn off image generation and editing features. However, they will not have access to their teen's chat history. In exceptional cases, when systems or moderators detect signs of a serious threat to the child's life or health, parents may receive notifications with the minimum information necessary to protect their child. They will also be informed if their teen unlinks the accounts.
