OpenAI and Anthropic have unveiled new initiatives aimed at enhancing the safety of teenagers interacting with their technologies. These measures come in response to growing criticism about the ineffectiveness of current age verification methods, which often only involve confirming a birth date.
OpenAI has announced an update to its ChatGPT model specification, tailoring its behavior for teenage users. The document states that the safety of younger users takes priority, even when it conflicts with other business goals. The model will now encourage teens to seek offline communication and support, while responding to them with "warmth and respect".
This marks a shift in communication style: rather than taking a condescending tone, the model is meant to treat young users as equals. The changes were prompted by incidents in which interactions with AI ended tragically, including suicides that followed prolonged conversations with chatbots that consistently agreed with the users' views.
For its part, Anthropic has prohibited minors from using its Claude model: only users aged 18 and older may now register for an account. The company also plans to employ algorithms that detect age-related cues in conversations and block accounts belonging to underage users.
Google previously implemented similar measures, but their effectiveness drew scrutiny: some adult users reported false blocks that required additional age verification, creating difficulties for people who simply wanted to use search or watch videos.
Experts believe the new initiatives by OpenAI and Anthropic could be a significant step toward protecting teenagers from the adverse effects of AI, though the effectiveness of these systems has yet to be tested in practice.