- ChatGPT will start estimating users' ages based on conversation patterns
- Users identified as teens will be moved to a filtered, teen-specific experience
- Parents will get new tools to link accounts, set usage limits, and receive alerts about their teen's mental state
OpenAI wants ChatGPT to act like a bouncer at a club, sizing up your age before deciding whether to let you in. The AI won't rely on your (possibly made-up) date of birth or your ID, but on how you interact with the chatbot.
If the system suspects you are under 18, it will automatically switch you to a more limited version of the chatbot designed specifically to protect teens from inappropriate content. And if it isn't sure, it will err on the side of caution. If you want the adult version of ChatGPT back, you may need to prove you're old enough to buy a lottery ticket.
The idea that generative AI shouldn't treat everyone the same way is certainly understandable. Especially with teens using AI, OpenAI has to consider the unique set of risks involved. The teen-specific ChatGPT experience will limit discussions of subjects such as sexual content and handle topics like depression and self-harm more delicately. And while adults can still discuss these subjects in context, teen users will see a lot more of "Sorry, I can't help with that" when wading into sensitive areas.
To determine your age, ChatGPT will look back through your conversation for patterns that signal age, especially signs that someone is under 18. ChatGPT's guesses about your age can come from the types of questions you ask, your writing style, how you react to being corrected, or even which emoji you prefer. If you trip its teen alarm bells, into the age-appropriate mode you go.
You might be 27 and asking questions about career-change anxiety, but if you type like a high schooler, you could end up explaining your spiraling worries to your parents.
OpenAI admits there will be mistakes, because "even the most advanced systems will sometimes struggle to predict age." In those cases, it will default to the safer mode and let adults prove their age to regain access to the adult version of ChatGPT.
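Boiled down, the fallback behavior OpenAI describes is a simple rule: when in doubt, pick the restricted mode, and let age verification override the guess. A minimal Python sketch of that logic (the function name, score, and threshold here are all hypothetical illustrations, not anything OpenAI has published):

```python
# Hypothetical sketch of the routing rule described above. OpenAI's real
# classifier, signals, and thresholds are not public; this only illustrates
# the "err on the side of caution" fallback.

def route_experience(under_18_score: float, age_verified: bool) -> str:
    """Choose which ChatGPT experience a user gets.

    under_18_score: an assumed 0.0-1.0 confidence that the user is a minor,
    inferred from conversation patterns. age_verified: the user has proven
    they are an adult.
    """
    if age_verified:
        return "adult"            # verified adults regain full access
    if under_18_score >= 0.5:     # suspected minor -- or simply unsure
        return "teen_filtered"    # default to the safer mode
    return "adult"

# A clearly adult-looking conversation stays unrestricted...
print(route_experience(0.1, age_verified=False))   # -> adult
# ...an ambiguous one gets the filtered experience...
print(route_experience(0.5, age_verified=False))   # -> teen_filtered
# ...unless the user verifies their age.
print(route_experience(0.9, age_verified=True))    # -> adult
```

The key design choice, per the article, is that uncertainty resolves toward the teen mode rather than the adult one, with verification as the only way to override a wrong guess.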
Emotionally safe models
This new age-prediction system is the centerpiece of OpenAI's next phase of improving teen safety. New parental controls are also coming later this month. These tools will let parents link their accounts with their children's, limit access during certain hours, and receive alerts if the system detects what it calls "acute distress."
Depending on the severity of the situation, and if the parents can't be reached, OpenAI may even contact law enforcement based on the conversation.
Making ChatGPT a guidance counselor for teens via built-in content filters is a notable change in itself. Doing it without the user's say-so is an even bigger swing, because it means the AI decides not only your age but how your experience should differ from an adult's ChatGPT conversation.
So if ChatGPT starts seeming more cautious or oddly sensitive, you may want to check whether you've suddenly been flagged as a teen. You might just have a creative or youthful writing style, but you'll still have to prove you're legally an adult if you want to have broader discussions.
Maybe complain about your back hurting for no reason, or about how music just isn't as good as it used to be, to convince the AI of your advanced age.