You could soon be able to chat with a grown-up version of ChatGPT, if the latest announcement about the popular AI chatbot is anything to go by.
Earlier this week, it was revealed that OpenAI was preparing to introduce an adults-only mode for its popular chatbot in a move aimed at keeping children out while giving adult users room for more mature and nuanced conversations.
The feature, which is expected to be rolled out as early as 2026, was announced by OpenAI’s CEO of Applications, Fidji Simo, who said the rollout would depend on the success of a new age prediction AI model, which is currently in its testing phase.
Fans of the popular chatbot will have noted that restrictions apply when they try to ask about content considered mature. This is because the AI has little to no way of knowing who is behind the keyboard making the prompts.
The new system, however, is expected to automatically identify users who are under 18 and apply appropriate content restrictions without necessarily relying on self-declared age confirmations.
Traditionally, websites have relied on the use of simple "Are you over 18?" pop-ups. But OpenAI is reportedly aiming to move away from the honour system and instead develop an AI-based approach that infers a user's likely age by analysing conversational behaviour.
The system is expected to pick up on signals such as language patterns and contextual cues, which can hint at a user's level of maturity and, by extension, their likely age.
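OpenAI has not published details of how its age-prediction model works, so the following is only a minimal, hypothetical sketch of the general idea: scoring conversational cues to decide whether a stricter content mode should apply. The cue lists, function names and thresholds are all assumptions made for illustration, not anything confirmed by OpenAI.

```python
# Hypothetical sketch only: infer an age bracket from conversational signals.
# This is NOT OpenAI's model; it just illustrates scoring language patterns
# and contextual cues to choose between a restricted and an adult mode.
from dataclasses import dataclass


@dataclass
class AgeEstimate:
    likely_minor: bool
    confidence: float


# Example cue lists, entirely made up for illustration.
MINOR_CUES = {"homework", "my teacher", "grade 9", "after school"}
ADULT_CUES = {"mortgage", "my employer", "tax return", "commute"}


def estimate_age_bracket(messages: list[str]) -> AgeEstimate:
    """Score a conversation and guess whether the user is likely under 18."""
    minor_hits = adult_hits = 0
    for msg in messages:
        text = msg.lower()
        minor_hits += sum(cue in text for cue in MINOR_CUES)
        adult_hits += sum(cue in text for cue in ADULT_CUES)

    total = minor_hits + adult_hits
    if total == 0:
        # No signal at all: default to the restrictive mode, since the
        # reported approach errs on the side of treating users as minors.
        return AgeEstimate(likely_minor=True, confidence=0.0)

    confidence = abs(minor_hits - adult_hits) / total
    return AgeEstimate(likely_minor=minor_hits >= adult_hits, confidence=confidence)


if __name__ == "__main__":
    chat = ["Can you help with my homework?", "My teacher wants an essay by Friday."]
    print(estimate_age_bracket(chat))  # AgeEstimate(likely_minor=True, confidence=1.0)
```

A production system would presumably rely on learned models over far richer signals rather than keyword counts, but the basic shape is the same: behavioural evidence in, an age judgement and a content policy out.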
The latest move comes amid growing scrutiny of AI platforms and their accessibility to minors. Because these tools are interactive and capable of generating tailored responses, OpenAI says they need stronger safeguards than older internet platforms, which relied solely on warnings or disclaimers to limit liability.
The move is also partly a response to rival AI platforms that have experimented with age-limited chatbots. OpenAI now appears ready to formally distinguish between family-friendly and adult experiences while maintaining strict safety and legal controls.
ChatGPT in particular has drawn criticism from sections of users who argue that it is overly restrictive. Some have claimed that its safety guidelines prevent meaningful conversations even when users are only seeking content for educational purposes.
Age prediction, however, will present its fair share of challenges, especially if children or older teenagers figure out how to 'trick' the bot with calculated behavioural cues so that it reads them as more mature than they are. In the reverse scenario, an adult could be wrongly restricted.
For this reason, OpenAI has decided to conduct trials in several countries to test the accuracy and reliability of the age-prediction model, insisting that the adult mode will not launch until the system consistently meets safety and compliance standards.