OpenAI has announced that it has formed a global network of over 170 mental health experts to help its artificial intelligence chatbot, ChatGPT, respond to emotional or psychological issues.
The company said that the experts, including psychologists, psychiatrists and primary care doctors from over 60 countries, will help ensure the chatbot safely handles users showing signs of emotional or psychological distress.
“We worked with more than 170 mental health experts to help ChatGPT more reliably recognise signs of distress, respond with care, and guide people toward real-world support – reducing responses that fall short of our desired behaviour by 65-80 per cent,” the company stated.
It added that the experts advised on how the AI tool should handle sensitive conversations involving mania, psychosis, or suicidal thoughts.
“We’ve updated the Model Spec to make some of our longstanding goals more explicit: that the model should support and respect users’ real-world relationships and avoid affirming ungrounded beliefs that potentially relate to mental or emotional distress.
"Respond safely and empathetically to potential signs of delusion or mania, and pay closer attention to indirect signals of potential self-harm or suicide risk,” the Company stated.
This came after OpenAI released new data showing that a small proportion of ChatGPT users may be experiencing mental health emergencies.
According to OpenAI, about 0.07 per cent of weekly active users show possible signs of mental health struggles, while 0.15 per cent have conversations that suggest potential suicidal planning or intent.
It stated that while the percentage may appear small, it translates to hundreds of thousands of people, since ChatGPT currently records around 800 million weekly users globally.
According to OpenAI, the advisory team has helped design ChatGPT's responses so that the chatbot reacts with safety and empathy and encourages users to seek help in real life.
The company has also updated the system to detect indirect signals of self-harm and to respond to signs of mania or delusion.
“Going forward, in addition to our longstanding baseline safety metrics for suicide and self-harm, we are adding emotional reliance and non-suicidal mental health emergencies to our standard set of baseline safety testing for future model releases,” part of the statement reads.