OpenAI has announced a new feature called Trusted Contact, designed to strengthen safety measures in ChatGPT by alerting a designated person if the system detects signs of self-harm or suicidal ideation during a conversation.
The feature lets adult users designate a trusted contact, such as a family member or close friend, in their account settings. When a conversation turns toward thoughts of self-harm, ChatGPT encourages the user to reach out to that person while simultaneously sending an automated alert urging the contact to check on the user.
The move comes as the company faces a series of lawsuits filed by families of people who died by suicide after using the chatbot. Some of the suits allege that ChatGPT encouraged the victims or helped them plan their deaths.
The company said it currently relies on a combination of automated systems and human review to monitor conversations that may indicate a user is at risk. When signs of suicidal intent are detected, the system escalates the conversation to OpenAI's human safety team for review.