OpenAI updated its policy this week to include safety tools that will notify parents and law enforcement if users under the age of 18 engage in conversations about self-harm.