{"id":335,"date":"2025-09-05T19:49:48","date_gmt":"2025-09-05T19:49:48","guid":{"rendered":"https:\/\/globaltaalenthq.com\/?p=335"},"modified":"2025-09-08T08:50:49","modified_gmt":"2025-09-08T08:50:49","slug":"states-warn-openai-of-serious-concerns-with-chatbot","status":"publish","type":"post","link":"https:\/\/globaltaalenthq.com\/index.php\/2025\/09\/05\/states-warn-openai-of-serious-concerns-with-chatbot\/","title":{"rendered":"States warn OpenAI of 'serious concerns' with chatbot"},"content":{"rendered":"
California and Delaware warned OpenAI on Friday that they have “serious concerns” about the AI company’s safety practices in the wake of several recent deaths reportedly connected to ChatGPT.
In a letter to the OpenAI board, California Attorney General Rob Bonta and Delaware Attorney General Kathleen Jennings noted they recently met with the firm’s legal team and “conveyed in the strongest terms that safety is a non-negotiable priority, especially when it comes to children.”
The pair’s latest missive comes after the family of a 16-year-old boy sued OpenAI last Tuesday, alleging ChatGPT encouraged him to take his own life. The Wall Street Journal also reported last week that the chatbot fueled a 56-year-old Connecticut man’s paranoia before he killed himself and his mother in August.

“The recent deaths are unacceptable,” Bonta and Jennings wrote. “They have rightly shaken the American public’s confidence in OpenAI and this industry.”

“OpenAI – and the AI industry – must proactively and transparently ensure AI’s safe deployment,” they continued. “Doing so is mandated by OpenAI’s charitable mission, and will be required and enforced by our respective offices.”

The state attorneys general underscored the need to center safety as they continue discussions with the company about its restructuring plans.

“It is our shared view that OpenAI and the industry at large are not where they need to be in ensuring safety in AI products’ development and deployment,” Bonta and Jennings said.

“As we continue our dialogue related to OpenAI’s recapitalization plan, we must work to accelerate and amplify safety as a governing force in the future of this powerful technology,” they added.

OpenAI, which is based in California and incorporated in Delaware, has previously engaged with the pair on its efforts to alter the company’s corporate structure.

It initially announced plans in December to fully transition the firm into a for-profit company without nonprofit oversight. However, it later walked back the push, agreeing to keep the nonprofit in charge and citing discussions with the attorneys general and other leaders.

In the wake of recent reports about ChatGPT-connected deaths, OpenAI announced Tuesday that it was adjusting how its chatbots respond to people in crisis and enacting stronger protections for teens.

Bret Taylor, chair of the OpenAI board, said in a statement Friday that the company is fully committed to addressing the concerns raised by the attorneys general.

“We are heartbroken by these tragedies and our deepest sympathies are with the families,” he said. “Safety is our highest priority and we’re working closely with policymakers around the world.”

“We remain committed to learning and acting with urgency to ensure our tools are helpful and safe for everyone, especially young people,” Taylor added. “To that end, we will continue to have these important discussions with the Attorneys General so we have the benefit of their input moving forward.”

OpenAI is not the only tech company under fire lately over its AI chatbots. Reuters reported last month that a Meta policy document featured examples suggesting its chatbots could engage in “romantic or sensual” conversations with children.