{"id":1179,"date":"2025-10-17T13:05:32","date_gmt":"2025-10-17T13:05:32","guid":{"rendered":"https:\/\/globaltaalenthq.com\/?p=1179"},"modified":"2025-10-20T08:53:33","modified_gmt":"2025-10-20T08:53:33","slug":"meta-adding-ai-chatbot-safety-features-for-teens","status":"publish","type":"post","link":"https:\/\/globaltaalenthq.com\/index.php\/2025\/10\/17\/meta-adding-ai-chatbot-safety-features-for-teens\/","title":{"rendered":"Meta adding AI chatbot safety features for teens"},"content":{"rendered":"
Meta, the parent company of Instagram and Facebook, plans to roll out new safety features for its AI chatbots to help protect teens amid growing concerns<\/a> about the technology\u2019s impact on young users<\/a>.\u00a0<\/p>\n The social media giant announced Friday it will add new parental controls for AI chatbots that will allow parents to turn off their teens\u2019 access to one-on-one chats with AI characters and receive information about the topics their teens are chatting about with the company\u2019s AI products. \u00a0<\/p>\n The new features are set to launch early next year, starting with Instagram. <\/p>\n \u201cWe recognize parents already have a lot on their plates when it comes to navigating the internet safely with their teens, and we\u2019re committed to providing them with helpful tools and resources that make things simpler for them, especially as they think about new technology like AI,\u201d Instagram head Adam Mosseri and Meta\u2019s chief AI officer, Alexandr Wang, wrote in a blog post<\/a>.\u00a0<\/p>\n Parents will be able to block chats with all of Meta\u2019s AI characters or target specific characters, the company noted. The company\u2019s AI assistant will remain available to teens even if the AI characters are disabled. <\/p>\n Meta also highlighted its recently announced PG-13 approach to teen accounts, in which the company will use PG-13 movie ratings to guide the content that teens see by default on its platforms.<\/p>\n The tech firm noted that its AI characters are designed not to engage young users in discussions of suicide, self-harm or disordered eating and direct them to resources if necessary. <\/p>\n Teens are also only able to interact with a limited set of characters on \u201cage-appropriate topics like education, sports, and hobbies \u2013 not romance or other inappropriate content,\u201d according to Meta. <\/p>\n Meta came under fire<\/a> earlier this year after a policy document featured examples suggesting its AI chatbots could engage children in conversations that are \u201cromantic or sensual.\u201d\u00a0<\/p>\n The company said at the time the examples were erroneous and were ultimately removed.\u00a0<\/p>\n AI chatbots across the board have faced scrutiny in recent months. The family of a California teenager sued OpenAI in August, accusing ChatGPT of encouraging their son to take his own life. <\/p>\n The father, Matthew Raine, was one of several parents who testified<\/a> before a Senate panel last month that AI chatbots drove their children to suicide or self-harm and urged lawmakers to set guardrails on the new technology.\u00a0<\/p>\n