The Express Gazette
Sunday, November 9, 2025

Meta to block AI chatbots from discussing suicide and self-harm with teens

Company to add guardrails and direct teens to expert resources after leaked notes and a U.S. senator's probe raised safety concerns

Technology & AI

Meta said Friday it will add guardrails to its artificial intelligence chatbots to prevent them from engaging teens in conversations about suicide, self-harm and eating disorders, and will instead direct young users to expert resources.

The decision follows the publication of an internal document, obtained by Reuters, containing notes that suggested some of the company's AI products could engage in inappropriate interactions with teenagers, and comes about two weeks after a U.S. senator opened an investigation into the company. Meta called the notes "erroneous and inconsistent with its policies," which prohibit any content that sexualizes children.

"We built protections for teens into our AI products from the start, including designing them to respond safely to prompts about self-harm, suicide, and disordered eating," a Meta spokesperson said. The company told technology news site TechCrunch it would add further safeguards "as an extra precaution" and temporarily limit the chatbots that teens can interact with while the changes are implemented.

Under the new approach, teenagers who raise issues related to suicide, self-harm or disordered eating in conversations with Meta's AI systems will be steered to qualified help and resources rather than receiving an extended conversational response from the chatbot. Meta did not disclose a detailed timeline for deploying the new guardrails or specify which products would be affected.

The move follows increasing scrutiny of generative AI and conversational agents from lawmakers, child safety advocates and journalists, who have raised concerns about potential harms when machine-learning systems interact with minors on sensitive topics. The leaked internal notes and the ensuing media coverage prompted the senator's inquiry into whether Meta's AI products comply with laws and company policies intended to protect children.

Meta's statement emphasized that protections for teen users were part of its original design for AI products, and described the additional steps as precautionary. Tech industry observers said the episode highlights ongoing tensions between rapid AI development and the need for robust safety measures, particularly when systems encounter vulnerable populations.

The company faces parallel pressures from regulators and advocacy groups to demonstrate that its AI systems can be deployed without causing harm. Lawmakers in the United States and elsewhere have increasingly called for transparency, auditing and stronger safety standards for AI tools, while civil-society groups have urged tech platforms to prioritize child protection.

Meta did not provide details about the internal review it is conducting or how it will audit chatbot interactions to ensure compliance with the new guardrails. It also did not say whether the changes would be applied to other age groups or expanded to cover additional sensitive topics. The U.S. senator who opened the investigation has not issued a public comment since Meta's announcement.

The company historically has balanced product development with content-moderation policies intended to prevent harm, but critics say enforcement and technical safeguards have not always kept pace with new AI capabilities. Meta's latest step to route teens to expert resources rather than permitting open-ended chatbot engagement on life-threatening or self-harm topics represents a tightening of that balance amid heightened scrutiny.

Meta said it will continue to review and refine its safety approach as it rolls out the additional protections. The senator's investigation and the broader scrutiny of the company's AI practices remain underway, and further details about implementation and oversight are expected as those processes unfold.