Gaming News
| Published On Sep 1, 2025 7:45 am CEST | By Jenny Patel

Meta Updates AI Chatbot Rules After Child Safety Concerns

Meta has come under pressure to fix its AI chatbot policies after recent reports revealed serious risks for children and even celebrities. The company has now confirmed new safeguards and training measures to limit inappropriate interactions on its platforms.


Good to know

  • Meta chatbots will no longer engage teens on sensitive topics like romance or sex.
  • Teen access will be limited to a smaller set of AI characters.
  • U.S. Senators and Attorneys General are pressing Meta for answers.

The controversy started earlier this month when Reuters published findings that exposed how chatbots on Meta platforms could engage in troubling conversations with minors. The investigation highlighted how the technology lacked clear safety barriers.

Meta’s spokesperson Stephanie Otway addressed the issue in a statement:

“As we continue to refine our systems, we’re adding more guardrails as an extra precaution—including training our AIs not to engage with teens on these topics, but to guide them to expert resources, and limiting teen access to a select group of AI characters for now. These updates are already in progress, and we will continue to adapt our approach to help ensure teens have safe, age-appropriate experiences with AI.”

Second wave of controversy

The problem worsened when Reuters followed up with another report, this time pointing out how “parody” chatbots were impersonating celebrities. Some of these bots were found sharing sexual content and creating explicit images of stars like Taylor Swift, Selena Gomez, Scarlett Johansson, and Anne Hathaway. Even a chatbot mimicking 16-year-old actor Walker Scobell was discovered.

While many of these bots were created by users, Reuters found at least a few that were built by a Meta employee—including one impersonating Swift and another pretending to be Formula One driver Lewis Hamilton. Meta later removed those accounts.

The situation drew a sharp response from the National Association of Attorneys General, which issued a letter saying:

“Exposing children to sexualized content is indefensible. And conduct that would be unlawful—or even criminal—if done by humans is not excusable simply because it is done by a machine.”

SAG-AFTRA, the U.S. union that represents actors and media professionals, also weighed in. Duncan Crabtree-Ireland, its national executive director, told Reuters:

“If a chatbot is using the image of a person and the words of the person, it’s readily apparent how that could go wrong.”

The union has long pushed for stronger rules to protect members against unauthorized AI use, and the new revelations only underscore its concerns.

A bigger problem for AI platforms

Meta is not the only company facing tough questions, but its position as the operator of Facebook, Instagram, and WhatsApp means that problems with its chatbots quickly become global. With regulators in Washington and state attorneys general now investigating, the company is under mounting pressure to show it can balance innovation with safety.

Jenny Patel

Jenny Patel, a dedicated freelance writer, has been consumed by her love for gaming since her childhood days. Her go-to games growing up were Elder Scrolls V: Skyrim on PC and Halo 3 on XBOX. Jenny now enjoys the flexibility of working remotely, allowing her to explore the world while indulging in her gaming passion.

Tags: AI, Meta