Meta is under fresh pressure after two current and two former employees provided documents to Congress suggesting the company discouraged research into child safety on its platforms. The disclosures, first reported by The Washington Post, raise questions about how Meta handles internal investigations into sensitive issues such as youth safety, harassment, and discrimination.
Good to Know
The whistleblowers allege that Meta altered policies on researching sensitive subjects—including children’s safety, harassment, politics, and race—shortly after Frances Haugen’s 2021 disclosures revealed Instagram harmed teen girls’ mental health.
According to the report, Meta suggested two options for reducing research "risk": involving lawyers in studies so that findings would be shielded by attorney-client privilege, or writing up sensitive findings in vaguer terms.
Jason Sattizahn, a former VR researcher at Meta, said his supervisor instructed him to delete recordings of an interview where a teen described his 10-year-old brother being sexually propositioned on Horizon Worlds, Meta’s virtual reality app.
A Meta spokesperson responded: “Global privacy regulations make clear that if information from minors under 13 years of age is collected without verifiable parental or guardian consent, it has to be deleted.”
Whistleblowers argue, however, that the broader culture discouraged open conversations about how children under 13 were accessing Meta’s VR platforms.
Meta disputed the narrative, stating: “These few examples are being stitched together to fit a predetermined and false narrative; in reality, since the start of 2022, Meta has approved nearly 180 Reality Labs-related studies on social issues, including youth safety and well-being.”
| Topic | Whistleblower Claim | Meta Response |
| --- | --- | --- |
| Research policies | Policies changed after Haugen’s leaks to restrict sensitive studies | Company highlights nearly 180 approved Reality Labs studies since 2022 |
| Youth in Horizon Worlds | Employees discouraged from discussing under-13 use | Meta cites privacy laws requiring deletion of under-13 data |
| VR harassment | Evidence of children being propositioned deleted at a manager’s request | Meta did not directly address the claim but stressed regulatory compliance |
| Overall narrative | Company suppresses or downplays research on youth risks | Meta says the allegations are isolated examples stitched into a “false narrative” |
In February, Kelly Stonelake, a 15-year Meta veteran, filed a lawsuit echoing similar concerns. She alleged that Horizon Worlds lacked safeguards to prevent underage use, and that racism was rampant: “The leadership team was aware that in one test, it took an average of 34 seconds of entering the platform before users with Black avatars were called racial slurs, including the ‘N-word’ and ‘monkey.’”
Stonelake has filed a separate suit against Meta alleging gender discrimination and sexual harassment.
Meta’s challenges aren’t limited to Horizon Worlds. Reuters reported last month that the company’s AI chatbot guidelines once allowed romantic or sensual conversations with children, intensifying scrutiny over how its products interact with minors.