A federal lawsuit over social media safety is focusing on how long it took Meta to introduce protections for teens on Instagram, including tools designed to limit exposure to explicit images in private messages.
In a newly unsealed deposition, Instagram head Adam Mosseri answered questions about a 2018 email exchange discussing harmful activity that could occur through direct messaging. Plaintiffs' lawyers argued that the risks were known long before product changes arrived.
Mosseri acknowledged the possibility of inappropriate content being shared but rejected suggestions that Instagram should have warned parents more directly, noting that similar risks exist across messaging services. He said company decisions involved balancing user privacy with safety measures.
Testimony included survey data showing that 19.2 percent of users aged 13 to 15 reported seeing unwanted sexual imagery on Instagram, while 8.4 percent said they had encountered self-harm content during a recent week of app use.
Plaintiffs are concentrating on the timing of safeguards rather than on current features. Legal teams pointed to earlier internal discussions and messages referencing concerns about harmful interactions involving minors, including grooming scenarios.
Meta spokesperson Liza Crenshaw said:
“For over a decade, we’ve listened to parents, worked with experts and law enforcement, and conducted in-depth research to understand the issues that matter most. We use these insights to make meaningful changes—like introducing Teen Accounts with built-in protections and providing parents with tools to manage their teens’ experiences. We’re proud of the progress we’ve made, and we’re always working to do better.”
The case, filed in the U.S. District Court for the Northern District of California, forms part of wider legal efforts alleging that major platforms are designed to maximize engagement in ways that may encourage compulsive use among younger audiences. Defendants include Meta, Snap, TikTok, and YouTube.