On Tuesday, January 9, 2024, Meta announced that it will hide content related to suicide, self-harm, and eating disorders from teenagers’ accounts on Instagram and Facebook. In a blog post, the Menlo Park, California-based social media giant emphasized its commitment to giving teen users a safe and age-appropriate digital experience.
Meta clarified that, in addition to its existing efforts to avoid recommending “age-inappropriate” material to teens, it will now prevent such content from appearing in their feeds, even if shared by accounts they follow. The company stated, “We want teens to have safe, age-appropriate experiences on our apps.”
Provided teens disclosed their age accurately when registering, Meta will place them under the most restrictive content settings on both Instagram and Facebook, and these users will also be blocked from searching for potentially harmful terms.
Meta acknowledged the complexity of certain topics, such as struggles with self-harm, stating, “Take the example of someone posting about their ongoing struggle with thoughts of self-harm. This is an important story, and can help destigmatize these issues, but it’s a complex topic and isn’t necessarily suitable for all young people.” In response, Meta pledged to remove such content from teens’ experiences on Instagram and Facebook, along with other types of age-inappropriate content.
This announcement coincides with Meta facing legal challenges from numerous U.S. states, alleging that the company knowingly designed features on Instagram and Facebook that contribute to the mental health crisis among young people.
However, critics argue that Meta’s actions fall short. Josh Golin, executive director of the children’s online advocacy group Fairplay, described Meta’s announcement as a “desperate attempt to avoid regulation” and questioned why the company waited until 2024 to implement these measures.