Instagram to show PG-13 content by default to teens, adds more parental controls

In an effort to protect its underage users from harmful content, Instagram is rolling out new restrictions to teen accounts. Users under 18 will now by default only see content that adheres to PG-13 movie ratings, avoiding themes like extreme violence, sexual nudity, and graphic drug use.

Users under 18 won’t be able to change this setting without explicit approval from a parent or guardian.

Instagram is also introducing a stricter content filter, called Limited Content, which prevents teens who have the setting turned on from seeing certain posts or commenting on them.

The company said that starting next year, it will apply additional restrictions to the kinds of chats that teens with the Limited Content filter enabled can have with AI bots. It is already applying the new PG-13 content settings to AI conversations.

Image Credits: Instagram

This move comes as chatbot makers like OpenAI and Character.AI are being taken to court for allegedly causing harm to users. Last month, OpenAI rolled out new restrictions for ChatGPT users under 18 and said it is training the chatbot to refrain from “flirtatious talk.” Earlier this year, Character.AI also added new limits and parental controls.

Instagram, which has been building teen-safety tools across accounts, DMs, search, and content, is expanding controls and restrictions in several areas for underage users. The social media service will no longer allow teenagers to follow accounts that share age-inappropriate content. If teens already follow such accounts, they won’t be able to see or interact with those accounts’ content, and vice versa. The company is also removing such accounts from recommendations, making them harder to find.

Image Credits: Instagram

The company is also blocking teenagers from viewing inappropriate content linked to them in DMs.


Meta already restricts teen accounts from discovering content related to eating disorders and self-harm. The company is now blocking search terms like “alcohol” and “gore,” and says it is also making sure teens can’t surface content in these categories by searching for misspellings of those terms.

Image Credits: Instagram

The company said it is testing a new way for parents to flag content that shouldn’t be recommended to teens using supervision tools. Flagged posts will be sent to a review team.

Instagram is rolling out these changes in the U.S., U.K., Australia, and Canada starting today, and globally next year.

