We want teens to have safe, positive experiences on Facebook and Instagram, which includes helping them explore their interests while showing them content that's appropriate for their age. We work to prevent teens from seeing inappropriate content in three main ways: removing content completely when it breaks our rules, hiding certain types of mature or sensitive content from teens, and having a stricter bar for what we recommend.
We’re continuously evolving our approach to help make sure we’re providing teens with safe, age-appropriate experiences, and to incorporate the best available research and expert advice, giving parents peace of mind. We also wanted to align more closely with an independent standard that’s already familiar to families, so we reviewed our age-appropriate guidelines - the guidelines that determine what content we hide from teens and avoid recommending to them - against PG-13 movie ratings. This means that teens will see content that’s generally similar to what they’d see in a PG-13 movie.
Just like you might see some suggestive content or hear some strong language in a PG-13 movie, teens may occasionally see something like that on Instagram - but we’re going to keep doing all we can to keep those instances as rare as possible. We recognize no system is perfect, and we’re committed to improving over time.
For Everyone: We remove content that violates our Community Standards. We also have a higher bar for the type of content we recommend.
Our Community Standards outline the kinds of content we don’t allow on Facebook and Instagram. We remove this content completely for everyone - including teens - whenever we become aware of it. When we remove content that breaks these rules, we may also apply a strike to the account that shared it, and we disable accounts that repeatedly or severely violate our policies. These policies are designed to protect everyone in our community, including teens.
Examples of the types of content we remove for everyone:
We don’t allow content that abuses or exploits people, such as posts encouraging non-consensual sexual activity, or posts sharing or asking for child sexual abuse material. For example, we would remove:
❌ A post offering prostitution or asking someone to send them pornography.
❌ A photo of someone in an intimate or sexual act, shared without their consent.
❌ A post offering or asking for child sexual abuse material or imagery of nude children.
We don’t allow content that encourages, glorifies or promotes suicide, self-injury or eating disorders, while still giving people space to talk about their own experiences and seek support. For example, we would remove:
❌ A post that speaks positively about suicide, self-injury, or eating disorders.
❌ A Story that shows graphic self-injury.
❌ A comment mocking someone for having an eating disorder.
We don’t allow content that could create an environment of intimidation or exclusion, including bullying and harassment. For example, we would remove:
❌ A post using dehumanizing speech against people based on their race, religion, ethnicity, gender identity, or sexual orientation.
❌ A comment mocking the victims of sexual assault.
❌ A post that threatens to release personally identifiable information (like a passport number) or incites harassment towards someone.
We don’t allow buying, selling, or trading certain restricted goods and services on our platforms. We also don’t allow people to promote certain types of substances or provide instructions on how to use them. For example, we would remove:
❌ A Story offering to buy, sell, or trade various types of drugs.
❌ A post looking to buy, sell, or trade tobacco, nicotine, or alcohol.
❌ A comment offering 3D-printed gun parts.
We make some exceptions for legitimate businesses that legally offer certain kinds of restricted goods.
We don’t allow extremely graphic content, or content that may pose a threat to personal safety, such as threats of violence against people. For example, we would remove:
❌ A post threatening to kill or kidnap another person, or one encouraging others to commit violence.
❌ A graphic video showing a person being maimed or severely burned.
We make certain exceptions for graphic content in medical contexts or when shared to raise awareness. Where we do allow graphic content, we typically cover it with a warning screen to let people know the content may be sensitive before they click on it.
We restrict the display of nudity and sexual activity, as well as sexually explicit language. For example, we would remove:
❌ An image or video containing nudity or explicit sexual activity, including when generated by AI.
❌ A Story with explicit or graphic descriptions of sexual encounters (unless it’s humorous or satirical).
We understand that nudity can be shared for a variety of reasons, including as a form of protest, to raise awareness about a cause or for educational or medical reasons. Where appropriate and such intent is clear, we make allowances for this content.
For full details on the content we remove, see our Community Standards.
🛡️ Content We Avoid Recommending
We make recommendations in places like Instagram’s Reels and Explore or Facebook’s Feed to help people discover new content they may be interested in. We have guidelines about the kind of content that can be recommended, and we avoid recommending content that is low-quality, objectionable, or particularly sensitive - even when it isn’t severe enough to remove. This is because we think there should be stricter standards for showing people content from accounts they haven’t chosen to follow.
⚠️ Additional Content We Hide for Teens
Teens: We offer extra protections for teens under 18. We hide more content from teens, beyond what we remove for everyone, and avoid recommending to them some content that we may still recommend to adults. These policies are guided by PG-13 ratings and are designed to help prevent teens from seeing content they wouldn’t see in a PG-13 movie. On Instagram, teens or their parents can also opt in to a more restrictive setting called Limited Content, and teens can only access a third, more permissive setting - called More Content - with a parent’s permission.
13+ (the default): The 13+ content setting is guided by PG-13 movie ratings and is on for teens by default. We’ve worked with youth experts around the world to understand what types of content may be appropriate for adults but too mature for teens under 18, and we reviewed these age-appropriate guidelines against PG-13 movie ratings. In this setting, we’ll hide additional content from teens beyond what we already remove for everyone. This means that, while adults will still have access to this content, teens won’t be able to see or interact with it, even when it’s shared in their Feed or Stories by an account they follow. We’ll also prevent teens from discovering, following and interacting with accounts that primarily share content that’s not age-appropriate.
Here are some of the types of content we’ll hide for teens in this default setting:
We hide certain suicide- and self-injury-related content to protect teens from potentially distressing or sensitive material. For example, we would age-restrict:
⚠️ A post where someone is describing their own personal experiences with suicide, self-injury or eating disorders, except in the context of recovery.
⚠️ A photo or video showing people in a hospital engaging in euthanasia or assisted suicide.
⚠️ A Story where someone is talking about their own extreme weight loss behavior.
We hide content from teens that could influence them to engage in potentially harmful activities. For example, we would age-restrict:
⚠️ A post offering to sell tobacco, nicotine products, alcohol, or firearms when shared by a legitimate brick-and-mortar business.
⚠️ A Story encouraging people to take psychedelic drugs (also known as entheogens) or cannabis products.
⚠️ Images with recipes for alcoholic drinks.
⚠️ A post selling weight loss products or services.
We hide most graphic and disturbing imagery from teens, even if we’d allow it behind a warning screen for adults. For example, we would age-restrict:
⚠️ A photo of a severely burned person, which we’d cover with a warning screen for adults.
⚠️ A photo or video of shootings, brutality, or deadly car crashes.
We hide additional content from teens that doesn’t contain explicit nudity or sexual activity but could be considered sexually suggestive. For example, we would age-restrict:
⚠️ A post with near nudity, such as nudity covered by a digital overlay.
⚠️ A photo zoomed in on someone’s buttocks.
⚠️ A Story with sexually suggestive language describing sexual encounters, including the use of sexual metaphors.
For full details on the content we age-restrict, see our Community Standards.
We’ll also avoid recommending more types of content to teens, even if it’s in line with PG-13 ratings. A lot of the content we avoid recommending to adults is already hidden completely from teens, but we go further and avoid recommending additional types of content - like photos or videos that may be seen as implicitly sexual, content with strong swearing, or risky stunts like jumping from heights.
Limited Content: On Instagram, for parents who prefer extra controls, we’ve introduced a new, even more restrictive setting called Limited Content. When the Limited Content setting is turned on, we will apply our content filters more strongly to hide even more content from the Teen Account experience. We’ll also further limit Search results, and turn off teens’ ability to see, leave or receive comments under posts.
More Content: Teens can only switch to the ‘More Content’ setting with a parent’s permission, which parents can only provide via parental supervision. In this setting, teens will still benefit from automatic protections that hide more content from teens than we do for adults, and we will still remove content completely if it violates our Community Standards. However, teens may be recommended some content they wouldn’t be in the 13+ and Limited Content settings, and we won’t hide entire accounts that share age-inappropriate content. The More Content setting won’t provide the stricter content restrictions across Live, Comments, and Search that are part of the 13+ and Limited Content settings.
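Taken together, the three settings differ mainly in which protections are layered on top of baseline Community Standards enforcement, which removes violating content in every setting. Here’s a purely illustrative sketch of that relationship - the flag names and structure are hypothetical, not a description of Meta’s actual systems:

```python
# Purely illustrative sketch: hypothetical flag names, not Meta's actual
# implementation. Community Standards removals apply in every setting; the
# tiers differ only in the extra protections layered on top.

TEEN_SETTING_PROTECTIONS = {
    "Limited Content": {                     # strictest; opt-in
        "hide_pg13_restricted_content",      # content filters applied more strongly
        "stricter_recommendation_bar",
        "hide_age_inappropriate_accounts",
        "stricter_live_comments_search",
        "further_limited_search_results",
        "comments_turned_off",
    },
    "13+": {                                 # the default for teens under 18
        "hide_pg13_restricted_content",
        "stricter_recommendation_bar",
        "hide_age_inappropriate_accounts",
        "stricter_live_comments_search",
    },
    "More Content": {                        # requires a parent's permission
        "hide_pg13_restricted_content",      # still hides more than adults see
    },
}

def active_protections(setting: str) -> set[str]:
    """Return the hypothetical protection flags enabled for a setting."""
    return TEEN_SETTING_PROTECTIONS[setting]
```

The More Content row is a simplification: as described above, it keeps teen-level content hiding but drops the stricter recommendation bar, account-level hiding, and the extra Live, Comments, and Search restrictions.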
We work hard to identify content that breaks our rules, and we find most of the content we remove proactively using our technology before it’s reported to us. If you see something we missed, please help make our platforms a safer place by reporting it on either Facebook or Instagram. All reports are anonymous.
Our Community Standards contain full details on the kinds of content we remove and age-restrict for each policy area.
We’ve also developed technology that proactively identifies potentially suspicious adults - for example, an adult who has been repeatedly blocked or reported by teens, or who repeatedly searches for violating content. We won’t recommend suspicious adults to teens, and we won’t recommend teens to suspicious adults.
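As a rough sketch of how signals like these could gate recommendations - with invented thresholds and attribute names, not Meta’s actual detection technology - the two-way rule might look like this:

```python
# Hypothetical sketch of the "potentially suspicious adult" signals described
# above. Thresholds and attribute names are invented for illustration; this
# is not Meta's actual system.

def is_potentially_suspicious(account) -> bool:
    """Flag an adult account based on the example signals from the text."""
    return (
        not account.is_teen
        and (
            account.blocks_or_reports_by_teens >= 3         # repeatedly blocked or reported by teens
            or account.searches_for_violating_content >= 3  # repeatedly searches for violating content
        )
    )

def may_recommend(viewer, candidate) -> bool:
    """Suppress suggestions between teens and suspicious adults, both ways."""
    if viewer.is_teen and is_potentially_suspicious(candidate):
        return False
    if candidate.is_teen and is_potentially_suspicious(viewer):
        return False
    return True
```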
You can read more about the ways we help keep our community safe on our Safety Center, and the ways we support teens and families on our Family Center.